For something I'm writing -- I'm interested in examples of bad arguments which involve the application of mathematical theorems in non-mathematical contexts, e.g. folks who make theological arguments based on (what they take to be) Godel's theorem, or Bayesian arguments for creationism. (If necessary I'm willing to extend the net to physics, to include bad applications of the second law of thermodynamics or the Uncertainty Principle, if you know any really amusing ones.)

Do you want examples where they use the theorem correctly, but the real-world context violates one of the assumptions (e.g., ignoring that the Earth is not thermodynamically a closed system), or that they just misunderstand the theorem itself? – Scott McKuen Apr 12 '11 at 14:59

Does "applying the Banach-Tarski paradox to an orange" qualify? – Someone Apr 12 '11 at 15:14

Rather than Gödel's incompleteness theorem applied to theological arguments, there is Gödel's ontological proof of the existence of God, which is more likely to be misapplied... – godelian Apr 12 '11 at 15:19

I feel like most people misapply Godel's incompleteness theorem. – Sean Tilson Apr 12 '11 at 15:25

Perhaps it was my being ignorant of algebraic topology as a kid, but splitting my sandwich with my brother did not seem to be fair! – F Zaldivar Apr 13 '11 at 0:45

29 Answers

A tragic example of this is the case People v. Collins, in which a prosecutor asked a mathematician (as an expert witness) a question of the form, "assuming these events are independent, what is the probability that...". The events were obviously not independent -- things like "drives a convertible", "has a Caucasian girlfriend", "girlfriend has blond hair", and some others. The mathematician answered the misleading question correctly (assuming independence), and the defendant went to jail.
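The arithmetic of the prosecution's error is easy to reproduce. A minimal sketch in Python: the per-characteristic frequencies below are roughly the figures presented at trial (quoted from memory, so treat them as illustrative rather than the court record), and the population size is a made-up number for illustration.

```python
from fractions import Fraction

# Per-characteristic frequencies roughly as presented at trial
# (quoted from memory; illustrative, not the actual court record).
freqs = [
    Fraction(1, 10),    # partly yellow automobile
    Fraction(1, 4),     # man with mustache
    Fraction(1, 10),    # girl with ponytail
    Fraction(1, 3),     # girl with blond hair
    Fraction(1, 10),    # black man with beard
    Fraction(1, 1000),  # interracial couple in car
]

# The prosecution's step: multiply as if the traits were independent.
p_match = Fraction(1, 1)
for p in freqs:
    p_match *= p
print(p_match)  # 1/12000000 -- but only under the independence assumption

# Even granting that figure, the relevant question is different: among
# N couples, how likely is it that at least one OTHER couple matches?
N = 1_000_000  # hypothetical number of couples in the area
p_other = 1 - (1 - float(p_match)) ** N
print(f"{p_other:.3f}")
```

The first number is what the jury heard; the second is the question that actually matters for identifying a defendant, and it is not negligible even with these made-up inputs.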
The California Supreme Court later overturned the verdict, in a decision that shows a surprisingly solid understanding of probability. This case could be required reading (the Supreme Court decision, anyway) in any introduction to probability course. It has counting, independence, and conditional probability all involved in a fundamental way.

And one of the judges dissented?! – Mariano Suárez-Alvarez Apr 13 '11 at 18:57

Very interesting! I'd heard of the case before, but I never knew the verdict was overturned. Arguably, the best-known (and most awful) example of this kind of thing is the Sally Clark case. The Royal Statistical Society wrote a very good public statement about it. The statement is posted on the RSS web site, but unfortunately the link seems to be broken. – Vectornaut Apr 13 '11 at 23:03

From the Supreme Court's decision: "Mathematics, a veritable sorcerer in our computerized society, while assisting the trier of fact in the search for truth, must not cast a spell over him." – Beren Sanders May 31 '11 at 19:42

Eric, maybe you mean $1-(1-1/10000)^{20000}$? – Kevin H. Lin Jun 1 '11 at 23:49

It is worth looking at the multiple testing example under the Prosecutor's Fallacy:… This explains another way that data in the court room can be misinterpreted. – Eric Naslund Jun 2 '11 at 1:11

Here are some examples, ranging from the comical to the debatable.

Comical: Pretty much any mention of mathematics in Jacques Lacan.
To give you an idea, here is a typical passage:

Thus, by calculating that signification according to the algebraic method used here, namely $$\frac{S(\text{Signifier})}{s(\text{signified})} = s(\text{the statement})$$ with $S=(-1)$ produces $s=\sqrt{-1}$ [...] Thus the erectile organ comes to symbolize the place of jouissance, not in itself, or even in the form of an image, but as a part lacking in the desired image: that is why it is equivalent to the $\sqrt{-1}$ of the signification produced above, of the jouissance that it restores by the coefficient of its statement to the function of the lack of signifier $-1$. [Lacan (1971); seminar held in 1960.]

Interesting/Rigorous but still quite a stretch: The work of Alain Badiou on set theory, although more rigorous and advanced, also provides a very good resource for misapplications of formal mathematics in order to draw non-mathematical conclusions, cf. especially Being and Event, his magnum opus, in which he uses set theory to support the tagline that 'Mathematics is Ontology'. Unlike Lacan, Badiou at least knows his stuff when it comes to the statement and development of formal results. That said, his interpretations and conclusions are often huge stretches. Here's a related MO post on Badiou: Badiou and Mathematics

Interesting/Philosophy: I don't know if you'd call these misapplications, but they are certainly attempts to use formal results to draw philosophical conclusions that are not in any formal way entailed by those results. Here are some examples:

• Michael Dummett on how Gödel incompleteness might/might not threaten the thesis that meaning is use (philosophical anti-realism): The philosophical significance of Gödel's theorem, M. Dummett, Ratio, 1963

• Hilary Putnam on how the Löwenheim-Skolem Theorem proves that reference is underdetermined by all possible theoretical or operational constraints (i.e.
that the meaning of our mathematical vocabulary can never be accurately understood in order to fix an intended model): Pretty much anything philosophical that has been written about the so-called Skolem Paradox involves formal-to-informal entailments.

• Roger Penrose in The Emperor's New Mind, again using Godel to draw conclusions about consciousness and mechanism

Great answer -- and your mention of Lacan reminds me that, while it's not quite about a THEOREM, the Lang-Huntington affair is certainly a good example of what I'm looking for! – JSE Apr 12 '11 at 15:54

Lacan featured prominently in Sokal's hoax. – Steve Huntsman Apr 12 '11 at 18:04

... Given how frequent misunderstandings of words and even whole passages of text are, I would not be too surprised if one day somebody manages to misunderstand a whole language. – darij grinberg Apr 13 '11 at 8:02

I've looked at Badiou's work and I'd put it in your third category of "Interesting/Philosophy." These don't strike me as misapplications. It sounds to me that you're tacitly assuming that it is always illegitimate to use formal results to support a philosophical thesis unless the formal results formally entail the thesis, but this is itself a controversial philosophical stance. I'd reserve the term "misapplication" for situations where the applier misunderstands the mathematical result, or gets the technical details wrong, or claims that something follows formally when it doesn't. – Timothy Chow Apr 13 '11 at 21:55

One of my colleagues, Lucien Guillou, told me that he was asked to give lessons in topology, especially knot theory, to psychoanalysts of the Lacanian sort. One of the reasons for their interest was the Borromean rings, which they took for an illustration of the link between body, spirit and soul. Take one out and the remaining two fall apart.
– Roland Bacher Jun 1 '11 at 6:15

My favourite in this direction is an application of Noether's theorem to public relations: Sha, "Noether's Theorem: The Science of Symmetry and the Law of Conservation", J. Public Relations Research, 16 (2004), 391-416. I quote from the abstract:

Noether's Theorem shows that symmetry-or change-can only exist simultaneously with conservation or invariance. For public relations, the implication is that an organization can behave "symmetrically" while maintaining certain beliefs, principles, or purposes that will never be relinquished. A case study of the Democratic Progressive Party (DPP) on Taiwan using participant observation (13 months), qualitative interviews (n = 22), and a quantitative survey (n = 166; response rate = 28.77%) showed that the organization exhibited symmetry by reaching out to external publics, engaging in dialogue with them, and expressing openness regarding Taiwan independence. Simultaneously, the party conserved its interests in gaining power and establishing an independent Taiwan. Recent electoral victories of the DPP suggest the effectiveness of symmetry-conservation for public relations practice.

This is amazing. I find it hard to believe it's not a joke, but the paper actually seems to have been cited a few times, with no indication that it is being read as anything other than at least a serious analogy with physics (for example, in "New media and public relations" by Sandra Duhé; see – Henry Cohn Apr 12 '11 at 21:08

I agree, this "application" is extremely laughable. I was expecting a million answers about Godel's theorems, but Noether's theorem applied to public relations? grooooaaaaaan..... I guess the problem is in thinking that our definition of "symmetry" is the same as their definition of "symmetry". – William Feb 24 '12 at 2:28

From Bey-Ling Sha's biography at San Diego State University: "Dr.
Sha's primary research program combines theories of mathematical physics with public relations scholarship. Her other research areas include international public relations, activism, cultural identity, gender, and health communication. Her research has been published in Journal of Public Relations Research, Public Relations Review, and Journal of Promotion Management, as well as various book chapters." Needless to say, she has a PhD in mass communication, not mathematical physics. – Tom LaGatta Apr 29 '12 at 6:36

This is actually very typical for social sciences courses. I've seen a number of required books with titles like "quantum leadership" that start by misquoting physics or math and then pretend to apply the misquoted concepts. – Michael Jun 12 '14 at 14:38

This is not an answer, just a very long comment. Mostly I am stunned by the answers given.

(1) I'm surprised to see Lacan featured as the main example. What I see in these quotes is an attempt to formalise the human condition. Is it laughable? Yes! But no more so than 16th-century physics, and it was widely taken as such. I'm pretty sure 99.9% of the human population never heard of Lacan and was never affected by his thoughts on maths in any way.

(2) If I were in the audience for a talk on "Theorems misapplied to non-mathematical contexts" I'd selfishly want to see examples that affected me or someone I know. Amazingly, none of the answers given until now mentioned the field of ECONOMICS. Some people in this field are passing off opinions (often political) as mathematical facts every day, and this translates into policies that have influence on the lives of millions (if not billions) of people.

Just an example. When the subprime mortgage bubble exploded, we heard most banks and insurance companies were shocked because "their experts(*) said the price of houses couldn't go down everywhere in the US at the same time".
In fancier terms, it was widely believed that the use of Collateralized Debt Obligations (CDO) and Credit Default Swaps (CDS) was minimizing the risk of default, while it was actually just spreading and increasing it. I am very ignorant in mathematical finance, but I'd like someone to try and explain to me which theorems that was based on. I'm pretty sure this should go straight to the top of the list.

(*) I used the word "experts" as a generic word for "economists and mathematicians employed by financial institutions".

The "experts" also more or less assumed that mortgage failures were events independent from each other... – Thierry Zell Jun 1 '11 at 0:14

Not sure where you heard that from, but I've never heard of someone in credit products assuming defaults are uncorrelated. But I feel this is getting off-topic. People build models based on assumptions that approximate reality; sometimes the assumptions are good and sometimes not. It would be unfair to say that is misapplying math -- if anything, math is the science of making conclusions from assumptions! I don't think people claimed to have theorems saying mortgage delinquencies couldn't double, but they might have had models assigning 0 probability to that due to the assumptions. – Luke Gustafson Jun 1 '11 at 5:58

@Luke: I suggest you try to explain to your Greek friends the subtle distinction you drew in the last sentence of your comment. You know, like, in terms of actual consequences for their real lives... – Did Jun 1 '11 at 6:46

But Didier Piau, while it might be true that the distinction Luke Gustafson makes does not have much practical influence, I think it does make quite a difference for the question at hand. Consider something else, made up: if a bridge collapses because a mathematician used the wrong PDE, or an ill-suited solver, or whatever, to compute the statics, it is on-topic here; if the mathematician
was told the max load will be 1000 tons and s/he should compute with a margin of safety for 1200 t, but then for some reason there were 1500 t on that bridge and it collapsed, then it seems off-topic here. – user9072 Jun 2 '11 at 13:20

A couple of misapplications of physics come to mind:

Conservation of angular momentum does not mean what people think it means. If you have an object spinning on a flat surface, it can't turn around without outside forces, right? Wrong: the rattleback toy does this (video).

The Coriolis effect is real, but the idea that this has something to do with the direction water spins down the drain is a false urban legend.

That's neat. Thanks for the link to the rattleback. – Willie Wong Apr 12 '11 at 20:46

Evidently I don't understand conservation of angular momentum. I am very confused. – Harry Altman Apr 12 '11 at 23:29

Well, perhaps my presentation was misleading. Rolling is complicated, and there is a transfer of angular momentum from the rattleback to the surface. – Douglas Zare Apr 13 '11 at 1:30

On a related note, this reminds me of R. Montgomery's paper "On the Gauge Theory of a Falling Cat", which uses some fairly high-powered mathematics to show how it is possible for a falling object (i.e. a cat) to spin around and reorient itself without any outside forces acting on it. The same technique is actually used by satellites to reposition themselves in outer space! – Mikola Apr 13 '11 at 18:41

I have been taught that friction and normal reaction are both outside forces, and that the sum of one vertical vector and one horizontal one is just anything you want, so if applied together away from the center of gravity, these two can create any torque you fancy. Of course, the difficulty is in making them coordinate in an interesting way, but that has little to do with any conservation laws. – fedja Jun 24 '14 at 15:08

There are very many examples of the misuse of probability arguments in legal cases.
See e.g. the Prosecutor's fallacy.

The link is broken. – John Bentin Apr 12 '11 at 17:29

I fixed the link. – Nate Eldredge Apr 12 '11 at 17:55

So there is no more chance than one in a million that a person has these characteristics -- and a person of this kind committed the crime. So it must be the person in front of you; how could it possibly be anyone else? But the population of the nation is 60 million. What distinguishes this person from the other 59 (on average) who have the same profile? – Mark Bennet Apr 12 '11 at 21:56

As you mentioned, an often misapplied mathematical statement is Heisenberg's uncertainty principle, which for me, as a reader of Chriss-Ginzburg, is the purely mathematical statement that any subvariety of classical phase space ($\mathrm{Specm}(\mathrm{gr}A)$) that arises from a noncommutative system of equations (an ideal in $A$) is coisotropic. The Encyclopedia of Science and Religion states:

There has also been an interest in using quantum uncertainty, and the breakdown of rigid determinism that it ensures, to defend the concept of free will and to provide a channel for divine action in the world in the face of unbreakable laws of nature.

I've come across this often in religious discourse: the claim that the uncertainty principle states that "everything is uncertain" and that therefore the laws of nature are subject to the decisions of G-d. I've heard it freely confused with the "law of relativity", which apparently states that "everything is relative". Moreover, some anthropologists cite Heisenberg's uncertainty principle as follows:

In social situations, too, the simple presence of an observer - an anthropologist at a tribal ceremony, a news reporter at a schoolboard meeting, or a TV camera in a courtroom - generally influences the course of events to some uncertain degree as they are recorded.
The distortion that results from measurement or observation is called the Heisenberg Effect, as in “No one does or can do the same thing on stage that he does unobserved...”

This answer does not make it obvious, to me, that the uncertainty principle cannot be applied to religion in the way suggested. I would guess that, at least, most attempts to make this application are flawed, but surely the physically interpreted version, not the abstract mathematical one, is the uncertainty principle in question? – Charles Staats Apr 13 '11 at 20:32

I don't see why $\Delta x \Delta p \geq \frac{\hbar}{2}$ would have any more application to religion than the formulation which I gave... I would doubt that most non-scientists are familiar with either formulation, nor do they mean either formulation when they cite it (although it would be entertaining if they did). Rather, it's turned into "everything is uncertain" or "the presence of an observer influences what is being observed". – Daniel Moskovich Apr 13 '11 at 22:39

Daniel, I voted down this answer for two reasons: (1) As Charles tried to tell you, your first example is, at best, an example of misuse of a physical, not mathematical, result, and therefore doesn't answer the question as asked. (2) But actually, this is not a misuse at all, since the question of determinism is relevant to the old philosophical/theological debate on free will. Indeed, for centuries, the main argument against free will was based on the syllogism "if the world is deterministic, free will is impossible", which was roughly justified as follows: "if the present state... – Joël Oct 10 '11 at 22:45

...of the world determines the future state, there is nothing the will can change about the future". This line of reasoning was used by both scientific and religious people, with the determinism of Newton's laws of physics as one of the ways to justify that the world is indeed deterministic.
Now the fact that the formulation of Quantum Mechanics is not deterministic surely undermines this argument. – Joël Oct 10 '11 at 22:49

The uncertainty principle doesn't have anything to do with determinism (or the lack thereof)! It just implies our inability to measure with arbitrary accuracy two quantities at the same time. Quantum mechanics is of course "non-deterministic", but non-determinism is built into the theory: it is not a consequence of the uncertainty principle. – Qfwfq Dec 24 '11 at 22:42

The original question, and several of the answers, refer to misuse of Godel's work, but with very few specific citations. For these, I would suggest Torkel Franzen's book, Godel's Theorem: An Incomplete Guide to its Use and Abuse.

Oh yeah! Should have said I have Franzen's book at hand. I recommend it. – JSE Apr 13 '11 at 1:28

Franzen's book is excellent and a quick read: it is not an exhaustive list of the abuses in question (or it would be longer!), but it does cover some common misconceptions and high-profile cases (e.g. Penrose). More importantly, it does an excellent job of going straight to the problem, and makes the subtleties of the technique quite accessible to a non-logician but mathematically sophisticated reader. – Thierry Zell Apr 13 '11 at 1:34

I was more referring to misuse in conversation with artsy people who are in love with "What the Bleep do we know?" – Sean Tilson Apr 14 '11 at 4:33

Is the book incomplete as a consequence of Godel's theorem? – Asaf Karagila Jan 7 '12 at 14:10

"Therefore, socialist economy is impossible, in every sense of the word."

Robert Murphy comes to this conclusion in Cantor's Diagonal Argument: An Extension to the Socialist Calculation Debate.$^1$ The debate is over whether a Central Planning Board can, even in theory, correctly price goods and services, as it is assumed a market economy can.
Socialists such as Dickinson argued that a market economy can, in principle, be simulated by the Board, even if it means solving a large system of simultaneous equations. Hayek, on behalf of the Austrians, agreed, yet maintained the number of equations—presumably one for each product and potential product—is clearly too large in practice. Both sides claimed victory.

In the cited article, the author takes the ball from Hayek and carries it across the goal line: after a decent three-page explanation of the diagonal argument, Murphy concludes the Planning Board's task would not merely be impractical, but fully impossible, because of the requirement to publish an uncountably infinite list of prices.

I suppose if one started with the assumption that there are (at least) countably infinitely many products/services $p_1, p_2, \dots$ and also agreed that any possible subset of these products is again a product itself, the price of which is not necessarily the sum of the component prices (let's ignore issues of convergence!), then one could conclude using Cantor's Theorem ($|2^S|>|S|$) that there are an uncountable number of products the Board must "list". But I'm not sure why, if we take the listing process literally, it matters how large the infinity is.

The key thing about the market is that nobody needs to know all the money prices but they are consistent (each contains information about all), or there is a thermostat negative feedback that rapidly converges to such a state. Hayek, and especially his opponents, in particular, were wrong regarding solving market simultaneous equations absent an unbiased common medium of exchange. Behaviorally, value is known to be not a function, an invariant in the nonclosed category of all partially ordered sets of words monotonically transformed. And factors of production are not in the domain of ... – Guido Jorg Apr 21 '15 at 10:17
They don't satisfy wants directly and values of possibly consumed goods correspond to concrete wants one to one satisfied or anticipated to be satisfied. Without a perfectly liquid consumption good with an unbiased price in each goods exchange pair it is present and relative scarcity being used to map quantities of it to factors of production, no value proxy for factors of production exists and they cannot be valued. In which case costs exist but are unknown. The equations to solve would be unknown. Which is a problem, for ends don't justify means, foregone ends (costs) exist. – Guido Jorg Apr 21 '15 at 10:28 If values of factors of production are unknown, they cannot be compared with values of consumed goods by anybody, and which quantities of each are to be produced to cause least dissatisfaction, given that resources are insufficient to produce arbitrary quantities of each all at once, becomes unknown and unknowable. Listed prices would be correct only be accident if ever. Assuming an unbiased money exists (so it's not a socialist economy) the solving of equations explicitly is complicated ... – Guido Jorg Apr 21 '15 at 10:48 by the fact that, as Milne (1949) showed, these types of equations are typically unstable, a small change of parameters leads to orders of magnitude different solutions. Which means that if measurement or estimation errors occur, the validity of solutions cannot be confirmed. An calculated price due mostly to error in measuring conditions cannot be distinguished from a solution of completely different conditions. The reason the linked authors thought it matters how large infinity is, I guess, because perhaps they imagined Russell-type hypercomputers possibly doing the calculations? – Guido Jorg Apr 21 '15 at 10:56 The article has a problem: there are no calculations to solve in a socialist economy. 
The authors argue, apparently, that socialist economies cannot exist because they could never make sufficiently many arbitrary guesses and list them all, even with a hypercomputer, and so cannot be called socialist economies: they didn't plan absolutely everything, implying no socialist economies can exist. Which is true, but trivially so. One defines socialisms as planned almost everywhere. Socialist economies have a problem because they arbitrarily guess their productions, not a problem listing their guesses. – Guido Jorg Apr 21 '15 at 11:10

The "No free lunch" (NFL) theorem from mathematical optimization was used by William Dembski to disprove the Darwinian theory of evolution. (The relevance of the NFL theorem to evolution was proposed earlier by Stuart Kauffman.) Olle Haggstrom wrote a paper debunking Dembski's argument. (Here is an early version with stronger rhetoric.)

Arrow's theorem is often glossed as "there is no good voting system".

Press's paper Strong profiling is not mathematically optimal for discovering rare malfeasors has been misinterpreted by the popular press as a mathematical endorsement of certain politics, though that's perhaps due in part to the intentional framing of the problem by Press.

Goedel's theorem is misapplied arguably more than it is used properly.

How exactly is that a misapplication of Arrow's theorem? It is certainly a valid interpretation. – Michael Greinecker Feb 24 '12 at 3:14

@Michael: I haven't time to go into specifics at the moment, but a much better gloss of Arrow's theorem is "IIA is an unreasonable condition in ordinal voting systems"; consider the 3-voter case and any majoritarian system. – Charles Feb 24 '12 at 4:31

Alan Sokal's book deserves some mention if we are talking about misuse of theorems.

This isn't exactly what you asked for, but I find it so amusing I could not resist.
The Indiana $\pi$ bill, when they almost passed a bill claiming that $\pi=3.2$, in order to be able to square the circle.

The bill did not say pi is 3.2; it was actually far too incomprehensible to infer any specific value of pi. – Michael Renardy Apr 12 '11 at 18:11

One interpretation I read was pi = 9. – Allen Knutson Apr 13 '11 at 0:44

One interesting thing about this bill is that it was not introduced to legislate on what the value of $\pi$ should be (an easy way to misunderstand the story), but rather in order to copyright a method to square the circle for exclusive use, free of charge, by the State of Indiana. What the legislature thought they could use this for escapes me. – Thierry Zell Apr 13 '11 at 1:44

Apparently (according to Wikipedia) Goodwin had also proven such "truths" as trisecting a given angle, and had them published in the American Mathematical Monthly, with the disclaimer 'published by request of the author.' Although this happened in the late 1800s, it makes me skeptical of ALL published mathematical results... – William Feb 24 '12 at 2:42

In order to baffle the uninitiated, some authors interpret the Banach-Tarski paradox (stating that "it is possible to decompose a ball into five pieces which can be reassembled by rigid motions to form two balls of the same size as the original", cf.) in an obviously false way, as if it could be applied to physical objects. E.g. Reuben Hersh writes (Reuben Hersh, "What Is Mathematics, Really?", p. 255): "Stefan Banach and Alfred Tarski proved, using the axiom of choice, that it's possible to divide a pea (or a grape or a marshmallow) into 5 pieces such that the pieces can be moved around (translated and rotated) to have volume greater than the sun." Clearly, this formulation is very much misleading, since it suggests that the paradox can be applied to physical objects, which is obviously false.
Indeed, the construction is such that the ball is divided into non-measurable parts and, clearly, there are no physical objects corresponding to non-measurable sets.

Why is it clear that there are no physical objects corresponding to non-measurable sets? For the same reason that "Hilbert Space" is a purely abstract mathematical construct with no utility in physics? Oh, wait... – Igor Rivin May 31 '11 at 19:51

The quotation from Hersh also mixes two versions of the Banach-Tarski theorem. The number 5 of pieces is, if I remember correctly, for making two balls the same size as the original. To get from a pea to the sun, more pieces would be needed (but still only finitely many). – Andreas Blass May 31 '11 at 20:42

I'm really bothered by the wording "applying the theorem to physical objects": a theorem is a mathematical statement; it can be applied to a mathematical object in the course of a proof, but talking about applying it to physical objects does not even begin to make sense. – Thierry Zell Jun 1 '11 at 0:01

@IgorRivin: Because there is no constructive way of building a non-measurable set. – Martin Hairer Jun 25 '14 at 20:58

In his book Everybody for Everybody, Samuel A. Nigro argues that Gödel's theorems not only cast doubt on the theory of evolution, but prove the doctrine of original sin, the need for sacrament and penance, and that there is a future eternity.

This could be an unfair example, since I don't know the text myself. All I can say is that my skepticism is aroused just by the title of

• Guerino Mazzola, The Topos of Music: Geometric Logic of Concepts, Theory, and Performance (Birkhäuser, 2002)

(in other words, topos theory applied to music theory). At least one MO participant (Mikael Vejdemo Johansson) has tried to read this book and came away feeling skeptical, according to his remarks here.
I'd be interested in hearing other reactions from people who have taken a stab at it.

I waited a while to say something on this as I do not really feel qualified, but as nobody else said anything so far: in view of other, traditional, mathematical work of the author, it seems highly likely to me that if this deserves to be on the list at all, then only in the category 'math is solid, but for some reason inapplicable/not relevant to the application.' Now, whether the latter is the case or not is perhaps hard to tell, as (I assume) 'theory of music' is not a 'hard' subject with a clear right or wrong. Based on a talk I heard years ago, I remember that using this theory one... – user9072 Apr 13 '11 at 11:46

...can make concrete assertions. Specifically (the details I remember are vague and my general musical knowledge is insufficient), there was some investigation carried out whether a certain sequence A of notes constitutes the motif (in the musical sense) of a certain well-known piece of music, or whether it is a sequence A'. Using this theory an answer was given; somebody with a music background in the audience disagreed with this answer, but it was my understanding that regarding this question there is debate in the music community; I guess somebody else might have agreed. So, not sure what this tells. – user9072 Apr 13 '11 at 11:49

Thanks, unknown. I would love to get my hands on the book, even if my knowledge of music is not up to the task of deciding whether this is a worthwhile investigation. I am not challenging the mathematical competence of the author, by the way. Hopefully Mikael will see this sometime and share some of his thoughts on the subject (he wrote a review that was rejected for being overly harsh, even if admired within the publishing office). – Todd Trimble Apr 13 '11 at 12:45

You might find the book on the internet... – Michael Bächtold May 31 '11 at 19:40

Thanks, Michael. In fact, someone I know sent me an internet copy.
– Todd Trimble May 31 '11 at 21:57

This is a wonderful and fascinating still life by Juan Sanchez Cotán. It is thought by many art historians that Cotán used a mathematical formula to determine the heights at which the various items would appear. For all I know this may be the case -- it would seem only appropriate given the name of the artist -- but I once read part of a book by a very respectable art historian (whose name I have maddeningly forgotten, but I'm working on it) who said what the formula was. His evidence was just the picture itself, not any surviving record of how it was painted. But of course, given that the heights of the items are not (anything like) precisely determined, it is clear that any number of curves could be declared to fit. This is not exactly misuse of a theorem, but it was certainly misuse of mathematics, similar to finding the golden ratio everywhere but a bit more sophisticated.

Added: I've tracked it down now. The critic is Norman Bryson, and he says this: "In relation to the quince, the cabbage appears to come forward slightly; the melon is further forward than the quince, the melon slice projects out beyond the ledge, and the cucumber overhangs it still further. The arc is therefore not on the same plane as its co-ordinates, it curves in three dimensions: it is a true hyperbola, of the type produced when a cone is viewed in oblique section." I haven't found more of the quotation, but I seem to remember that it was quite important to Bryson that it really was a hyperbola and not, say, an exponential decay.

(As a matter of fact, looking at the picture again I am not convinced that the items form a nice curve of any kind: the cabbage is too far to the left and too near to being directly under the apple. And the relationship of the string of the cabbage with the leaves of the apple leads me to doubt whether the curve lies in an oblique plane, or indeed any plane, as he suggests.)
Heard in high-school History class, no reference unfortunately: in the early 20th century, someone published a monograph on the dimensions of a certain small building; derived many important constants from basic operations on said dimensions, showing the intent of the architects. The building in question was... ...a public urinal! – Thierry Zell Jun 1 '11 at 0:12

Sokal once again, with Brown and Friedman, wrote this paper: The complex dynamics of wishful thinking: The critical positivity ratio (arXiv version). The story behind it involves Nick Brown, "who began a part-time psychology course in his 50s – and ended up taking on America's academic establishment", according to Andrew Anthony in the Guardian.

The whole "transformation" and "network centric warfare" push in the US Department of Defense last decade under Cebrowski and Rumsfeld invoked a heap of dubious interpretations and purported applications of nonlinear phenomena (perhaps most notably when 9/11 was referred to as a "system perturbation"). See here for an introductory overview.

This recent article is a striking example of debunking a misuse of mathematics in the social sciences. In short, some diversity scholars had claimed to prove a "theorem" that diverse groups of less able individuals outperform uniform groups of more able ones. Upon examination, it turns out that the theorem is

• wrong;
• trivial and contentless if corrected;
• saddled with assumptions that make it irrelevant for applications; in particular, they are not met in the numerical experiment featured in the paper to illustrate the theorem.

Remarkably, the authors use the expression "for any probability measure on (a finite set) $\Phi$ with full support, (something holds) with probability one", instead of simply saying that it holds for every element of $\Phi$. It seems to be a widely accepted result, published in PNAS with about 500 citations in Google Scholar.
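The point behind that last criticism can be checked directly: on a finite set, an event has probability one under a full-support measure exactly when it contains every element, so the probabilistic phrasing adds nothing. A minimal sketch (the set and the weights are invented purely for illustration):

```python
from fractions import Fraction

def prob(measure, event):
    """Probability of an event (a subset) under a measure on a finite set."""
    return sum(measure[x] for x in event)

# A full-support measure on a small finite set Phi (weights are arbitrary).
phi = {"a", "b", "c"}
mu = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}
assert all(mu[x] > 0 for x in phi)  # full support: every point has positive mass

# "Holds with probability one" coincides with "holds for every element":
for event in [{"a", "b", "c"}, {"a", "b"}, {"a"}, set()]:
    holds_with_prob_one = prob(mu, event) == 1
    holds_everywhere = event == phi
    assert holds_with_prob_one == holds_everywhere
```

Since the weights were arbitrary (subject only to full support), the equivalence is independent of the measure chosen, which is exactly why the "for any probability measure with full support" clause is vacuous.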
Not really a theorem, but amusing nonsense. Somebody (it was perhaps Sokal) told me about a psychoanalytical book based on set theory. The author wrote it in English and translated the French terminology "théorie des ensembles" as "Theory of the (w)hole". The book was later translated into French with the title "Théorie des t(r)ous".

I submit, to your consideration, this paper by Frank Tipler, Professor at Tulane University. The paper was published in the peer-reviewed Reports on Progress in Physics, volume 68 (2005), pages 897-964. Tipler's book "The Physics of Christianity" is based on this paper. Tipler invokes Gödel's theorem (see p. 905 onwards), Presburger arithmetic, Löwenheim-Skolem, Hales' proof of the Kepler conjecture (the latter only as an example, I believe), and various other mathematical results.

Wow, do you think he's making a bid for the Templeton Prize? – gowers Jun 2 '11 at 7:38

If he does, it's a slam dunk. – Alon Amit Jun 3 '11 at 3:50

It's physics rather than math, but surely this creative paper by Alan Sokal deserves mention.

A rare instance of Gödel-abuse in a published paper is "Bacterial wisdom, Gödel's theorem and creative genomic webs" by Eshel Ben-Jacob. Here, Gödel's theorem is used to prove that "a system cannot self-design another system which is more advanced than itself", with application to genomics.

I call your attention to where you will find the book Quantum Mechanics for Beginners; an Introduction, with the blurb:

Quantum Mechanics studies the peculiar world of the "ones"; those things in nature that cannot be divided. Since God is a One, and the Body of Christ as well, it shouldn't be surprising that the Bible discusses the "ones" at length, and this a few millennia before the emergence of Quantum Mechanics in the scientific arena.
To appreciate this unexpected dimension of the Bible, Abarim Publication's fun-filled crash course in Quantum Mechanics should be mandatory at every seminary.

Also, Chaos Theory for Beginners; an Introduction:

Chaos Theory looks at patterns and their reoccurrence in nature. Since Moses built the tabernacle -- which would turn into the temple, and later still into the Body of Christ -- after patterns he saw in heaven, Chaos Theory is a must for every serious student of the Bible.

One of the chapters is entitled "Agape and Gravity Live Together in Perfect Harmony". Fans of Stevie Wonder may see a pun there. There is also Scripture Theory for Beginners; an Introduction:

What Chaos Theory does with nature, Scripture Theory does with Scriptures: the identification of reoccurring patterns and their meanings. Especially interesting are those Biblical patterns that are identical to those found in high-energy physics.

Russian media provide a lot of amusing examples. Let me mention two: 1) (Perelman's proof of) the Poincaré conjecture leads to understanding the shape of the Universe; 2) (this is maybe what you mean in the post) it follows from Gödel's theorem that God does not exist.

I'm not altogether convinced by your first example: see… – Daniel Moskovich Apr 14 '11 at 0:24

This is my favourite example. From the text: the authors try to analyze how librarians work by making an analogy with fractals. Also, the obligatory reference to Heisenberg's uncertainty principle.

In the same vein as the Bayesian argument for creationism and misapplications of Gödel's incompleteness theorems, there are misapplications of the second law of thermodynamics against evolution of life ("undesigned", e.g. Darwinian or Lamarckian).
The second law is a mathematical consequence of the Hamilton and Schrödinger equations for reasonable Hamiltonians, in particular of the fundamental physical evolution equations, and also of simple statistical models (statistical ensembles). See Wikipedia. The argument is that life is complex, and that evolution implies a decrease in entropy (an increase in complexity), contradicting the second law. See for instance here. The flaw is that the Earth, where evolution occurs, is not an isolated system. If we consider the solar (or just Sun-Earth) system instead, a local loss of entropy on Earth is compensated by the entropy carried away by the radiation the Earth re-emits, at a far lower temperature than that of the sunlight it absorbs. For a recent anecdote (and a nice blog to add to your blogroll) see Retraction Watch.

This reference is an excellent parody of the so-called application of mathematics to economics and other social sciences (it purports to apply mathematics to theology).
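The entropy bookkeeping behind that rebuttal of the creationist second-law argument can be sketched in a few lines, using the elementary relation dS = Q/T for radiation exchanged at a given effective temperature. The figures below are standard textbook round numbers (absorbed solar power ~1.2 × 10^17 W, sunlight at ~5800 K, terrestrial re-emission at ~255 K), used only for illustration:

```python
# Entropy flux balance for the Earth, treating absorbed and emitted
# radiation as heat exchanged at the source's effective temperature
# (dS = Q/T). Round textbook figures, for illustration only.

P = 1.2e17       # W, solar power absorbed by the Earth (approx.)
T_sun = 5800.0   # K, effective temperature of the incoming sunlight
T_earth = 255.0  # K, effective temperature of the outgoing radiation

s_in = P / T_sun     # entropy flux carried in by sunlight (W/K)
s_out = P / T_earth  # entropy flux carried out by re-emitted radiation (W/K)

budget = s_out - s_in  # net entropy exported to space per second (W/K)
assert budget > 0      # Earth is a net entropy exporter
print(f"entropy exported: {budget:.2e} W/K (out/in ratio = {s_out/s_in:.1f})")
```

Because the same power leaves at a temperature roughly 23 times lower than it arrived, the outgoing radiation carries roughly 23 times the entropy of the incoming radiation, so the second law leaves ample room for local decreases of entropy on Earth.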
Religion and Science

First published Tue Feb 20, 2007; substantive revision Thu May 27, 2010

Modern western empirical science has surely been the most impressive intellectual development since the 16th century. Religion, of course, has been around for much longer, and is presently flourishing, perhaps as never before. (True, there is the thesis of secularism, according to which science and technology, on the one hand, and religion, on the other, are inversely related: as the former waxes, the latter wanes. Recent resurgences of religion and religious belief in many parts of the world, however, cast considerable doubt on this thesis.) The relation between these two great cultural forces has been tumultuous, many-faceted, and confusing. This entry will concentrate on the relation between science and the theistic religions: Christianity, Judaism, and Islam, where theism is the belief that there is an all-powerful, all-knowing, perfectly good immaterial person who has created the world, has created human beings 'in his own image,' and to whom we owe worship, obedience and allegiance. Most of what follows will also apply to monotheistic and henotheistic varieties of Buddhism and Hinduism.

There are many important issues and questions in this neighborhood; this entry concentrates on just a few. Perhaps the most salient question is whether the relation between religion and science is characterized by conflict or by concord. (Of course it is possible that there be both conflict and concord: conflict along certain dimensions, concord along others.) This question will be the central focus of what follows. Other important issues to be considered are the nature of religion, the nature of science, the epistemologies of science and, in particular, of religious belief, and the question how the latter figures into the (alleged or actual) conflict or concord between religion and science.

1. The Nature of Science and the Nature of Religion

1.1 Science

The first thing to say here is that it is exceedingly difficult to characterize these phenomena. First, consider science: what exactly is science? How can we characterize it? What are the necessary and sufficient conditions for a given inquiry or theory or claim to be scientific, a part of science? This is far from easy to say. Many conditions have been proposed as essential to science. According to Jacques Monod, "The cornerstone of the scientific method is the postulate that nature is objective…. In other words, the systematic denial that 'true' knowledge can be got by interpreting nature in terms of final causes …" (Monod 1971, 21, Monod's emphasis). In the 1930s, the eminent German chemist Walther Nernst claimed that science, by definition, requires an infinite universe; hence Big Bang theory, he said, isn't science (von Weizsäcker 1964, 151). Another proposed constraint: science can't involve moral judgments, or value judgments more generally.

Clearly there is an intimate connection between the nature of science and its aim, the conditions under which something is successful science. Some say the aim of science is explanation (whether or not this is put in the service of truth). Some (realists) say the aim of science is to produce true theories; others say the aim of science is to produce empirically adequate theories, whether or not they are true (van Fraassen 1980). Some say science can't deal with the subjective, but only with what is public and sharable (and thus reports of consciousness are a better subject for scientific study than consciousness itself). Some say that science can deal only with what is repeatable; others deny this.
In the furor over the teaching of "Intelligent Design" (ID) in public schools, some have said that scientific theories must be falsifiable, and, since the proposition that living things (rabbits, say) have been designed by one or more intelligent designers isn't falsifiable, ID isn't science. Others point out that many eminently scientific claims—for example, there are electrons—aren't falsifiable in isolation: what is falsifiable are whole theories about electrons. And while the proposition living things have been designed by an intelligent being is not falsifiable in isolation, the proposition an intelligent being has designed and created 800 lb. rabbits that live in Cleveland is clearly falsifiable (and false).

The first group may reply that this proposition about 800 lb. rabbits is really just equivalent to its empirical implications, i.e., to the proposition that there are 800 lb. rabbits that live in Cleveland, so that the bit about the designer really drops out. The second group may then retort that if so, the same must hold for theories about electrons; but then theories about electrons are really just equivalent to their empirical implications, so that electrons drop out.

Still others claim that science is constrained by 'methodological naturalism' (MN)—the idea that neither the data for a scientific investigation nor a scientific theory can properly refer to supernatural beings (God, angels, demons); thus one couldn't properly propose (as part of science) a theory according to which the recent outbreak of weird and irrational behavior in Washington D.C. is to be accounted for in terms of increased demonic behavior in that neighborhood.

How do we know that MN really is an essential constraint on science? Some claim that it is simply a matter of definition; thus Nancey Murphy: "… there is what we might call methodological atheism, which is by definition common to all natural science" (Murphy 2001, 464).
She continues: "This is simply the principle that scientific explanations are to be in terms of natural (not supernatural) entities and processes". Similarly for Michael Ruse: "The Creationists believe that the world started miraculously. But miracles lie outside of science, which by definition deals only with the natural, the repeatable, that which is governed by law" (Ruse 1982, 322). By definition of what? By definition of the term 'science', one supposes. But others then ask: what about the Big Bang? If it turns out to be unrepeatable, must we conclude that it can't be studied scientifically? And consider the claim that science, by definition, deals only with that which is governed by law—natural law, one supposes. Some empiricists (in particular, Bas van Fraassen) argue that there aren't any natural laws (but only regularities): if they are right, would it follow that there is nothing at all for science to study? Still further, while some people argue that MN is an essential constraint on science, others dispute this: but can a serious dispute be settled just by citing a definition?

Giving plausible necessary and sufficient conditions for science, therefore, is far from trivial; and many philosophers of science have given up on the "demarcation problem," the problem of proposing such conditions (Laudan 1988). Perhaps the best we can do is point to paradigmatic examples of science and paradigmatic examples of non-science.

Of course it may be a mistake to suppose that there is just one activity here, and just one aim. The sciences are enormously varied; there is the sort of activity that goes on in highly theoretical branches of physics (for example, investigating what happened during the first 10^−43 seconds, or trying to figure out how to subject string theory to empirical check). But there is also the sort of project exemplified by an attempt to learn how the population of touconderos has responded to the decimation of the Amazon jungle over the last 25 years.
In projects of the first kind it may make sense to think that what is desired is an empirically adequate theory, with the question of the truth of the theory at least temporarily bracketed. Not so in cases of the second kind; here nothing but the sober truth will do.

Similarly with methodological naturalism. Some scientific projects are clearly constrained by MN (see below); a condition for theoretical adequacy, for them, will certainly be that the account in question is naturalistic. But is MN just part of the very nature of science as such? According to Isaac Newton, often said to be the greatest scientist of all time, the orbits of the planets would decay into chaos without outside intervention; he therefore proposed that God periodically adjusted their orbits. While that hypothesis is one of which we no longer have need, is it clear that its addition to Newton's account of the motions of the planets resulted in something that wasn't science at all? That seems unduly harsh.

Perhaps we should think of the concept of science as one of those cluster concepts called to our attention by Thomas Aquinas and Ludwig Wittgenstein. Perhaps there are several quite different activities that go under the name 'science'; these activities are related to each other by similarity and analogy, but there is no one single activity which is just science as such. There are projects for which the criterion of success involves producing true theories; there are others where the criterion of success involves producing theories that are empirically adequate, whether or not they are also true. There are projects constrained by MN; there are other projects that are not so constrained. These projects or activities all fall under the meaning of the term 'science'; but there is no single activity of which all are examples. (In the same way, chess, basketball and poker are all games; but there is no single game of which they are all versions.)
Perhaps the best we can do, with respect to characterizing science, is to say that the term 'science' applies to any activity that (1) is a systematic and disciplined enterprise aimed at finding out truth about our world,[1] and (2) has significant empirical involvement. This is of course vague (How systematic? How disciplined? How much empirical involvement?) and perhaps unduly permissive. (Does astrology count as science, even if only bad science?) Still, we do have many excellent examples of science, and excellent examples of non-science.

1.2 Religion

If it is difficult to give an account of the nature of science, it is not much easier to say just what a religion is. Of course there are multifarious examples: Christianity, Islam, Judaism, Hinduism, Buddhism and many others. What characteristics are necessary and sufficient for something's being a religion? How does one distinguish a religion from a way of life, such as Confucianism? That's not easy to say. Not all religions involve belief in something like the almighty and all-knowing, morally perfect God of the theistic religions, or even in any supernatural beings at all. (Of course a substantial majority of them do.)

With respect to our present inquiry, what is of special importance is the notion of a religious belief: what does a belief have to be like to be religious? Once more, that's not easy to say. To cite the furor over intelligent design again, some say the proposition that there is an intelligent designer of the living world is religion, not science. But not just any belief involving an intelligent designer, indeed, not just any belief involving God, is automatically religious. According to the New Testament book of James, "the devils believe [that God exists] and tremble"; the devils' beliefs, presumably, aren't religious.[2] Someone might propose theories about an omnipotent, omniscient and wholly good being as a key part of a metaphysical system: belief in such theories need not be religious.
And what about a system of beliefs that answers the same great human questions answered by the clear examples of religion: questions about the fundamental nature of the universe and what is most real and basic in it, about the place of human beings in that universe, about whether there is such a thing as sin or an analogue, and if there is, what there is to be done about it, where we must look to improve the human condition, whether human beings survive their deaths and how a rational person should act? Will any system of beliefs that provides answers to those questions count as a religion? Again, not easy to say; probably not. The truth here, perhaps, is that a belief isn't religious just in itself. The property of being religious isn't intrinsic to a belief; it is rather one a belief acquires when it functions in a certain way in the life of a given person or community. To be a religious belief, the belief in question would have to be appropriately connected with characteristically religious attitudes on the part of the believer, such attitudes as worship, love, commitment, awe, and the like. Consider someone who believes that there is such a person as God, all right, because the existence of God helps with several metaphysical problems (for example, the nature of causation, the nature of propositions, properties and sets, and the nature of proper function in creatures that are not human artifacts). However, this person has no inclination to worship or love God, no commitment to try to further God's projects in our world; perhaps, like the devils, he hates God and intentionally does whatever he can to frustrate God's purposes in the world. For such a person, belief that there is such a person as God need not be a religious belief. In this way it's possible that a pair of people share a given belief which functions as a religious belief in the life of only one of them. 
It is therefore extremely difficult to give (informative) necessary and sufficient conditions for either science or religion. Perhaps for present purposes that is not a really serious problem; we do have many excellent examples of each, and perhaps that will suffice for our inquiry.

2. Epistemology and Science and Religion

There are many interesting epistemological questions about science. A central topic has been the underdetermination of theory by evidence: evidence for a theory seldom entails the theory, in which case there will be several empirically equivalent theories—theories with the same consequences with respect to experience. Can empirically equivalent theories differ in epistemic status or value? If so, what makes the difference? Here it is common to appeal to the so-called theoretical virtues, such as simplicity, fecundity, beauty and the like. What shall we think of the "pessimistic induction", according to which nearly all past scientific theories have been later rejected; should that reduce our confidence in present scientific theories? How much, if any, of current scientific lore constitutes knowledge? And how far does the scientific method reach? Are there subjects science isn't competent to deal with? Is science more competent to deal with some subjects than others? Scientific modes of procedure seem to have been most successful in the hard sciences; the human sciences seem to lag. Are there differences in epistemic well-foundedness between different sciences, or perhaps between the hard sciences and the softer sciences?

Questions of this sort, while of great intrinsic interest, aren't directly relevant to our present inquiry. What is most important to see is that the epistemology of science is really the epistemology of the main human cognitive faculties: memory, perception, rational intuition (logic and mathematics), testimony, perhaps Reid's sympathy, induction, and the like.
What is characteristic of science is that these faculties are employed in a particularly disciplined and systematic way, and that there is particular emphasis upon perceptual experience.

With respect to religious belief, there are also several sorts of epistemological questions. Are there good arguments for the existence of God? If there aren't, does it matter? Is the existence of evil, in all the horrifying forms it displays, evidence against theistic belief? Does it constitute a defeater for theistic belief? What about the question of pluralism: religion comes in so many kinds—Christianity, Islam, Judaism, Hinduism, Buddhism (with sub-versions of each kind), but also a host of less widely practiced varieties. According to Jean Bodin, "each is refuted by all" (Bodin 1975, 256); does this variety constitute a defeater for each particular variety of religious belief? Some religious doctrines—Trinity, Incarnation, Atonement—are not easy to understand; does that mean they cannot be known or even rationally believed? If religious belief is based on faith rather than on reason, does that mean that it is at best seriously insecure, so that talk of a 'leap of faith', or 'blind faith', is appropriate? These questions have been most fully investigated with respect to Christian belief; hence what follows will concentrate on some questions about the epistemology of Christian belief.

For present purposes, perhaps the main epistemological question is this: what is the source of the rationality, or warrant, or positive epistemic status, if any, enjoyed by religious belief? Is it of the same sort as that enjoyed by belief in the teachings of current science? Is the evidence, if any, for religious belief of the same sort as that for scientific beliefs? Or is there some special source of positive epistemic status for religious belief? This is really a contemporary version of a question that goes back a long way: the question about the relation between faith and reason.
It is connected with the question whether there are cogent arguments (rational arguments, arguments drawn from the deliverances of reason) for theistic belief, and whether the existence of cogent argument is required for rational acceptance of religious belief. Here there are fundamentally two views.

According to 'evidentialism', the source of positive epistemic status for religious belief, if indeed it has such status, is just reason—the ensemble of rational faculties including, preeminently, perception, memory, rational intuition, testimony, and the like. The source of positive epistemic status for religious belief, therefore, is the same as that for scientific belief. This view goes back at least to John Locke (1689) and has prominent contemporary representatives. On this view, the existence of cogent arguments for a religious belief is required for rational acceptance of that belief, or at any rate is intimately related to rational acceptance. Some who endorse this view believe there aren't any such cogent arguments; accordingly they reject religious belief as unfounded and rationally unacceptable (Mackie 1982); others hold that in fact there are excellent arguments for theism and even for specifically Christian belief. Here the most prominent contemporary spokesperson would be Richard Swinburne, whose work over the last 30 years or so has resulted in the most powerful, complete and sophisticated development of natural theology the world has so far seen (see, e.g., Swinburne (1979, 2004), (1981, 2005)).

The other main view, one adopted by, for example, both Thomas Aquinas (Summa Theologiae) and John Calvin (1559), is that belief in God in the first place, and in the distinctive teachings of Christianity in the second, can be rationally accepted even if there are no cogent arguments for them from the deliverances of reason; they have a source of warrant or positive epistemic status independent of the deliverances of reason.
This view also has prominent contemporary representation (Alston 1991; Plantinga and Wolterstorff 1984; Plantinga 2000). To use Calvin's terminology, there is the Sensus Divinitatis, which is a source of belief in God, and the Internal Testimony of the Holy Spirit, which is the source of belief in the distinctive doctrines of Christianity. Beliefs produced by these sources go beyond reason in the sense that the source of their warrant is not the deliverances of reason; of course it does not follow that such beliefs are irrational, or contrary to reason; nor does it follow that there is something especially dicey, insecure, or chancy about them, as if faith were necessarily blind or a leap in the dark. Indeed, John Calvin defines faith as "a firm and certain knowledge of God's benevolence towards us, … ." (Calvin 1559, p. 551 (emphasis added)). On this view, religion and faith have a source of properly rational belief independent of reason and science; it would therefore be possible for religion and faith to correct, as well as be corrected by, science and reason.

There is some reason to think that if theism is indeed true, if indeed there is an all-powerful, all-knowing, perfectly good person who has created the world and created human beings in his image, then religious belief would be independent of arguments from reason; it would not require such argument for rationality or positive epistemic status. For if theism is true, God would presumably want human beings to know of his presence (and in fact the vast majority of the human population believe in God or something very much like him); he would therefore arrange for human beings to be able to come to knowledge of him. But if knowledge of God depended on the theistic arguments, or other arguments from the deliverances of reason, then, as Aquinas says, only a few human beings would ever come to a knowledge of this truth, and they only after a long time, and with a substantial admixture of error.

3. Conflict and Concord

3.1 Concord

Let's begin with concord. The early pioneers and heroes of modern Western science—Copernicus, Galileo, Kepler, Newton, Boyle, and so on—were all serious Christians, if occasionally, as with Newton, Christologically unorthodox. Furthermore, many (Foster 1934, 1935, 1936; Ratzsch 2009) have pointed out that theistic belief and empirical science display a deep concord, fit together neatly. This is in part a result of the doctrines of creation embraced by the theistic religions—in particular two aspects of those doctrines.

First, there is the thought that God has created the world, and has of course therefore also created human beings. Furthermore, he has created human beings in his own image. Now God, according to theistic belief, is a person: a being who has knowledge, affection (likes and dislikes), and executive will, and who can act on his beliefs in order to achieve his ends. One of the chief features of the divine image in human beings, then, is the ability to form beliefs and to acquire knowledge. As Thomas Aquinas puts it, "Since human beings are said to be in the image of God in virtue of their having a nature that includes an intellect, such a nature is most in the image of God in virtue of being most able to imitate God" (ST Ia q. 93 a. 4). God has therefore created both us and the world, and arranged for the former to know the latter. Thinking of science at the most basic level as the project of acquiring knowledge of ourselves and our world, it is clear, from this perspective, that the doctrine of imago dei underwrites this project. Indeed, the pursuit of science is a clear example of the development and enhancement of the image of God in human beings, both individually and collectively.

Second, there is the thought that divine creation is contingent. According to theism, many of God's properties—his omniscience and omnipotence, his goodness and love—are essential to him: he has them in every possible world in which he exists.
(And since, according to most theistic thought, he is a necessary being, one that exists in every possible world, he has those properties in every possible world.) Not so, however, with his property of creating. He isn't obliged, by his nature or anything else, to create the world; it is rather a free action on his part. Furthermore, given that he does create, he isn't obliged to do so in any particular way, or to create any particular kinds of things; that he has created the kinds of things we actually find is again contingent, a free action on his part.

It is this doctrine of the contingency of divine creation that underwrites the empirical character of modern Western science (Ratzsch 2009). For the realm of the necessary is (for the most part) the realm of a priori knowledge; here we have mathematics and logic and much philosophy.[3] What is contingent, on the other hand, is the domain or realm of a posteriori knowledge,[4] the sort of knowledge produced by perception, memory, and the empirical methods of science. This relationship between the contingency of creation and the importance of the empirical was recognized very early. Thus Roger Cotes, in the preface he wrote to Newton's Principia Mathematica:

Without all doubt this world, so diversified with that variety of forms and motions we find in it, could arise from nothing but the perfect free will of God directing and presiding over it. From this fountain it is that those laws, which we call the laws of Nature, have flowed, in which there appear many traces of the most wise contrivance, but not the least shadow of necessity. These therefore we must not seek from uncertain conjectures, but learn them from observations and experiments (Cotes 1953, 132–33) [emphasis added].

What we've just seen is that in a certain way theistic belief supports modern science by licensing or endorsing the whole project of empirical investigation; it is also sometimes claimed that science supports theistic belief.
Here there are several arguments, arguments that have historically fallen into two basic types: biological and cosmological. An example of the first type is the argument proposed by Michael Behe (Behe, 1996), according to which some structures at the molecular level exhibit “irreducible complexity.” These systems display several finely matched interacting parts all of which must be present and working properly in order for the system to do what it does; the removal of any part would preclude the thing's functioning. Among the phenomena Behe cites are the bacterial flagellum, the cilia employed by several kinds of cells for locomotion and other functions, blood clotting, the immune system, the transport of materials within cells, and the incredibly complex cascade of biochemical reactions and events that occur in vision. Such irreducibly complex structures and phenomena, he argues, can't have come to be by gradual, step-by-step Darwinian evolution (unguided by the hand of God or any other person); at any rate the probability that they should do so is vanishingly small. They therefore present what he calls a Lilliputian challenge to unguided Darwinism; if he is right, they present it with a Gargantuan challenge as well. Not only do they challenge Darwinism; they are also, he says, obviously designed: their design is about as obvious as an elephant in a living room: “to a person who does not feel obliged to restrict his search to unintelligent causes, the straightforward conclusion is that many biochemical systems were designed” (Behe, p. 193). Others, for example Paul Draper (2002) and Kenneth R. Miller (1999, 130–64), argue that Behe has not proved his case. A second type of argument for theism starts from the apparent fine-tuning of several of the physical parameters. 
Starting in the late sixties and early seventies, astrophysicists and others noted that several of the basic physical constants must fall within very narrow limits if there is to be the development of intelligent life—at any rate in a way anything like the way in which we think it actually happened. Thus B. J. Carr and M. J. Rees: The basic features of galaxies, stars, planets and the everyday world are essentially determined by a few microphysical constants and by the effects of gravitation… . several aspects of our Universe—some of which seem to be prerequisites for the evolution of any form of life—depend rather delicately on apparent ‘coincidences’ among the physical constants (Carr and Rees, 1979, 605). For example, if the force of gravity were even slightly stronger, all stars would be blue giants; if even slightly weaker, all would be red dwarfs; in neither case could life have developed (Carter 1979, 72). The same goes for the weak and strong nuclear forces; if either had been even slightly different, life, at any rate life of the sort we have, could probably not have developed. Apparently life is possible only because the universe is expanding at just the rate required to avoid recollapse. At an earlier time, the fine-tuning had to be even more remarkable: … we know that there has to have been a very close balance between the competing effect of explosive expansion and gravitational contraction which, at the very earliest epoch about which we can even pretend to speak (called the Planck time, 10⁻⁴³ sec. after the big bang), would have corresponded to the incredible degree of accuracy represented by a deviation in their ratio from unity by only one part in 10⁶⁰ (Polkinghorne 1989, 22).
Other examples: the value of the cosmological constant, the vacuum expectation value of the Higgs field, and the ratio of the mass of the proton to that of the electron must all be fine-tuned to an incredible degree for the universe to be life-permitting (Barr 2003, 123–130). A particularly informed and technically detailed account of some of these fine-tunings is to be found in Robin Collins's “Evidence for Fine-Tuning” (Collins 2003). Many see these apparent enormous coincidences as substantiating the theistic claim that the universe has been created by a personal God who intends that there be life and indeed intelligent life; they take fine-tuning as offering the material for a properly restrained theistic argument. These arguments come in several versions; perhaps the most successful versions argue that the epistemic probability of these fine-tuning phenomena on theism is much greater than their epistemic probability on the atheistic chance hypothesis. Here the conclusion is not (as such) that probably theism is true, but rather that theism is much better supported by these phenomena than the chance hypothesis is (Swinburne 2003; Collins 1999). Objections come in many varieties. Some who offer these arguments, in particular those associated with the so-called ‘Intelligent Design’ movement, take them to be contributions to science rather than philosophy or theology; the most common objection is that they don't meet the conditions for being science, in particular because their conclusion, that the universe has been designed by an intelligent being, isn't falsifiable. Others (as we saw above) reply that falsifiability is ordinarily not a property of individual propositions, but of entire theories, and that theories involving intelligent design can perfectly well be falsifiable.
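The structure of this likelihood version of the argument can be put schematically; the following is an illustrative sketch in standard probabilistic notation, and the labels F, T, and C are introduced here for exposition rather than drawn from the authors cited:

```latex
% F = the fine-tuning data (the constants fall in the life-permitting range),
% T = theism,
% C = the atheistic single-universe chance hypothesis.
% The likelihood version of the argument claims only the comparative judgment
\[
  P(F \mid T) \;\gg\; P(F \mid C),
\]
% from which it is inferred that F supports (confirms) T over C.
% This does not by itself fix the posterior probability P(T \mid F),
% which depends also on the prior probabilities assigned to T and C.
```

Put this way, the argument's conclusion is explicitly comparative, which is why its proponents say it shows theism to be better supported by the phenomena than the chance hypothesis, not that theism is probably true.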
A more interesting objection to fine-tuning arguments is the “many universe” suggestion: perhaps there are very many, even infinitely many different universes or worlds; the cosmological constants take on different values in different worlds, so that very many (perhaps all possible) different sets of such values get exemplified in one world or another. Couldn't there be an eternal cycle of ‘big bangs’, with subsequent expansion to a certain limit and then subsequent contraction to a ‘big crunch’ at which the cosmological values are arbitrarily reset? (Dennett 1995, 179) Alternatively, couldn't it have been that at the Big Bang, there was enormous initial inflation, resulting in many cosmoi with many different settings for the physical constants? In either case it isn't at all surprising that in one or another of the resulting universes, the values of the cosmological constants are such as to be life-permitting. Nor is it at all surprising that the universe in which we find ourselves has life-permitting values; we couldn't exist elsewhere. If so, then the fine-tuning argument is ineffective: the probability of fine-tuning on the many worlds suggestion together with atheism is at least as large as the probability of fine-tuning on theism. There are responses (for example, that on this account there would have to be a universe generator which was itself fine-tuned (Collins 1999), or that even if it is likely that some universe be fine-tuned, nevertheless the likelihood that this universe be fine-tuned is unaffected by the pluriverse suggestion (White 2003)) and responses to the responses, and so on; not surprisingly, there is no consensus as to whether these fine-tuning arguments are successful.

3.2 Conflict?

The Christian doctrine of creation supports a deep concord between Christian belief and science; yet it is of course compatible with this sort of concord that there also be conflict.
Many have claimed that there is conflict, indeed warfare, between religion and science (Draper 1875; White 1895). This is certainly too strong; but obviously the relation between the two has not always been smooth and irenic. There is the famous Galileo incident, often portrayed as a contest between the Catholic hierarchy, representing the forces of repression and tradition, the voice of the old world, the dead hand of the past, and, on the other hand, the forces of progress and the dulcet voice of reason and science. This way of looking at the matter is simplistic (Brooke 1991, 8–9); much more was involved. The dominant Aristotelian thought of the day was heavily a prioristic; hence part of what was involved was a dispute about the relative importance of observation and a priori thought in astronomy. Also involved were questions about what the Christian (and Jewish) Bible teaches in this area: does a passage like Joshua 10:12–15 (in which Joshua commanded the sun to stand still) favor the Ptolemaic system over the Copernican? And of course the usual questions of power and authority were also present.[5] More recently, a central locus of alleged conflict has been the theory of evolution. This particular flap is of course still very much with us. Many Christian fundamentalists accept a literal interpretation of the creation account in the first two chapters of Genesis; they therefore find incompatibility between the contemporary Darwinian evolutionary accounts of our origins and the Christian faith, at least as they understand it. Many Darwinian fundamentalists (as the late Stephen Jay Gould called them) second that motion: they too claim there is conflict between Darwinian evolution and classical Christian or theistic belief. Contemporaries who champion this conflict view would include, for example, Richard Dawkins (1986, 2003), and Daniel Dennett (1995).
An important part of the alleged conflict turns on the Christian belief that human beings and other creatures have been designed—designed by God; according to evolution, however (so say Dawkins and Dennett), human beings have not been designed, but are a product of the unguided blind process of natural selection operating on some such source of genetic variation as random genetic mutation. Others point out that this proposed conflict is far from obvious. The central feature of the modern doctrine of evolution is that the main driving force of the process is natural selection, winnowing some form of genetic variation, the most popular version being random genetic mutation. It is no part of the theory to say that these mutations occur just by chance in a sense of that term that implies that they are uncaused; they are random only in the sense that they do not arise from the design plan of the creatures to which they accrue, and do not occur because they enhance the organism's reproductive fitness. Thus Ernst Mayr, the dean of post-World War II biology: “When it is said that mutation or variation is random, the statement simply means that there is no correlation between the production of new genotypes and the adaptational needs of an organism in the given environment” (Mayr 1998, 98). If so, evolution, as currently stated and currently understood, is perfectly compatible with God's orchestrating and overseeing the whole process; indeed, it is perfectly compatible with that theory that God causes the random genetic mutations that are winnowed by natural selection. Those who claim that evolution shows that humankind and other living things have not been designed, so say their opponents, confuse a naturalistic gloss on the scientific theory with the theory itself.
The claim that evolution demonstrates that human beings and other living creatures have not, contrary to appearances, been designed, is not part of or a consequence of the scientific theory, but a metaphysical or theological add-on (van Inwagen 2003).[6] A second area of alleged conflict has to do with divine action in the world. According to classical theistic religion, God has created the world; he also upholds and conserves it, preserves it in being. Apart from his conserving activity, the world would disappear like a candle flame in a high wind. So there is creation and conservation; but, so say the classical theistic religions, there is also special divine action, action going beyond creation and conservation. There are the miracles reported in both the Jewish and Christian Bibles: the parting of the Red Sea, for example, as well as Jesus's walking on water, feeding the 5,000, and rising from the dead. Miracles are also reported in the Koran. Many believers don't think of these special divine actions as restricted to Bible times: God still, at present, responds to prayers and accomplishes miraculous healings. Further, according to Christian ways of thought, God works in the hearts and minds of his children in such a way as to produce faith; Thomas Aquinas called this divine activity ‘the internal instigation of the Holy Spirit’ and John Calvin called it ‘the internal witness (or testimony) of the Holy Spirit.’ All of these would be examples of special divine action. Now many see here conflict with modern science. Among them are a large number of theologians; thus according to Langdon Gilkey, … contemporary theology does not expect, nor does it speak of, wondrous divine events on the surface of natural and historical life. 
The causal nexus in space and time which the Enlightenment science and philosophy introduced into the Western mind … is also assumed by modern theologians and scholars; since they participate in the modern world of science both intellectually and existentially, they can scarcely do anything else. Now this assumption of a causal order among phenomenal events, and therefore of the authority of the scientific interpretation of observable events, makes a great difference to the validity one assigns to biblical narratives and so to the way one understands their meaning. Suddenly a vast panoply of divine deeds and events recorded in scripture are no longer regarded as having actually happened… Whatever the Hebrews believed, we believe that the biblical people lived in the same causal continuum of space and time in which we live, and so one in which no divine wonders transpired and no divine voices were heard. (Gilkey 1983, 31) Of course many philosophers and scientists would agree. The problem is alleged to be with God's special action in the world; there is no particular problem with creation and conservation, but divine action going beyond that is widely thought to be incompatible with modern science. Where exactly is this incompatibility thought to arise? The thought seems to be that special divine activity would be incompatible with the laws of nature as disclosed by science; the distinguished biologist H. Allen Orr, among others, endorses this thought. Now Gilkey and the others are apparently thinking in terms of a Newtonian world-picture, according to which the universe is like a great machine proceeding according to the laws disclosed in science. This isn't sufficient for the hands-off, anti-interventionist theology of these theologians. After all, Newton himself, one hopes, accepted the Newtonian world-picture, and Newton proposed that God periodically adjusted the planetary orbits, which according to his calculations would otherwise gradually go awry.
What Gilkey and his friends add, here, apparently, is determinism: the thought that the laws of nature together with the state of the universe at any time, entail the state of the universe at any other time. Here the classical source is Pierre Laplace: We ought then to regard the present state of the universe as the effect of its previous state and as the cause of the one which is to follow. Given for one instant a mind which could comprehend all the forces by which nature is animated and the respective situation of the beings that compose it—a mind sufficiently vast to subject these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. (Laplace 1796) It is the Laplacian world-picture that apparently animates Gilkey, et al. It is worth noting, however, that determinism and the Laplacian world-picture don't follow from classical science. That is because the great conservation laws deduced from Newton's Laws are stated for closed or isolated systems. Thus Sears and Zemansky (1963): The principle of conservation of energy states that the internal energy of an isolated system remains constant. This is the most general statement of the principle of conservation of energy. (p. 415) Newton's laws (as well as Maxwell's later physics of electricity and magnetism) apply to isolated or closed systems; they describe how the world works provided that the world is a closed (isolated) system, subject to no outside causal influence. But it is no part of Newtonian mechanics or classical science generally to declare that the material universe is indeed a closed system. (How could a thing like that be experimentally verified?) 
Hence there is nothing in classical science (at least in this area) incompatible with God's changing the velocity or direction of a particle, or a whole system of particles (or, for that matter, creating ex nihilo a full-grown horse). Energy, momentum and the like are conserved in a closed system; but the claim that the material universe is in fact a closed system is not part of classical physics; it is another metaphysical or theological add-on. So here there is no conflict between classical physics and special divine action in the world. This classical, Laplacian picture has of course been superseded by the development of quantum mechanics, beginning in the first couple of decades of the 20th century. According to quantum mechanics, associated with any physical system, a system of particles, for example, there is a wave function whose evolution through time is governed by the Schrödinger equation for that system. Now the interesting thing about quantum mechanics is that, unlike classical mechanics, it doesn't specify or predict a single configuration for this system of particles at a future time t. The wave function assigns a value at t to each of the configurations possibly resulting from the initial conditions; by applying Born's Rule to those values we get an assignment of probabilities to each of those possible configurations at t. Accordingly, we aren't told which configuration will in fact result (given the initial conditions) when the system is measured at t; instead we are given a distribution of probabilities for the many possible outcomes. Clearly miracles (parting the waters, rising from the dead, etc.) are not incompatible with these assignments. (No doubt such events would be assigned very low probabilities; but of course we don't need quantum mechanics to know that such events are improbable.) Further, on collapse interpretations such as those of Ghirardi, Rimini, and Weber, there is plenty of room for divine activity. 
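The probabilistic structure just described can be summarized in standard notation; the following is a textbook sketch of ordinary quantum mechanics, not a reconstruction of any particular author cited here:

```latex
% Between measurements, the state |psi(t)> of a system with Hamiltonian H
% evolves deterministically according to the Schrodinger equation:
\[
  i\hbar \,\frac{\partial}{\partial t}\,\lvert \psi(t) \rangle
  \;=\; \hat{H}\,\lvert \psi(t) \rangle .
\]
% But the theory does not predict a unique configuration at t; by Born's
% Rule, the probability of finding the system in configuration q when it
% is measured at time t is
\[
  P(q, t) \;=\; \bigl\lvert \langle q \mid \psi(t) \rangle \bigr\rvert^{2},
\]
% so the output is a probability distribution over possible outcomes,
% not a single determined outcome.
```

On collapse interpretations, the transition from this distribution to a single actual outcome is precisely what the Schrödinger dynamics does not fix.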
Indeed, God could actually be the cause of the collapses, and of the way in which they occur (i.e., where P is the possibility that gets actualized at t, it could be that God causes P to be actualized then). (This could perhaps be seen as a halfway house between occasionalism and secondary causation.) With the advent of quantum mechanics, therefore, there seems to be even less reason to see special divine action in the world as somehow incompatible with science. Nevertheless, many who are entirely aware of the quantum mechanical revolution still find a problem with special divine action. For example, there is the “Divine Action Project” (Wildman 2004, 31–75), a 15-year series of conferences and publications that began in 1988. So far these conferences have resulted in some six books of essays involving at least 50 authors from various fields of science together with philosophers and theologians, including many of the most prominent writers in the field. Most of these authors find a problem with special divine action. That is because they believe that a satisfactory account of God's action in the world would have to be noninterventionist, as Wildman says. Thus Arthur Peacocke, commenting on a certain proposal for divine action: God would have to be conceived of as actually manipulating micro-events (at the atomic, molecular, and according to some, quantum levels) in these initiating fluctuations on the natural world in order to produce the results at the macroscopic level which God wills. But such a conception of God's action … would then be no different in principle from that of God intervening in the order of nature with all the problems that that evokes for a rationally coherent belief in God as the creator of that order. (Peacocke 2004) Apparently, then, the project is to develop a conception of special divine action (action beyond creation and conservation) that doesn't involve intervention.
But what would intervention be in the quantum mechanical picture? That's not easy to say. Indeed, it's not easy to see how intervention could be distinct from divine action beyond creation and conservation. If they aren't distinct, however, special divine action would just be intervention, in which case the project of developing a conception of special divine action that doesn't involve intervention is unhopeful. Still a third area of alleged conflict between religious belief and science has to do with the different epistemic attitudes associated with each. Thus, for example, John Worrall: in science, the dominant epistemic attitude (so the claim goes) is one of critical empirical investigation, issuing in theories which are held tentatively and provisionally; one is always prepared to give up a theory in favor of a more satisfactory successor. In religious (e.g., Christian) belief, the epistemic attitude of faith plays an important role, an attitude which differs both in the source of the belief in question, and in the readiness to give it up. Others (Ratzsch 2004), however, point out that there isn't obviously a conflict here. Clearly those two attitudes are indeed different, and perhaps they can't be taken simultaneously with respect to the same proposition. Does that show a conflict between science and religious belief? Perhaps some ways of forming belief are appropriate in one area and others in other areas. To get a conflict, we must add that the scientific epistemic attitude is the only one appropriate to any area of cognitive endeavor. That claim, however, is not itself part of the scientific attitude; it is an epistemological declaration for which substantial argument is required (but not so far in evidence). Furthermore, scientists themselves don't seem to take the scientific epistemic attitude (as characterized above) to all of what they believe, or even all of what they believe as scientists.
Thus it is common for scientists to believe that there has been a past, and indeed they sometimes tell us how long ago the earth, or our galaxy, or even the entire universe, was formed. Scientists seldom hold this belief—that there has been a past—as a result of empirical investigation; nor do they ordinarily hold it in that tentative, critical way, always looking for a better alternative. In these areas, therefore, it is hard to find conflict between theistic religious belief and contemporary science.

4. Where There Is Conflict

Other areas of science, however, do appear to produce conflict. First, there is the relatively new but rapidly growing discipline of evolutionary psychology. The heart and soul of this project is the effort to explain distinctive human traits—our art, humor, play, love, poetry, sense of adventure, love of stories, our music, our morality, and our religion—in terms of our evolutionary origin and history. And here we do find theories incompatible with religious belief. One important topic in this area has been altruistic behavior—behavior that promotes the reproductive fitness of someone else at the expense of the altruist's own reproductive fitness. How is it that there are people like missionaries and Mother Teresa, people who devote their entire lives to the service of others, paying little attention to their own reproductive prospects? Herbert Simon attempts to explain altruism from an evolutionary perspective in terms of two mechanisms, docility and bounded rationality: Because of bounded rationality, the docile individual will often be unable to distinguish socially prescribed behavior that contributes to fitness from altruistic behavior [i.e., socially prescribed behavior that does not contribute to fitness]. In fact, docility will reduce the inclination to evaluate independently the contributions of behavior to fitness. … .
By virtue of bounded rationality, the docile person cannot acquire the personally advantageous learning that provides the increment, d, of fitness without acquiring also the altruistic behaviors that cost the decrement (Simon 1990, 3, 4). Simon's theory is carefully worked out, well developed, and of considerable interest; it is also incompatible with theistic religious belief. According to his theory, the explanation of the altruist's behavior is failure to see that the behavior in question compromises evolutionary fitness. Hence, according to Simon's theory, the answer to the question ‘Why did Mother Teresa behave in such a way as to compromise her evolutionary fitness?’ is ‘Due to bounded rationality, she was unable to see that her mode of behavior would compromise her fitness.’ From a Christian perspective, that's not at all the right answer, which would rather be something like ‘She wanted to follow the example of Jesus and do what she could do to help the poor and sick.’ Another example from this area is provided by the many theories of religion and religious belief. According to some of these theories, religious belief is false but adaptive; according to others it is false and maladaptive. An example of the first group would be the theory proposed by David Sloan Wilson, who says that religion is a group adaptation: “Many features of religion, such as the nature of supernatural agents and their relationships with humans can be explained as adaptations designed to enable human groups to function as adaptive units” (Wilson 2002, p. 51). Religious belief, he says, is fictitious, but adaptive at the group level: it promotes cooperation, mutual respect, and solidarity, thus enabling the group to do well in competition with other groups. That religious belief can function as a group adaptation is of course consistent with theistic belief; what about the bit about religious belief's—theistic belief, for example—being fictitious? 
How could the claim that there is no such person as God be part of empirical science? And even if it could be, Wilson's theory, one thinks, would be on more solid ground if that easily detachable theological add-on were detached. What is not so easily detachable is the claim that religious belief (unlike memory, perceptual beliefs, rational intuition) is produced by cognitive faculties or processes that are not aimed at the production of true belief. According to Wilson, these processes or faculties have a function conferred on them by evolution; but it is not that of producing true beliefs. It is rather the function of producing beliefs that promote cooperation and solidarity; ultimately their function is to produce beliefs that are adaptive, i.e., promote reproductive fitness. Here a comparison with Sigmund Freud's views of theistic belief may be illuminating. Freud claims that theistic belief is illusion. This doesn't mean that theistic belief is false (although Freud thinks it is false); what it means is that theistic belief is produced by a cognitive process (wishful thinking) that is not ‘reality oriented’; its purpose is not the production of true belief, but (in this case) a belief that enables the believer to avoid the depression and apathy that would set in if she saw clearly the miserably appalling condition in which we human beings actually find ourselves. Wilson's view is like Freud's, then, in that he too proposes that theistic belief is produced by cognitive faculties that are not reality oriented. Whereas Freud takes a dim view of theistic belief, Wilson is much more appreciative: In the first place, much religious belief is not detached from reality …. Rather, it is intimately connected to reality by motivating behaviors that are adaptive in the real world—an awesome achievement when we appreciate the complexity that is required to become connected in this practical sense. 
… Adaptation is the gold standard against which rationality must be judged, along with all other forms of thought. Evolutionary biologists should be especially quick to grasp this point because they appreciate that the well-adapted mind is ultimately an organ of survival and reproduction (Wilson 2002, p. 228). Although Wilson has kind words for religion, his claim that religious belief is not aimed at the truth is incompatible with theistic religious belief. According to Christianity, for example, faith, including belief in the essentials of the Christian faith, is a divine gift; and the process producing it in the believer (the internal instigation of the Holy Spirit, according to Thomas Aquinas, the internal witness or testimony of the Holy Spirit, according to John Calvin) is indeed aimed at the truth and has as its function the production of true belief. So here there is conflict between science and religion. What accounts for this conflict? Several things, no doubt; but part of the explanation is to be found in methodological naturalism, a widely accepted constraint on science. According to methodological naturalism (MN), in doing science one must proceed “as if God is not given”, to use the words of Hugo Grotius. Exactly what does that mean? There are various suggestions; here is one. According to MN, (1) the data set (data model) for a proper scientific theory can't refer to God or other supernatural agents (angels, demons), or employ what one knows or thinks one knows by way of (divine) revelation. Thus the data for a theory wouldn't include, for example, the proposition that there has recently been an outbreak of demon possession in Washington, D. C. (2) A proper scientific theory can't refer to God or any other supernatural agents, or employ what one knows or thinks one knows by way of revelation. 
So if the data model contained the proposition that there has been an outbreak of weird and irrational behavior in Washington, one couldn't properly propose a theory involving demon possession to explain it. (3) Note first that the probability or plausibility of theory candidates and their capacity to explain the data, as well as their empirical implications, is always relative to an array of background information or an epistemic base. The third constraint, then, is that the epistemic base of a proper scientific theory can't include propositions obviously entailing[7] the existence of God or other supernatural agents, or propositions one knows or thinks one knows by way of revelation. So consider someone who in fact accepts the main lines of one of the theistic religions, and works in the area of evolutionary psychology. No doubt she will honor MN as a constraint on her scientific activity. If so, for scientific purposes she will eliminate from her evidence base propositions obviously entailing the existence of God or other supernatural beings, as well as what she knows or thinks she knows by way of faith or revelation. But then she might very well come up with theories of the kind we've been pointing to, theories incompatible with theistic religion. A rather different area with the same dialectic: historical biblical criticism (HBC). HBC is to be contrasted with traditional biblical commentary. The practitioner of the latter assumes that the bible is the word of God, and tries to lay bare the meaning of what is taught in various parts of the bible. The practitioner of HBC, on the other hand, specifically brackets the belief that the bible is divine revelation, and intends instead to study it scientifically. Thus the late Raymond Brown, a highly respected Catholic scripture scholar, believes that HBC is “scientific biblical criticism” (Brown 1973, p. 6); it yields “factual results” (p. 9); he intends his own contributions to be “scientifically respectable” (p. 
11); and practitioners of HBC investigate the scriptures with “scientific exactitude” (pp. 18–19); see also Meier 1991, p. 6. To study the bible scientifically, therefore, is to study it in a way constrained by MN. (See also Sanders 1985, p. 5; Levenson 1993, p. 109; and Lindars 1986, p. 91). Naturally enough, there has been considerable tension between HBC, so construed, and traditional Christians, going back at least as far as David Strauss in 1835: “Nay, if we would be candid with ourselves, that which was once sacred history for the Christian believer is, for the enlightened portion of our contemporaries, only fable.” As for contemporary tensions, according to Luke Timothy Johnson:

The Historical Jesus researchers insist that the ‘real Jesus’ must be found in the facts of his life before his death. The resurrection is, when considered at all, seen in terms of visionary experience, or as a continuation of an ‘empowerment’ that began before Jesus's death. Whether made explicit or not, the operative premise is that there is no ‘real Jesus’ after his death (Johnson 1997, p. 144).

And according to Van Harvey, “So far as the biblical historian is concerned, … there is scarcely a popularly held traditional belief about Jesus that is not regarded with considerable skepticism” (Harvey 1986, p. 193). An absolutely central characteristic of HBC is this effort to be scientific. Of course we might ask whether HBC, or any historical study, is really science; its advocates say that it is, but are they right? In view of the difficulty of the demarcation problem, however, it is probably unwise to transform this question into an objection. (Further, even if historical studies of this kind are not precisely science, they are certainly very much like science.)
And insofar as HBC requires conformity to MN, one who practices it brackets or suspends or sets aside any theological views, or what is known by revelation.[8] Just as with evolutionary psychology, therefore, one who works at HBC might in fact accept theistic religion of one sort or another, but in his work as a practitioner of HBC, come to conclusions incompatible with his religious belief. So far, therefore, there is the same dialectic here as with evolutionary psychology: theories incompatible with theistic religion arising (at least in part) out of MN. In at least these two areas, therefore, there is conflict between scientific theories and religious belief. In a certain very important respect, however, this conflict is superficial. That is because the theories and claims of evolutionary psychology and HBC need not constitute defeaters, even partial defeaters,[9] for those elements of religious belief with which they are incompatible—even though theism is committed to taking science with great seriousness and even if it is conceded that the theories in question constitute good science. And that is precisely because MN is taken as constraining scientific activity. We can see this as follows. As already suggested, scientific investigation or inquiry is always conducted against the background of an evidence base, a body of background knowledge or belief. An important part of MN, furthermore, is that this evidence base must not contain propositions obviously entailing the existence of supernatural beings, or propositions that are accepted by way of faith. It follows that the evidence base of an adherent of a theistic religion will contain the scientific evidence base as a proper part; it will include all the propositions to be found in the scientific evidence base, plus more—perhaps those specific to Christian belief. 
Now suppose a given theory—Simon's theory on altruism, or Wilson's on religion, or some minimalist account of Jesus's life and activity—is in fact proper science, and is indeed the most plausible, scientifically most satisfactory theoretical response to the evidence, given EBS, the scientific evidence base. This means that from the point of view of EBS together with current evidence, that theory is the scientifically best or most plausible result. Still, that doesn't automatically give a believer a defeater for those of her beliefs with which the theory is incompatible. That is because EBS is only part of her evidence base. And it can easily happen that a proposition P is the plausible response, given a part of my evidence base (together with the current evidence), that P is incompatible with one of my beliefs, and that P fails to provide me with a defeater for that belief. For example, suppose I tell you that I saw you at the mall yesterday afternoon. Then with respect to part of your total evidence base—a part that includes your knowledge that I told you I saw you there, together with your knowledge that I have decent vision and am ordinarily reliable, and the like—the right thing to think is that you were at the mall. Nevertheless, we may suppose, you know perfectly well that you weren't there; you remember that you were home all afternoon thinking about methodological naturalism. Here the right thing to think from the perspective of a proper part of your evidence base is that you were at the mall; but this does not give you a defeater for your belief that you were not there. Another example: we can imagine a renegade group of whimsical physicists proposing to reconstruct physics, refusing to use memory beliefs, or if that is too fantastic, memories of anything more than 1 minute ago. Perhaps something could be done along these lines, but it would be a poor, paltry, truncated, trifling thing.
And now suppose that the best theory, from this limited evidence base, is inconsistent with general relativity. Should that give pause to the more traditional physicists who employ what they know by way of memory as well as what the renegade physicists use? I should think not. This truncated physics could hardly call into question physics of the fuller variety, and the fact that from a proper part of the scientific evidence base, something inconsistent with general relativity is the best theory—that fact would hardly give more traditional physicists a defeater for general relativity. Similarly for the case under question. The traditional Christian thinks she knows by faith that Jesus was divine and that he rose from the dead. But then she need not be moved by the fact that these propositions are not especially probable on the evidence base to which HBC limits itself—i.e., one constrained by MN and therefore one that deletes any knowledge or belief dependent upon faith. The findings of HBC, if findings they are, need not give her a defeater for those of her beliefs with which they are incompatible. The point is not that HBC, evolutionary psychology and other scientific theorizing couldn't in principle produce defeaters for Christian belief;[10] the point is only that its coming up with theories incompatible with Christian belief doesn't automatically produce such a defeater. Everything depends on the particular evidence adduced in the case in question, and the bearing of that evidence given the believer's total evidence base. In the case in question, for example, it may be that given EBS and the relevant data base, it is unlikely that Jesus arose from the dead. 
But given an evidence base including not only EBS but also belief in God together with the specifically Christian beliefs that Jesus is the second person of the trinity incarnate, and that the New Testament is a reliable source of information on these matters—given these things, the proposition that he rose from the dead may not be at all improbable. Similar considerations would hold, of course, for the other theistic religions and proposed scientific defeaters. Someone might complain that this looks like a recipe for intellectual irresponsibility, for hanging on to beliefs in the teeth of the evidence. Can't a believer always say something like this, no matter what proposed defeater presents itself? “Perhaps B (the proposed defeatee) is improbable or unlikely with respect to part of what I believe,” she says, “but it is certainly not improbable with respect to the totality of what I believe, that totality including, of course, B itself.” Obviously that can't be right; if it were, every putative defeater could be turned aside in this way and defeat would be impossible. But defeat is not impossible; it sometimes happens that one does acquire a defeater for a belief B, by learning that B is improbable with respect to some proper subset of one's evidence base. According to the book of Isaiah (41:9), God says “I took you from the ends of the earth, from its farthest corners I called you. I said, ‘You are my servant’; I have chosen you and have not rejected you.” Someone might believe R, the proposition that the earth is a rectangular solid with ends and corners, on the basis of this text; she will have a defeater for this belief when confronted with the scientific evidence—photographs of the earth from space, for example—against it. At any rate she will have a defeater for R if the rest of her noetic structure is at all like ours. 
The same goes for someone who holds pre-Copernican beliefs on the basis of such a text as “The earth stands fast; it shall not be moved” (Psalm 104:5). Why is there a defeater in some cases, but not in others? What makes the difference? Here is a suggestion. Consider some religious belief B incompatible with a deliverance of some current scientific theory: B might be, for example, the belief that Mother Teresa was perfectly rational in behaving in that altruistic fashion. Let the scientific theory in question be Herbert Simon's account of altruism, and let EBS be the believer's evidence base. Our question is whether A, the belief that Simon's theory is proper science (and that it entails the denial of B), is a defeater for B. Add A to S's evidence base; and now the right question, perhaps, is this: is B epistemically improbable or unlikely with respect to the conjunction of A with EBS? Of course B itself might initially be a member of EBS, in which case it will certainly not be improbable with respect to it. If that were sufficient for A's not being a defeater of B, however, no member of the evidence base could ever be defeated by a new discovery; and that can't be right. So let's delete B from EBS. Call the result of deleting B from S's evidence base ‘EBS reduced with respect to B’ — ‘EBSB’ for short.[11] And now the suggestion — call it ‘the reduction test for defeat’ — is that A is a defeater for B just if B is appropriately improbable with respect to the conjunction of A with EBSB. Suppose we apply this test to the belief B that Mother Teresa was rational in behaving altruistically, with A being the belief that Simon's theory of altruism is good science and is incompatible with B; and let's suppose that S is a Christian believer. To apply the reduction test, we must ask whether B is improbable with respect to the conjunction of A with EBSB. The answer, I should think, is that B is not improbable with respect to that conjunction.
For EBSB includes the empirical evidence, whatever exactly it is, appealed to by Simon, but also the proposition that we human beings have been created by God and created in his image, along with the rest of the main lines of the Christian story. With respect to the conjunction of A with that body of propositions, it is not likely that if Mother Teresa had been more rational, smarter, she would have acted so as to increase her reproductive fitness rather than live altruistically. Hence, on the proposed reduction test, the fact that Simon's theory is good science and is more likely than not with respect to the scientific evidence base—that fact does not give S a defeater for what she thinks about Mother Teresa. Consider, on the other hand, the belief B* that the earth has corners and edges and the photographic evidence against that belief: here, plausibly, the reduction test gives the result that the latter is a defeater for B*. (True: a Christian might think that the Bible is infallible, since God is its ultimate author; but of course that leaves open the question what God intends to teach in the passage in question.) So the reduction test gives sensible results in these two cases. It can't be right in general, however—more exactly, it is right in general only on a certain very important assumption the believer is likely to reject. For it might be, clearly enough, that B has a lot of warrant on its own, warrant it doesn't get from the other members of EBS or indeed any other propositions. B may be basic with respect to warrant; B might get warrant from a source different from any involved in the scientific theory with which it is incompatible. If so, the fact that B is unlikely with respect to EBSB doesn't show that S has a defeater for B in the fact that B is unlikely with respect to EBSB together with the relevant A. By way of illustrative example: you are on trial for some crime; the evidence against you is strong, and you are convicted.
Nevertheless, you remember very clearly that at the time the crime occurred, you were on a solitary walk in the woods. Your belief that you were walking in the woods isn't based on argument or inference from other propositions. (You don't note, e.g., that you feel a little tired and that your walking shoes are muddy, and that there is a map of the area in your parka pocket, concluding that the best explanation of these phenomena is that you were walking there.) So consider EByouP, your evidence base diminished with respect to P, the proposition that you didn't commit the crime and were walking in the woods when it was committed. With respect to EByouP, P is epistemically improbable; after all, you have the same evidence as the jury for ¬P, and the jury is quite properly (if mistakenly) convinced that you did the crime. Still, you certainly don't have a defeater, here, for your belief that you are innocent. The reason, of course, is that P has for you a source of warrant independent of the rest of your beliefs: you remember it. In a case like this, whether you have a defeater for the belief P in question will depend, on the one hand, upon the strength of the intrinsic warrant enjoyed by P, and, on the other, upon the strength of the evidence against P from EByouP. Very often the intrinsic warrant will be the stronger. The same will go for religious beliefs, if they do in fact have intrinsic warrant. If S holds a religious belief B and if B has warrant in the basic way, then even if the probability of B on EBSB together with the relevant A is low, it won't follow that A is a defeater of B for S. Perhaps the reduction test offers a necessary condition of A's being a defeater for B for S; it is also sufficient only if religious beliefs don't have warrant or positive epistemic status in the basic way, and only if they don't acquire warrant or positive epistemic status from a source other than those that confer that status on scientific beliefs.
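The weighing of intrinsic warrant against contrary evidence can be made vivid with a toy Bayesian calculation. All the numbers below are invented for illustration, and treating "strength of warrant" as a prior probability is an assumption of this sketch, not a claim of the entry itself:

```python
# Toy illustration (invented numbers): a strongly warranted memory belief
# can survive substantial contrary evidence.
prior = 0.999           # intrinsic warrant of "I was walking in the woods"
likelihood_ratio = 100  # courtroom evidence is 100x likelier given guilt

prior_odds = prior / (1 - prior)                # roughly 999:1 for innocence
posterior_odds = prior_odds / likelihood_ratio  # evidence shifts the odds
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))  # 0.909: the belief retains most of its force
```

On these made-up figures, evidence that would be decisive against a weakly held belief leaves the strongly warranted memory belief still very probable, which is the qualitative point of the trial example.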
This is part of the importance of the question noted above in section 2.

5. Naturalism and Science

So far we've examined alleged conflict between theistic religious belief and science with respect to several areas: evolution, divine action in the world, the difference between the scientific attitude and the religious attitude, evolutionary psychology, and HBC. But some have suggested a science/religion (or science/quasi-religion) conflict of a wholly different sort: one between naturalism and science (Otte 2002; Plantinga 1993, 2002a; Rea 2002; Taylor 1963); there are also hints to this effect in Nietzsche (2003) and in Darwin himself (1887). Now naturalism comes in several different colors and flavors. First, there is the view that nature is all there is; there are no supernatural beings. Of course this is a bit slim as an explanation of naturalism; we need to know what nature is, and what allegedly supernatural beings might be like. Perhaps a way to proceed would be to say that naturalism, so conceived, is the view that there is no such person as the God of theism, or anything like God (see, e.g., Beilby 2002). Call this ‘naturalism1’. Another variety of naturalism, ‘scientific naturalism’, we might call it, would be the claim that there are no entities in addition to those endorsed by contemporary science (Kornblith 1994).[12] Given that current science endorses no supernatural beings, scientific naturalism implies naturalism1. There is also what we might call ‘epistemological naturalism’, according to which, roughly speaking, the methods of science are the only proper epistemic methods (Krikorian 1944). With the help of a couple of fairly obvious premises, epistemological naturalism also implies naturalism1, and I'll use ‘naturalism’ to refer to the disjunction of the three versions of naturalism sketched.
Advocates of naturalism thus conceived would be (for example) Bertrand Russell (1957), Daniel Dennett (1995), Richard Dawkins (1986), David Armstrong (1978), and the many others that are sometimes said to endorse “The Scientific World-View.” Naturalism is presumably not a religion. In one very important respect, however, it resembles religion: it can be said to perform the cognitive function of a religion. There is that range of deep human questions to which a religion typically provides an answer (above, Section I): what is the fundamental nature of the universe: for example, is it mind first, or matter (non-mind) first? What is most real and basic in it, and what kinds of entities does it display? What is the place of human beings in the universe, and what is their relation to the rest of the world? Are there prospects for life after death? Is there such a thing as sin, or some analogue of sin? If so, what are the prospects of combating or overcoming it? Where must we look to improve the human condition? Is there such a thing as a summum bonum, a highest good for human beings, and if so what is it? Like a typical religion, naturalism gives a set of answers to these and similar questions. We may therefore say that naturalism performs the cognitive function of a religion, and hence can sensibly be thought of as a quasi-religion. Next, note that many thinkers, going back at least to Nietzsche (Nietzsche 2003) and possibly William Whewell (Curtis 1986), have pointed to a potentially worrisome implication of evolutionary theory. The worry can be put as follows. According to orthodox Darwinism, the process of evolution is driven mainly by two mechanisms: random genetic mutation and natural selection. The former is the chief source of genetic variability; by virtue of the latter, a mutation resulting in a heritable, fitness-enhancing trait is likely to spread through the population and be preserved as part of the genome.
It is fitness-enhancing behavior and traits that get rewarded by natural selection; what get penalized are maladaptive traits and behaviors. In crafting our cognitive faculties, natural selection will favor cognitive faculties and processes that result in adaptive behavior; it cares not a whit about true belief (as such) or about cognitive faculties that reliably give rise to true belief. As evolutionary psychologist David Sloan Wilson puts it, “the well-adapted mind is ultimately an organ of survival and reproduction” (Wilson 2002, 228). What our minds are for (if anything) is not the production of true beliefs, but the production of adaptive behavior: that our species has survived and evolved at most guarantees that our behavior is adaptive; it does not guarantee or even make it likely that our belief-producing processes are for the most part reliable, or that our beliefs are for the most part true. That is because our behavior could perfectly well be adaptive, but our beliefs false as often as true. Darwin himself apparently worried about this question: “With me,” says Darwin,

the horrid doubt always arises whether the convictions of man's mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey's mind, if there are any convictions in such a mind? (Darwin 1887, Vol. 1, pp. 315–316)

We can briefly state Darwin's doubt as follows. Let R be the proposition that our cognitive faculties are reliable, N the proposition that naturalism is true and E the proposition that we and our cognitive faculties have come to be by way of the processes to which contemporary evolutionary theory points us: what is the conditional probability of R on N&E? I.e., what is P(R | N&E)? Darwin fears it may be rather low. Of course it is only unguided natural selection that prompts the worry. If natural selection were guided and orchestrated by the God of theism, for example, the worry would disappear; God would presumably use the whole process to create creatures of the sort he wanted, creatures in his own image, creatures with reliable cognitive faculties.
So it is unguided evolution, and metaphysical beliefs that entail unguided evolution, that prompt this worry about the reliability of our cognitive faculties. Now naturalism entails that evolution, if it occurs, is indeed unguided. But then, so the suggestion goes, it is unlikely that our cognitive faculties are reliable, given the conjunction of naturalism with the proposition that we and our cognitive faculties have come to be by way of natural selection winnowing random genetic variation. If so, one who believes that conjunction will have a defeater for the proposition that our faculties are reliable—but if that's true, she will also have a defeater for any belief produced by her cognitive faculties—including, of course, the conjunction of naturalism with evolution. That conjunction is thus seen to be self-refuting. If so, however, this conjunction cannot rationally be accepted, in which case there is conflict between naturalism and evolution, and hence between naturalism and science. We can state the argument schematically as follows:

1. P(R | N&E) is low.
2. Anyone who accepts N&E and sees that (1) is true has a defeater for R.
3. Anyone who has a defeater for R has a defeater for any other belief she holds, including N&E itself.
4. Anyone who accepts N&E and sees that (1) is true has a defeater for N&E; hence N&E can't be rationally accepted.

Of course this is brief and merely a schematic version of the argument; there is no space here for the requisite qualifications. Support for (1) could go as follows. First, in order to avoid influence from our natural assumption that our cognitive faculties are reliable, think not about us, but about hypothetical creatures a lot like us, perhaps existing in some other part of the universe; and suppose N and E are true with respect to them. Next, note that naturalism apparently implies materialism (about human beings); current science does not endorse the existence of immaterial souls or minds or selves.
So take naturalism to include materialism. What would a belief be, from this point of view? Presumably something like a long-term event or structure in the nervous system—perhaps a structured group of neurons connected and related in certain ways. Such a neural structure will have neurophysiological properties (‘NP properties’): properties specifying the number of neurons involved, the way in which those neurons are connected with each other and with other structures (with muscles, glands, sense organs, other neuronal events, etc.), the average rate and intensity of neuronal firing in various parts of this event, and the ways in which these rates of fire change over time and in response to input from other areas. If this event is really a belief, however, then it will also have content; it will be the belief that p, for some proposition p—perhaps the proposition ‘naturalism is all the rage these days’. What is the relation between NP properties, on the one hand, and content properties—such properties as having the proposition ‘naturalism is all the rage these days’ as content—on the other? Perhaps the most popular position here is “nonreductive materialism” (NRM): content properties are distinct from but supervene on (see the entry on supervenience) NP properties.[13] Supervenience can be either broadly logical or nomic. In the latter case, there would be psychophysical laws relating NP properties to content properties: laws of the sort any structure with such and such NP properties will have such and such content. These laws presumably will be contingent (in the broadly logical or metaphysical sense). In the former case, there will also be such laws, but they will be necessary rather than contingent. Now take any belief B you like on the part of a member of that hypothetical population: what is the (epistemic) probability that B is true, given N&E and nonreductive materialism—what is P(B | N&E&NRM)?
What we know is that B has a certain content (call it ‘C’), and (we may assume or concede) having B is adaptive in the circumstances in which that creature finds itself. What, then, is the probability that C, the content of B, is true? Well, what is the probability that the relevant psychophysical law L connecting NP properties and content properties yields a true proposition as content in this instance? Having B is adaptive, in the circumstances in which the creature finds itself; its displaying the NP properties on which C supervenes causes adaptive behavior. But why think the content connected with those NP properties by L will be true in this creature's circumstances? What counts for adaptivity are the NP properties and the behavior they cause; it doesn't matter whether the supervening content is true. The NP properties are indeed adaptive; but that provides no reason, so far, for thinking the supervening content is true. Having B is adaptive by virtue of its causing adaptive behavior, not by virtue of having true content. Of course if theism is true, then human beings (as opposed to those hypothetical creatures, for whom naturalism is true) are made in the divine image, which includes the capacity for knowledge; so God would presumably have chosen the psychophysical laws in such a way that in the relevant circumstances, the neurophysiology yields true content. But nothing like that is true given naturalism; to suppose that the content properties that are adaptive also, for the most part, lead to true content would be wholly unjustified optimism. So what is P(B | N&E&NRM)? Well, since the truth of B doesn't make a difference to the adaptivity of B, B could indeed be true, but is equally likely to be false; we'd have to estimate the probability that it is true as about the same as the probability that it is false.
But that means that it is improbable that the believer in question has reliable cognitive faculties, i.e., faculties that produce a sufficient preponderance of true over false beliefs. For example, if the believer in question has 1000 independent beliefs, each as likely to be false as true, the probability that, say, 3/4 of them are true (and this would be a modest requirement for reliability) will be very low—less than 10^−58. So P(R | N&E&NRM) specified to these creatures will be low. But of course the same would hold for us, if naturalism is true: P(R | N&E&NRM) specified to us is equally low.[14] That's the argument for the first premise. According to the second premise, one who sees this and also accepts N&E has a defeater for R, a reason to give it up, to cease believing it. The support offered for this premise is by way of analogy from clear cases. Suppose I believe there is a drug—call it XX—that destroys cognitive reliability; I believe 95% of those who ingest XX become cognitively unreliable. Suppose further that I now believe both that I've ingested XX and that P(R | I've ingested XX) is low; taken together, these two beliefs give me a defeater for my initial belief or assumption that my cognitive faculties are reliable. Furthermore, I can't appeal to any of my other beliefs to show or argue that my cognitive faculties are still reliable; any such other belief is also now suspect or compromised, just as R is. Any such other belief B is a product of my cognitive faculties: but then in recognizing this and having a defeater for R, I also have a defeater for B.
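The tail-probability figure cited above (1000 independent beliefs, each as likely false as true, at least 3/4 true) can be checked by direct computation. The sketch below is an illustration of that arithmetic, not part of the entry; it sums the exact binomial tail using integer arithmetic:

```python
from math import comb

# Exact binomial tail: the probability that at least 750 of 1000 independent
# beliefs are true, when each is as likely to be false as true (p = 1/2).
favorable = sum(comb(1000, k) for k in range(750, 1001))
probability = favorable / 2**1000  # big-int division, correctly rounded

print(probability < 1e-58)  # True: the tail really is below 10^-58
```

The exact value comes out at roughly 7 × 10^−59, confirming that the "less than 10^−58" figure is, if anything, conservative.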
Of course there will be many other examples: I'll get the same result if I believe that I am a brain in a vat and that P(R | I'm a brain in a vat) is low; the same goes for the classic Cartesian version of the same idea (namely that I've been created by a being who delights in deception) and for other more homely scenarios, for example, the belief that I've gone insane (perhaps by way of contracting mad cow disease). In all of these cases I get a defeater for R. Now according to the third premise, one who has a defeater for R has a defeater for any belief she takes to be a product of her cognitive faculties—which is, of course, all of her beliefs. She therefore has a defeater for N&E itself; so one who accepts N&E (and sees that P(R | N&E) is low) has a defeater for N&E, a reason to doubt or reject or be agnostic with respect to it. Nor could she get independent evidence for R; the process of doing so would of course presuppose that her faculties are reliable. She'd be relying on the accuracy of her faculties in believing that the alleged evidence is in fact present and that it is in fact evidence for R. Thomas Reid (1785, 276) put it like this:

If a man's honesty were called into question, it would be ridiculous to refer to the man's own word, whether he be honest or not. The same absurdity there is in attempting to prove, by any kind of reasoning, probable or demonstrative, that our reason is not fallacious, since the very point in question is, whether reasoning may be trusted.

The argument concludes that the conjunction of naturalism with the theory of evolution cannot rationally be accepted—at any rate by someone who is apprised of this argument and sees the connection between N&E and R. As one might expect, this argument has been controversial; a number of objections have been raised against it.
(Beilby 1997; Ginet 1995, 403; O'Connor 1994, 527; Ross 1997; Fitelson and Sober 1998; Robbins 1994; Fales 1996; Lehrer 1996; Nathan 1997; Levin 1997; Fodor 1998) There have been responses to the objections (Plantinga 2002a; 2003), responses to those responses (Talbott, forthcoming), and so on; there is nothing like consensus regarding the argument. If the argument is correct, however, and N&E can't rationally be accepted, then there is a conflict between naturalism and evolution; one can't rationally accept them both. Hence there is conflict between naturalism and one of the chief pillars of contemporary science. Insofar as naturalism is a quasi-religion by virtue of performing the cognitive function of a religion, there is a sort of religion/science conflict—not between theistic religion and science, but between naturalism and science.

Bibliography

• Alston, W. P., 1991, Perceiving God. Ithaca, NY: Cornell University Press.
• Aquinas, T., Summa Theologiae. (Translated by Fathers of the English Dominican Province). Westminster, MD: Christian Classics, 1981. (originally published 1267–1273).
• Armstrong, D., 1978, Universals and Scientific Realism. Cambridge: Cambridge University Press.
• Barr, S. M., 2003, Modern Physics and Ancient Faith. Notre Dame: Notre Dame University Press.
• Behe, M., 1996, Darwin's Black Box. New York: The Free Press.
• Beilby, J., 1997, “Is Evolutionary Naturalism Self Defeating?”, International Journal for the Philosophy of Religion 42:2, 69–78.
• –––, (ed.) 2002, Naturalism Defeated?: Essays on Plantinga's Evolutionary Argument against Naturalism. Ithaca, NY: Cornell University Press.
• Bodin, J., 1975, Colloquium Heptaplomeres de rerum sublimium arcanis abditis, written by 1593 but first published in 1857. English translation by Marion Kuntz. Princeton: Princeton University Press. The quotation is from the Kuntz translation.
• Brooke, J. H., 1991, Science and Religion: Some Historical Perspectives. Cambridge: Cambridge University Press.
• Brown, R., 1973, The Virginal Conception and Bodily Resurrection of Jesus. New York: Paulist Press.
• Calvin, J., 1559, Institutes of the Christian Religion. John T. McNeill (ed.) and Ford Lewis Battles (tr.) Philadelphia: Westminster Press, 1960. (Page reference is to the 1960 edition.)
• Carr, B. J., and M. J. Rees, 1979, “The Anthropic Principle and the Structure of the Physical World” Nature 278, (April 12) 605–612.
• Carter, B., 1979, “Large Number Coincidences and the Anthropic Principle in Cosmology,” in Confrontation of Cosmological Theories with Observational Data. M. S. Longair (ed.), Dordrecht: D. Reidel Pub. Co., pp. 291–298.
• Collins, R., 1999, “A Scientific Argument for the Existence of God: The Fine-Tuning Design Argument” in Reason for the Hope Within. Michael Murray (ed.), Grand Rapids, MI: Eerdmans.
• –––, 2003, “Evidence for fine-tuning,” in God and Design. Neil Manson (ed.) New York: Routledge.
• Cotes, R., 1953, Newton's Philosophy of Nature; Selections from his writings. New York: Hafner Library of Classics.
• Curtis, R., 1986, “Are Methodologies Theories of Scientific Rationality?,” The British Journal for the Philosophy of Science. 37:1, 135–161.
• Darwin, C., 1887, The Life and Letters of Charles Darwin Including an Autobiographical Chapter, Francis Darwin (ed.) London: John Murray, Albemarle Street, Vol. 1, pp. 315–316.
• Dawkins, R., 1986, The Blind Watchmaker; why the Evidence of Evolution Reveals a Universe without Design. New York: Norton.
• –––, 2003, A Devil's Chaplain. Boston, MA: Houghton Mifflin.
• Dennett, D., 1995, Darwin's Dangerous Idea. New York: Simon and Schuster.
• Draper, J. W., 1875, History of the Conflict between Religion and Science. New York: D. Appleton and Co., 5th ed.
• Draper, P., 2002, “Irreducible Complexity and Darwinian Gradualism: a Reply to Michael J. Behe” Faith and Philosophy, 19:1, 3–21.
• Fales, E., 1996, “Plantinga's Case Against Naturalistic Epistemology,” Philosophy of Science 63:3, 432–451.
• Fitelson, B., and E. Sober, 1998, “Plantinga's Probability Arguments Against Evolutionary Naturalism”, Pacific Philosophical Quarterly 79:2, 115–129. • Fodor, J., 1998, “Is Science Biologically Possible?” (The 1998 Benjamin Lecture at the University of Missouri), in Beilby, 2002. • Foster, M.B., 1934, “The Christian Doctrine of Creation and the rise of Modern Natural Science”, Mind (October), XLIII (172) 446–468. • –––, 1935, “Christian Theology and Modern Science of Nature (I)” Mind (October), XLIV (176) 439–466. • –––, 1936, “Christian Theology and Modern Science of Nature (II)” Mind (October), XLV (177) 1–27. • Gilkey, L., 1983, “Cosmology, Ontology and the Travail of Biblical Language,” in God's Activity in the World: the Contemporary Problem, Owen C. Thomas (ed.), Chico, CA: Scholar's Press. • Ginet, C., 1995, “Comments on Plantinga's Two-Volume Work on Warrant,” Philosophy and Phenomenological Research, 55(2): 403–408. • Harvey, V., 1986, “New Testament Scholarship and Christian Belief”, in Jesus in History and Myth, R. Joseph Hoffman and Gerald A. Larue (eds.) Buffalo: Prometheus Books. • Johnson, L. T., 1997, The Real Jesus. New York: Harper Collins. • Kornblith, H., 1994, “Naturalism: both Metaphysical and Epistemological,” Midwest Studies in Philosophy, Vol. XIX, Peter French, Theodore Uehling, Jr. and Howard K. Wettstein (eds.) Notre Dame: University of Notre Dame Press. • Krikorian, Y., 1944, Naturalism and the Human Spirit. New York: Columbia University Press. • Laplace, P. S., 1796, Exposition du système du monde; translated from the French by J. Pond. London: printed for R. Phillips, 1809. • Laudan, L., 1988, “The Demise of the Demarcation Problem,” in But is it Science?, Michael Ruse (ed.), Buffalo, NY: Prometheus Books.
• Lehrer, K., 1996, “Proper Function versus Systematic Coherence” in Warrant in Contemporary Epistemology: Essays in Honor of Plantinga's Theory of Knowledge, Jonathan Kvanvig (ed.), Lanham, MD: Rowman & Littlefield • Levenson, J., 1993, “The Hebrew Bible, the Old Testament, and Historical Criticism” in The Hebrew Bible, the Old Testament, and Historical Criticism. Louisville: Westminster/John Knox Press. (An earlier version of this essay was published under the same title in Hebrew Bible or Old Testament? Studying the Bible in Judaism and Christianity, John Collins and Roger Brooks (eds.) Notre Dame: University of Notre Dame Press, 1990.) • Levin, M., 1997, “Plantinga on Functions and the Theory of Evolution,” Australasian Journal of Philosophy. 75:1, 83–98. • Lindars, B., 1986, “Jesus risen: bodily resurrection but no empty tomb,” Theology 89:90–96. • Locke, J., 1689, An Essay Concerning Human Understanding, ed. with “Prolegomena” by Alexander Fraser. New York: Dover Publications, Inc., 1959. • Mackie, J. L., 1982, The Miracle of Theism. Oxford: Clarendon Press. • Maudlin, T., 2003, “Distilling Metaphysics from Quantum Mechanics,” The Oxford Handbook of Metaphysics, Michael Loux and Dean Zimmerman (eds.) Oxford: Oxford University Press. • Mayr, E., 1998, Toward a New Philosophy of Biology; Observations of an Evolutionist. Cambridge: Harvard University Press. • Meier, J., 1991, A Marginal Jew: Rethinking the Historical Jesus. New York: Doubleday, volume one of three. • Miller, K., 1999, Finding Darwin's God. New York: Harper-Collins. • Monod, J., 1971, Chance and Necessity. New York: Alfred A. Knopf. • Murphy, N., 2001, “Phillip Johnson on Trial.” in Intelligent Design Creationism and its Critics, Robert Pennock (ed.), Cambridge, MA: MIT Press. • Nathan, N. M. L., 1997, “Naturalism and Self-Defeat: Plantinga's Version”, Religious Studies 33:2, 135–142. 
• Nietzsche, F., 2003, Nietzsche: Writings from the Notebooks, (Cambridge Texts in the History of Philosophy) Rudiger Bittner (ed.), Kate Sturge (tr.) Cambridge: Cambridge University Press, Notebook 36, June-July 1885. • O'Connor, T., 1994, “An Evolutionary Argument Against Naturalism?” The Canadian Journal of Philosophy, 24(4), 527–540. • Orr, H. A., 2004, Letter to The New York Review of Books, May 13, 51:8, 47. • Otte, R., 2002, “Conditional Probabilities in Plantinga's Argument,” in Naturalism Defeated? Essays on Plantinga's Evolutionary Argument Against Naturalism, James Beilby (ed). Ithaca: Cornell University Press. • Peacocke, A., 2004, “Problems in Contemporary Christian Theology,” Theology and Science, 2(1), 2–3. • Plantinga, A., 1974, The Nature of Necessity. Oxford: Clarendon Press. • –––, 1993, Warrant and Proper Function New York: Oxford University Press, Chap. 10. • –––, 2000, Warranted Christian Belief. Oxford: Oxford University Press. • –––, 2002a, “Introduction: The Evolutionary Argument Against Naturalism,” in Naturalism Defeated? Essays on Plantinga's Evolutionary Argument Against Naturalism., James Beilby (ed.), Ithaca, NY: Cornell University Press. • –––, 2002b, “Reply to Beilby's Cohorts,” Naturalism Defeated? Essays on Plantinga's Evolutionary Argument Against Naturalism, James Beilby (ed.) Ithaca, NY: Cornell University Press. • –––, 2003, “Probability and Defeaters,” Pacific Philosophical Quarterly 84:3, 291–298. • Plantinga, A., and Wolterstorff, N., 1983, Faith and Rationality. Notre Dame, IN: University of Notre Dame Press. • Polkinghorne, J., 1989, Science and Creation: the Search for Understanding. Boston: New Science Library; New York: Random House. • Ratzsch, D., 2004, “The Demise of Religion: Greatly Exaggerated Reports from the Science/Religion ‘Wars’” in Contemporary Debates in Philosophy of Religion. Michael Peterson and Raymond VanArragon (eds.) Oxford: Blackwell. 
• Ratzsch, D., 2009, “Humanness in their Hearts: where science and religion fuse” in The Believing Primate: Scientific, Philosophical and Theological Reflections on the Origin of Religion. Jeffrey Schloss and Michael Murray (eds.) Oxford: Oxford University Press, pp. 215–245. • Rea, M., 2002, World Without Design: the Ontological Consequences of Naturalism. Oxford: Clarendon Press, ch. 8. • Reid, T., 1785, Essays on the Intellectual Powers of Man, Derek Brookes (ed.) University Park: Pennsylvania State University Press, 2002. (Page reference is to the 2002 edition.) • Robbins, W., 1994, “Is Naturalism Irrational?” Faith and Philosophy 11:2, 255–259. • Ross, G., 1997, “Undefeated Naturalism,” Philosophical Studies 87, 2 (August): 159–184. • Ruse, M., 1982, Darwinism Defended. Reading, MA: Addison-Wesley. • Russell, B., 1957, “A Free Man's Worship” in Mysticism and Logic. Garden City, New York: Doubleday Anchor Books. • Sanders, E. P., 1985, Jesus and Judaism. Philadelphia: Fortress Press. • Sears, F. W., and Zemansky, M. W., 1963, University Physics, 3rd edition. Reading, MA: Addison-Wesley Pub. Co., Inc. • Simon, H., 1990, “A Mechanism for Social Selection and Successful Altruism,” Science, 250(4988): 1665–1668. • Swinburne, R., 1979, 2004, The Existence of God. Oxford: Clarendon Press. • –––, 1981, 2005, Faith and Reason. Oxford: Clarendon Press. • –––, 2003, “The argument to God from fine-tuning reassessed” in God and Design: The Teleological Argument and Modern Science, Neil Manson (ed.), London: Routledge. • Talbott, W., forthcoming, “More on the Illusion of Defeat” in The Nature of Nature, Bruce Gordon and William Dembski (eds.). • Taylor, R., 1963, Metaphysics. Englewood Cliffs, NJ: Prentice Hall. • van Inwagen, P., 2003, “The compatibility of Darwinism and design” in God and Design: The Teleological Argument and Modern Science, Neil Manson (ed.), London: Routledge. • van Fraassen, B., 1980, The Scientific Image. Oxford: Clarendon Press.
• –––, 1993, “Three-sided scholarship: comments on the paper of John R. Donahue, S.J.,” in Hermes and Athena, Eleonore Stump and Thomas Flint (eds.), Notre Dame: University of Notre Dame Press. • –––, 2002, The Empirical Stance. New Haven: Yale University Press, ch. 2. • von Weizsäcker, C.F., 1964, The Relevance of Science. New York: Harper and Row. • White, A. D., 1895, History of the Warfare of Science with Theology. • White, R., 2003, “Fine-tuning and multiple universes,” in God and Design, Neil Manson (ed.), London: Routledge. • Wildman, W., 1988–2003, “The Divine Action Project, 1988–2003,” Theology and Science 2/1 (2004): 31–75. • Wilson, D. S., 2002, Darwin's Cathedral: Evolution, Religion and the Nature of Society. Chicago, IL: University of Chicago Press. • Worrall, J., 2004, “Science Discredits Religion,” in Peterson and VanArragon, 2004. For wise counsel and good advice, I am grateful to Brian Boeninger, Thad Botham, EJ Coffman, Robin Collins, Tom Crisp, Chris Green, Jeff Green, Marcin Iwanicki, Nathan King, Dan McKaughan, Dolores Morris, Brian Pitts, Luke Potter and Del Ratzsch. Copyright © 2010 by Alvin Plantinga
History of Chemistry
Wed, 06/05/2013 - 13:12 -- sewm02
This page provides information about the historical development of models of the atom. Chemistry 1105 students can provide substantive contributions to this page in order to earn Outcome 16. Alternatively, Chem 1105 students can earn this outcome by mastering the multiple choice exam questions associated with Outcome 16. For each historical person listed below, please add the year or years during which important work was accomplished, a clear statement of their research result (or theoretical contribution) and its significance, and a concise description of the experiment that led to the result. In some cases, an important mathematical equation or formula may also be included. Each piece of information should be clearly referenced. All reference citations, including web citations, should be complete (as you learned to do with Outcome 42). If the reference is to a web site, a link should be added to the wiki page that takes the reader directly to the web site.
Democritus
He is a co-originator of the belief that all matter is made up of various imperishable, indivisible elements which he called "atomos" ("atoma" plural), or "indivisible units". This is where we get the English term "atom". His theory held that atoms have only a few intrinsic properties: size, shape, and weight. Other properties, such as color and taste, are the result of interactions between the atoms in our bodies and the atoms of the matter that we are dealing with. Reference: "Democritus." Wikipedia-The Free Encyclopedia. 2 Oct 2008 <[[1]]>. He coined the term that eventually became "atom" in 450 BC. Reference: "Atom." Wikipedia-The Free Encyclopedia.
2 Oct 2008 <[[2]]>.
Democritus lived to around the age of 90. Having been born in Abdera around 459 BCE, this was an exceptionally long life for a man of his times. This could be attributed both to his love of happiness and to the rational logic that he applied to all things in life. In addition to his early theory of the atom (which was at the time philosophy), Democritus was very involved in other activities, from plays and psychology to government and engineering. He has many wise proverbs that still ring true today, such as "The hopes of right-thinking men are attainable, but those of the unintelligent are impossible." Reference: "Democritus" -- History of Psychology 16 Dec 2008 <[[3]]>.
Democritus was a strict determinist who believed everything resulted from natural laws. He followed in the footsteps of Leucippus, with whom he had a lot in common, and carried on the scientific rationalist psychology of their city. He was an atomist, meaning he tried to explain the world mechanistically, without appeal to purpose or design; he wanted questions answered with mechanistic explanations. Modern scientists have answered such questions in the same way, and this approach has led to scientific knowledge in physics. Reference: "Democritus" -- Philosophy and science 29 Sep 2012 <http://en.wikipedia.org/wiki/Democritus>.
Democritus was the first person to use the term "atomos." He named the atom after the Greek word atomos, which means 'that which can't be split.' Reference: Kross, Brian. "Questions and Answers - Where Does the Word Atom Come from and Who First Used This Word?" Jefferson Lab, n.d. Web. 09 Dec. 2013. <http://education.jlab.org/qa/history_01.html>.
John Dalton
John Dalton was born on September 6, 1766, and died July 27, 1844. He was an English chemist, meteorologist and physicist. His best-known work is the development of the atomic theory and his research into colour blindness.
Dalton was also one of the earliest workers in volumetric analysis. He enunciated what is now known as Gay-Lussac's law (or Charles's law), published in 1802. In the following years, Dalton published several papers on similar topics; the one on the absorption of gases by water and other liquids (1803) contained his law of partial pressures, now known as Dalton's law.
A study of Dalton's own laboratory notebooks, discovered in the rooms of the Lit & Phil, concluded that, far from Dalton being led by his search for an explanation of the law of multiple proportions to the idea that chemical combination consists in the interaction of atoms of definite and characteristic weight, the idea of atoms arose in his mind as a purely physical concept, forced upon him by study of the physical properties of the atmosphere and other gases.
John Dalton came up with his own atomic theory. It had five main points: (1) Elements are made of tiny particles called atoms. (2) All atoms of a given element are identical. (3) The atoms of a given element are different from those of any other element; the atoms of different elements can be distinguished from one another by their respective relative weights. (4) Atoms of one element can combine with atoms of another element to form chemical compounds; a given compound always has the same relative numbers of types of atoms. (5) Atoms cannot be created, divided into smaller particles, or destroyed in a chemical process; a chemical reaction simply changes the way atoms are grouped together.
Reference: "John Dalton." Wikipedia, the free encyclopedia. 4 Oct 2008. <http://en.wikipedia.org/wiki/John_Dalton>
"John Dalton." Wikipedia, the free encyclopedia. 27 Sep 2011.
http://en.wikipedia.org/wiki/John_Dalton#Gas_laws
Dmitri Mendeleev
Dmitri Mendeleev was born in Siberia in 1834 and died in 1907. He began to study science in St. Petersburg and graduated in 1856. The reason Mendeleev became the main leader in chemistry was probably that he not only showed how the elements could be organized, but also used his periodic table to propose that some measured atomic weights were incorrect and to predict the properties of elements not yet discovered. It turned out that chemists had indeed measured some atomic weights incorrectly. Mendeleev discovered the periodic law and created one of the first periodic tables. He took the 63 elements known at the time and arranged them according to atomic weight. He then wrote the fundamental properties of every element on its own card. He saw that atomic weight was important in some way, and he recognized that the behavior of the elements seemed to repeat as their atomic weights increased; noting this, he arranged them by similarity of properties. When he created his table, he left spaces for elements to be added, and he used the table to predict the existence and properties of eight other elements. His table did not include any of the noble gases we have today because they had not yet been discovered. Henry Moseley later modified and corrected Mendeleev's periodic table. Mendeleev is also known for studying the thermal expansion of liquids and the nature and origin of petroleum. Element 101 is named mendelevium in his honor.
References: "Who was Dmitri Mendeleev?" Kiwi Web, Chemistry and New Zealand. 1998. 7 Oct 2008 <http://www.chemistry.co.nz/mendeleev.htm>. "Dmitri Mendeleev." Famous Scientists. famousscientists.org. 1 Sep. 2014. Web. 9/22/2015. "Dmitri Mendeleev." Famous Scientists. Web. 6 Oct. 2015. <http://www.famousscientists.org/dmitri-mendeleev/>
J. J. Thomson
J. J. Thomson was credited with the discovery of the electron in 1897. Reference: "Electron." Wikipedia-The Free Encyclopedia. 2 Oct 2008 <[[4]]>. Thomson conducted a series of experiments involving cathode rays and cathode ray tubes that led him to his discovery. The three experiments are explained below:
Experiment 1: He constructed a cathode ray tube ending in a pair of cylinders with slits in them. The slits were connected to an electrometer. He found that if the rays were bent so that they could not enter the slits, the electrometer registered only a very small charge. He concluded that the negative charge was inseparable from the rays.
Experiment 2: He constructed a cathode ray tube with an almost perfect vacuum and coated one end with phosphorescent paint. He found that the rays did indeed bend under the influence of an electric field, and in a way that indicated a negative charge.
Experiment 3: He measured the mass-to-charge ratio of cathode rays by measuring how much they were deflected by magnetic fields and how much energy they carried. He concluded that cathode rays were made of particles, which he called "corpuscles." The "corpuscles" came from within the atoms of the electrodes. The "corpuscles" that he discovered are identified with the electron, whose existence had been proposed by G. Johnstone Stoney. Reference: "J.J. Thomson." Wikipedia-The Free Encyclopedia. 2 Oct 2008 <[[5]]>.
J. J. Thomson proposed the plum-pudding model of the atom following his discovery of the electron. According to the plum-pudding model, the negatively charged electrons were dispersed randomly throughout a positively charged material, much like plums embedded in plum pudding. Reference: "Atomic Models - The First Atomic Models." Net Industries. Web. 21 Sept. 2015. <http://science.jrank.org/pages/621/Atomic-Models-first-atomic-models.html> In America, since plum pudding is not a common dish, it might be easier to picture it as a chocolate chip cookie.
The dough would represent the positive charge, and the chocolate chips would represent the electrons, as described by the plum pudding model. Reference: Villanueva, John Carl. "Plum Pudding Model." Universe Today. 27 Aug. 2009. Web. 14 Oct. 2015. <http://www.universetoday.com/38326/plum-pudding-model/>.
Mass to charge ratio
In the 19th century the mass-to-charge ratios of some ions were measured by electrochemical methods. In 1897 the mass-to-charge ratio, m/e, of the electron was first measured by J. J. Thomson. By doing this he showed that the electron (postulated earlier to explain electricity) was in fact a particle with a mass and a charge, and that its mass-to-charge ratio was much smaller than that of the hydrogen ion H+. In 1898 Wilhelm Wien separated ions (canal rays) according to their mass-to-charge ratio with an ion-optical device with superimposed electric and magnetic fields (the Wien filter). In 1901 Walter Kaufmann measured the relativistic mass increase of fast electrons. In 1913, Thomson measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Today, an instrument that measures the mass-to-charge ratio of charged particles is called a mass spectrometer.
Thomson's experiments and big idea: Thomson built a cathode ray tube. It was connected to an electrometer, a device for catching and measuring electrical charge. Thomson wanted to see if, by bending the rays with a magnet, he could separate the charge from the rays. As Thomson saw it, the negative charge and the cathode rays must somehow be stuck together: you cannot separate the charge from the rays. He calculated the ratio of the mass of a particle to its electric charge (m/e). He collected data using a variety of tubes and different gases. He later announced: "we have in the cathode rays matter in a new state, a state in which the subdivision of matter is carried very much further than in the ordinary gaseous state: a state in which all matter...
is of one and the same kind; this matter being the substance from which all the chemical elements are built up." Reference: "Three Experiments and One Big Idea." American Institute of Physics, n.d. Web. 08 Dec. 2013. <http://www.aip.org/history/electron/jj1897.htm>.
Ernest Rutherford
Ernest Rutherford was the second son born into a family of twelve children on August 30, 1871. His father, James Rutherford, was a Scottish wheelwright. His mother, Martha Thompson, was an English schoolteacher. Growing up, Ernest's primary education was received at government institutions. When he turned sixteen, he began his secondary education at Nelson Collegiate School. From there, Ernest received a scholarship and moved on to the University of New Zealand, Wellington, where he attended Canterbury College. Rutherford received his M.A. in 1893 with a double major in Mathematics and Physical Science. His research in New Zealand was focused on the "magnetic properties of iron exposed to high-frequency oscillations" (Nobel Lectures). His thesis, Magnetization of Iron by High-Frequency Discharges, included an original experiment. His subsequent paper, Magnetic Viscosity, contained descriptions of a highly precise device for measuring time down to the hundred-thousandth of a second. This idea, produced in 1896, was well ahead of its time. The post-graduate continued his research at Canterbury College until he received his B.Sc. in 1894. That same year, Rutherford was awarded another scholarship, to attend Trinity College, Cambridge. He then became a research student under J. J. Thomson, a fellow Nobel Prize winner. Rutherford was almost immediately taken under Thomson's wing in the laboratory. He then created a detector capable of locating electromagnetic waves. From there, Rutherford began collaborating with Thomson on experiments. Together, they studied how ions acted in gases that were treated with x-rays.
In 1897, Rutherford received yet another degree, this one a B.A. in research from Trinity College. References: From Nobel Lectures, Chemistry 1901-1921, Elsevier Publishing Company, Amsterdam, 1966. <http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1908/rutherford-bio.html>
Ernest Rutherford published his nuclear model of the atom in 1911, which states that the atom has a central, positive nucleus surrounded by negative electrons that orbit around it. Rutherford's model suggests that most of the atom's mass is contained in the small nucleus, and the rest of the atom is nearly empty space. Rutherford established this model when conducting his gold foil experiment, which disproved the plum pudding model of the atom. Alpha particles were fired through thin gold foil and were detected by screens coated with zinc sulfide. Rutherford found that although most of the particles passed right through the foil, some, roughly 1 in 8000, were deflected. Reference: Rutherford - Atomic Theory. Chemsoc Timeline. 8 Oct 2008. <http://www.rsc.org/chemsoc/timeline//pages/1911.html>
Ernest Rutherford received an 1851 Exhibition Science Scholarship to Trinity College, Cambridge. At this time he studied under J. J. Thomson. In 1898 Ernest Rutherford discovered the existence of alpha and beta rays in uranium radiation. Following this he discovered thoron, an isotope of the noble gas radon. His experiments on radioactive bodies and alpha rays took place in Montreal at McGill, in the Macdonald Laboratory. Reference: "Ernest Rutherford - biography" nobelprize.org 13 Dec 2011. <http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1908/rutherford-bio.html>
Hans Geiger and Ernest Marsden
Hans Geiger is best known as the co-inventor of the Geiger counter and for the Geiger-Marsden experiment, which discovered the atomic nucleus. In 1902 Geiger started studying physics and mathematics at the University of Erlangen.
In 1909, he and Ernest Marsden conducted the famous Geiger-Marsden experiment, also called the gold foil experiment. Together they created the Geiger counter. In 1911, Geiger and John Mitchell Nuttall discovered the Geiger-Nuttall law (or rule) and performed experiments that led to Rutherford's atomic model. In 1928 Geiger and his student Walther Müller created an improved version of the Geiger counter, the Geiger-Müller counter. Reference: <http://en.wikipedia.org/wiki/Hans_Geiger>
Ernest Marsden, who studied at the University of Manchester under Ernest Rutherford and Hans Geiger, contributed to Rutherford's work on the structure of the atom. During the early 1900s, Marsden observed that a tiny fraction of alpha particles fired at a thin gold foil were deflected straight back; Rutherford used these results to determine a new structure of the atom. (1) Marsden and Geiger continued their study of alpha particles and later, in 1913, correlated the nuclear charge with the atomic number. (2) Hans Geiger and Ernest Marsden discovered that the nucleus of an atom accounts for most of the atom's mass, but very little of its size. An atom is mostly empty space with a small, dense nucleus.
Reference: "Ernest Marsden" Chemistry Encyclopedia. 23 September 2011. [6] (1)
Reference: "Ernest Marsden" National Library of New Zealand. 23 September 2011. [7] (2)
Robert Millikan
In 1909, Robert Millikan designed an experiment that would allow him to measure an electron's charge. In the experiment, a fine spray of oil was ejected above a pair of metal plates (the top of the two had a small hole). As the mist settled, some of the oil dripped through the hole and into the space between the plates. Millikan illuminated these drops with X-rays, which knocked electrons off molecules in the air; these electrons then attached themselves to the oil, giving the drops an electrical charge.
By measuring how fast the drops fell when the metal plates were charged and when they were not, Millikan could determine the charge that each drop possessed. After reviewing his results, Millikan found that all of the values he obtained were whole-number multiples of -1.60 × 10^-19 C. Since a drop of oil can logically only hold a whole number of electrons, that value must be the charge carried by each electron. Once Millikan had measured the electron's charge, he then found the mass using J. J. Thomson's charge-to-mass ratio. The mass was determined to be 9.09 × 10^-28 g. Since this experiment, other scientists have found the more accurate mass of the electron to be 9.109383 × 10^-28 g. Reference: Brady, James E. Chemistry: Matter and Its Changes. 5th Edition. New York: John Wiley & Sons, Inc.
Niels Bohr
Niels Bohr first performed experiments under J. J. Thomson and later went to study under Ernest Rutherford. After studying and experimenting with Rutherford, Niels Bohr published his model of atomic structure in 1913, which introduced the theory of electrons traveling in orbits around the atom's nucleus, with the chemical properties of the element being largely determined by the number of electrons in the outer orbits. Niels Bohr also introduced the idea that an electron could drop from a higher-energy orbit down to a lower-energy orbit, emitting a photon of discrete energy. This idea became the basis of the quantum theory.
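The photon-emission idea just described can be made quantitative. The numbers below are standard textbook values for the Bohr model of hydrogen (an energy of -13.6 eV/n² for the nth orbit), not figures taken from this page; a sketch in Python:

```python
# Bohr model of hydrogen (standard textbook values, assumed here):
RYDBERG_EV = 13.6   # approximate ionization energy of hydrogen, in eV
EV_NM = 1239.84     # hc expressed in eV*nm (approximate)

def level_energy(n):
    """Energy of the nth Bohr orbit in eV (negative means bound)."""
    return -RYDBERG_EV / n**2

def emitted_wavelength_nm(n_high, n_low):
    """Wavelength of the photon emitted when an electron drops
    from orbit n_high down to orbit n_low."""
    photon_ev = level_energy(n_high) - level_energy(n_low)  # energy released
    return EV_NM / photon_ev

# The n=3 -> n=2 drop gives the red Balmer line of hydrogen (~656 nm).
print(round(emitted_wavelength_nm(3, 2)))
```

Each allowed drop between orbits gives one discrete line in the hydrogen spectrum, which is exactly the "photon of discrete energy" idea above.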
Niels Bohr contributed to chemistry and physics in the following ways: Bohr's model, the theory that electrons travel in discrete orbits around the atom's nucleus; the shell model of the atom, in which the chemical properties of an element are determined by the electrons in the outermost orbit; the correspondence principle, the basic tool of the old quantum theory; the liquid drop model of the atomic nucleus; identifying the isotope of uranium responsible for slow-neutron fission; much work on the Copenhagen interpretation of quantum mechanics; and the principle of complementarity, that items could be separately analyzed as having several contradictory properties. Reference: <http://en.wikipedia.org/wiki/Niels_Bohr>
Bohr also conceived the principle of complementarity: that items could be separately analyzed as having several contradictory properties. For example, physicists currently conclude that light behaves either as a wave or a stream of particles depending on the experimental framework (two apparently mutually exclusive properties) on the basis of this principle. Reference: <http://en.wikipedia.org/wiki/Niels_Bohr>
The Bohr model, devised by Niels Bohr, depicts the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits around the nucleus, similar in structure to the solar system but with electrostatic forces providing attraction rather than gravity. This was an improvement on the earlier cubic model (1902), the plum-pudding model (1904), the Saturnian model (1904), and the Rutherford model (1911). Since the Bohr model is a quantum physics-based modification of the Rutherford model, many sources combine the two, referring to the Rutherford-Bohr model. Reference: <http://en.wikipedia.org/wiki/Bohr_model>
Niels Bohr's model of the atom rested on six assumptions: (1) The electron travels around the nucleus in a circular orbit. (2) The energy of the electron in an orbit is proportional to its distance from the nucleus.
(3) Only a limited number of orbits with certain energies are allowed. (4) The only orbits that are allowed are those for which the angular momentum of the electron is a multiple of Planck's constant divided by 2π. (5) Light is absorbed when an electron jumps to a higher-energy orbit and emitted when an electron falls into a lower-energy orbit. (6) The energy of the light emitted or absorbed is exactly equal to the difference between the energies of the orbits. Reference: <http://chemed.chem.purdue.edu/genchem/history/bohr.html>
James Chadwick
James Chadwick was born in Bollington, Cheshire, the son of John Joseph Chadwick and Anne Mary Knowles. He went to Bollington Cross C of E Primary School, attended Manchester High School, and studied at the Universities of Manchester and Cambridge. In 1913 Chadwick went to work with Hans Geiger at the Technical University of Berlin. He also worked with Ernest Rutherford. He was in Germany at the start of World War I and was interned in the Ruhleben P.O.W. camp just outside Berlin. During his internment he had the freedom to set up a laboratory in the stables. With the help of Charles Ellis he worked on the ionization of phosphorus and also on the photo-chemical reaction of carbon monoxide and chlorine. He spent most of the war years in Ruhleben, until Geiger's laboratory interceded for his release. In 1932 Chadwick made a fundamental discovery in the domain of nuclear science: he discovered the particle in the nucleus of an atom that became known as the neutron because it has no electric charge. In contrast with helium nuclei (alpha particles), which are positively charged and therefore repelled by the considerable electrical forces present in the nuclei of heavy atoms, this new tool of atomic disintegration need not overcome any Coulomb barrier and is capable of penetrating and splitting the nuclei of even the heaviest elements. In this way, Chadwick prepared the way towards the fission of uranium-235.
For this important discovery he was awarded the Hughes Medal of the Royal Society in 1932, and subsequently the Nobel Prize for Physics in 1935. Chadwick's discovery made it possible to create elements heavier than uranium in the laboratory. His discovery particularly inspired Enrico Fermi, the Italian physicist and Nobel laureate, to discover nuclear reactions brought about by slowed neutrons, and led Otto Hahn and Fritz Strassmann, German radiochemists in Berlin, to the revolutionary discovery of "nuclear fission". Chadwick became professor of physics at Liverpool University in 1935. As a result of the Frisch-Peierls memorandum in 1940 on the feasibility of an atomic bomb, he was appointed to the MAUD Committee, which investigated the matter further. He visited North America as part of the Tizard Mission in 1940 to collaborate with the Americans and Canadians on nuclear research. Returning to England in November 1940, he concluded that nothing would emerge from this research until after the war. In December 1940 Franz Simon, who had been commissioned by MAUD, reported that it was possible to separate the isotope uranium-235. Simon's report included cost estimates and technical specifications for a large uranium enrichment plant. James Chadwick later wrote that it was at that time that he "realized that a nuclear bomb was not only possible, it was inevitable. I had then to take sleeping pills. It was the only remedy." Source: <http://en.wikipedia.org/wiki/James_Chadwick>
Wolfgang Pauli
Wolfgang Pauli received the Nobel Prize in Physics in 1945 for the discovery of the exclusion principle, also called the Pauli principle. The principle was proposed in 1925 as an assertion that no two electrons in an atom can be at the same time in the same state or configuration. This was to account for the observed patterns of light emission from atoms. The exclusion principle has subsequently been generalized to include the whole class of particles called fermions.
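One consequence of the exclusion principle is the shell-by-shell buildup of electron configurations: each orbital holds at most two electrons. A minimal sketch, filling subshells in the textbook Madelung (n + l) order — that ordering rule, and its known exceptions such as chromium and copper, are standard chemistry assumed here, not taken from this page:

```python
def electron_configuration(z):
    """Fill z electrons into subshells, at most 2 per orbital
    (the Pauli limit), in Madelung (n + l) order."""
    # All subshells (n, l) for n up to 7, ordered by n+l, then by n.
    subshells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    letters = "spdfghi"
    parts = []
    remaining = z
    for n, l in subshells:
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)   # 2l+1 orbitals, 2 electrons each
        fill = min(capacity, remaining)
        parts.append(f"{n}{letters[l]}{fill}")
        remaining -= fill
    return " ".join(parts)

# Argon (Z = 18): the shells fill as 2, 8, 8.
print(electron_configuration(18))  # 1s2 2s2 2p6 3s2 3p6
```

Without the two-electrons-per-orbital limit, every electron would fall into the lowest orbit and the periodic table's structure would disappear, which is the point Pauli's principle explains.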
The Pauli exclusion principle indicates that only two electrons are allowed in each atomic energy state, leading to the successive buildup of orbitals around the nucleus. This prevents matter from collapsing to an extremely dense state. Before Pauli's suggested explanation, it was believed that there were only 3 quantum numbers. In late 1924, he suggested adding a fourth quantum number. The first three quantum numbers made sense physically, since they related to the electron's motion around the nucleus, a property which the new quantum number did not fit. Pauli called his new quantum property of the electron a "two-valuedness not describable classically." This proposed fourth quantum number puzzled physicists at the time because no one could explain its physical significance. Even Pauli himself was troubled by the idea, and by the fact that he couldn't give any logical explanation for the exclusion principle or derive it from other laws of quantum mechanics; he remained unhappy about this problem. Nonetheless, the principle worked: it explained the structure of the periodic table and is essential for explaining other properties of matter. 1. Gavryushin & Zukauskas, "Pauli Exclusion Principle." (18 Apr 2002). 23 Oct 2008. 2. Massimi, Michela (2005). Pauli's Exclusion Principle. Cambridge University Press. ISBN 0-521-83911-4. 3. Tretkoff, Ernie. "This Month in Physics History. January 1925: Wolfgang Pauli announces the exclusion principle." Ed. Alan Chodos. American Physical Society. Web. 6 Oct. 2015. http://www.aps.org/publications/apsnews/200701/history.cfm Albert Einstein Albert Einstein is known for his theory of relativity and mass–energy equivalence, E = mc². Einstein contributed to physics through his special theory of relativity, his general theory of relativity, and a new theory of gravitation.
He also contributed to advances in the fields of relativistic cosmology, capillary action, critical opalescence, classical problems of statistical mechanics and quantum theory. Einstein published over 300 scientific works and over 150 non-scientific works. http://en.wikipedia.org/wiki/Albert_Enstein After Einstein was finished with his theory of relativity, his research consisted of attempts to generalize his theory of gravitation. This is because he wanted to unify and simplify the fundamental laws of physics, in particular gravitation and electromagnetism. He described the unified field theory in 1950; however, he was never able to successfully unify the laws of physics under a single model. Because Einstein focused his later work on this unsuccessful model, he ignored the mainstream developments within his field that many people believed he could have clarified. "Albert Einstein." Wikipedia-the Free Encyclopedia. 17 Dec 2008. Einstein attended school at the Luitpold Gymnasium in Munich. When he moved to Italy, he continued his schooling at Aarau, Switzerland. Einstein was trained as a physics and mathematics teacher at the Swiss Federal Polytechnic School in Zurich. He obtained his diploma in 1901, and he finished his doctorate degree in 1905. Einstein's special theory of relativity came from his attempt to join the laws of mechanics with the laws of the electromagnetic field. His study of the problems of statistical mechanics and of how they merged with quantum theory is what led to his explanation of the Brownian movement of molecules. Einstein's observations of the thermal properties of light at low radiation density laid the foundation for the photon theory of light. Although Einstein is thought of as only a man of science, he was also active in politics. After WWII, he was a leading figure in the World Government Movement and he was also offered the presidency of Israel.
Einstein’s works were recognized with awards such as the Copley Medal of the Royal Society of London, and the Franklin Medal of the Franklin Institute. "Albert Einstein - Biographical." Nobelprize.org. Nobel Media AB 2013. Web. 29 Sep 2013. <http://www.nobelprize.org/nobel_prizes/physics/laureates/1921/einstein-bio.html> Erwin Schrodinger Erwin Rudolf Josef Alexander Schrödinger (12 August 1887 – 4 January 1961) was an Austrian physicist who achieved fame for his contributions to quantum mechanics, especially the Schrödinger equation, for which he received the Nobel Prize in 1933. In 1935, after extensive correspondence with personal friend Albert Einstein, he proposed the Schrödinger's cat thought experiment. He became the assistant to Max Wien, in Jena, and in September 1920 he attained the position of ao. Prof. (Ausserordentlicher Professor), roughly equivalent to Reader (UK) or associate professor (US), in Stuttgart. In 1921, he became o. Prof. (Ordentlicher Professor, i.e. full professor), in Breslau (now Wrocław, Poland). In 1921, he moved to the University of Zürich. In January 1926, Schrödinger published in the Annalen der Physik the paper "Quantisierung als Eigenwertproblem" [tr. Quantisation as an Eigenvalue Problem] on wave mechanics and what is now known as the Schrödinger equation. In this paper he gave a "derivation" of the wave equation for time independent systems, and showed that it gave the correct energy eigenvalues for the hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century, and created a revolution in quantum mechanics, and indeed of all physics and chemistry. A second paper was submitted just four weeks later that solved the quantum harmonic oscillator, the rigid rotor and the diatomic molecule, and gives a new derivation of the Schrödinger equation. 
A third paper in May showed the equivalence of his approach to that of Heisenberg and gave the treatment of the Stark effect. A fourth paper in this most remarkable series showed how to treat problems in which the system changes with time, as in scattering problems. These papers were the central achievement of his career and were at once recognized as having great significance by the physics community. In 1927, he succeeded Max Planck at the Friedrich Wilhelm University in Berlin. In 1933, however, Schrödinger decided to leave Germany; he disliked the Nazis' anti-semitism. He became a Fellow of Magdalen College at the University of Oxford. Soon after he arrived, he received the Nobel Prize together with Paul Adrien Maurice Dirac. His position at Oxford did not work out; his unconventional personal life (Schrödinger lived with two women) was not met with acceptance. In 1934, Schrödinger lectured at Princeton University; he was offered a permanent position there, but did not accept it. Again, his wish to set up house with his wife and his mistress may have posed a problem. He had the prospect of a position at the University of Edinburgh, but visa delays occurred, and in the end he took up a position at the University of Graz in Austria in 1936. In the midst of these tenure issues in 1935, after extensive correspondence with his personal friend Albert Einstein, he proposed the Schrödinger's cat thought experiment. Source of Material- http://en.wikipedia.org/wiki/Erwin_Schrodinger Louis de Broglie In 1924, his doctoral thesis, Research on Quantum Theory, introduced the theory of electron waves. This research included the wave-particle duality theory of matter. This theory put forward the de Broglie hypothesis, which stated that any moving particle or object had an associated wave. He based this work on the work of Albert Einstein and Planck. From this research, he created a new branch of physics called wave mechanics. This branch combined the physics of light and matter.
Also in the application of his work, he further developed the use of electron microscopes to get much better resolution than optical ones. The reason for this was the shorter wavelengths of electrons compared with photons. Louis de Broglie's contributions to chemistry: -came up with the de Broglie hypothesis in the wave-particle duality theory -stated that any moving particle or object had an associated wave -created a new branch of physics consisting of the physics of light and matter, called wave mechanics -further developed the use of the electron microscope References: "Louis de Broglie." Wikipedia, the free encyclopedia. 17 Nov 2008. <http://en.wikipedia.org/wiki/Louis_de_Broglie> Max Planck Max Planck was born on April 23, 1858 in Kiel, Germany. He studied at the Universities of Munich and Berlin, and received his doctorate in philosophy at Munich in 1879. He started his work on thermodynamics, on which he published papers. Around 1894, he developed an interest in the problems of radiation processes. He was led to the problem of the distribution of the energy in the spectrum of full radiation. He observed the wavelength distribution of the energy emitted by a black body to deduce the relationship between the energy and the frequency of radiation. In 1900 he announced that the energy emitted by a resonator could only take on discrete values or quanta. The energy for a resonator of frequency ν is hν, where h is a universal constant, now called Planck's constant. His original value was quoted as 6.55×10−27 erg·sec; however, as of March 3, 2014, it is best defined as 6.62606957×10−34 J·s. Planck's work on quantum theory was published in the Annalen der Physik. He also won the Royal Society's Copley Medal in 1928. He died on October 4, 1947 in Göttingen. Max Planck: The Nobel Prize in Physics 1918. The Nobel Foundation. 13 October 2008. http://nobelprize.org/nobel_prizes/physics/laureates/1918/planck-bio.html Johannes Balmer He was born May 1, 1825, Lausanne, Switz.
He died March 12, 1898. During his schooling he excelled in mathematics, and so decided to focus on that field when he attended university. He studied at the University of Karlsruhe and the University of Berlin, then completed his Ph.D. at the University of Basel in 1849. Despite being a mathematician, he is not remembered for any work in that field; rather, his major contribution (made at the age of sixty, in 1885) was an empirical formula for the visible spectral lines of the hydrogen atom. Using Ångström's measurements of the hydrogen lines, he arrived at a formula for the wavelength: λ = h·m²/(m² − n²), where n = 2, h = 3.6456×10−7 m, and m = 3, 4, 5, 6, and so forth. In his 1885 paper he referred to h as the "fundamental number of hydrogen." His formula was later generalized by Johannes Rydberg into a formula for the wavenumber (inverse wavelength). Reference: Balmer, Johannes. Wikipedia, The Free Encyclopedia. 23 Sept. 2008. Date accessed 3 Dec. 2008. http://en.wikipedia.org/wiki/Johann_Jakob_Balmer Johannes Rydberg Johannes Robert Rydberg was a Swedish physicist mainly known for devising the Rydberg formula, in 1888, which is used to predict the wavelengths of photons (of light and other electromagnetic radiation) emitted by changes in the energy level of an electron in an atom. The physical constant known as the Rydberg constant is named after him, as is the Rydberg unit. Excited atoms with very high values of the principal quantum number, represented by n in the Rydberg formula, are called Rydberg atoms, and a crater on the moon is also named Rydberg in his honour. Rydberg's faith that spectral studies could assist in a theoretical understanding of the atom and its chemical properties was justified in 1913 by the work of Niels Bohr (see hydrogen spectrum). An important spectroscopic constant based on a hypothetical atom of infinite mass is called the Rydberg (R) in his honor.
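Balmer's empirical formula, λ = h·m²/(m² − n²) with n = 2 and h = 3.6456×10⁻⁷ m (the values given above), can be evaluated directly. A minimal sketch (not from the original sources) reproducing the four visible hydrogen lines:

```python
# Sketch: evaluate Balmer's empirical formula for the visible hydrogen lines.
# lambda = h * m^2 / (m^2 - n^2), with n = 2 and h = 3.6456e-7 m.
H_BALMER = 3.6456e-7  # Balmer's "fundamental number of hydrogen", in metres

def balmer_wavelength(m, n=2):
    """Wavelength (in metres) of the hydrogen line for integer m > n."""
    return H_BALMER * m**2 / (m**2 - n**2)

for m in (3, 4, 5, 6):
    nm = balmer_wavelength(m) * 1e9
    print(f"m = {m}: {nm:.1f} nm")
# m = 3 gives ~656.2 nm (the red H-alpha line), m = 4 gives ~486.1 nm, etc.
```

The same numbers fall out of Rydberg's later wavenumber form, 1/λ = R(1/2² − 1/m²), since R is just 4/h in Balmer's notation.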
Source of Material - http://en.wikipedia.org/wiki/Johannes_Rydberg By examining all the lines in the spectrum of the hydrogen atom, an empirical model was derived that explained the pattern of the emission. The specific wavelengths (or frequencies or energies) could be predicted based upon a constant and two integers. The interpretation was that one integer represented the initial state and one integer the final state. The wavelength (or frequency or energy) was related to the change that occurred moving between these two states. The original formula related inverse wavelength (known as "wavenumber") to the integers that were related to the initial and final states: 1/λ = R(1/nf² − 1/ni²), where R is equal to 1.097×10⁷ m⁻¹ and nf and ni are integers that describe the final and initial states of the electron. While the original formula derived by Rydberg did not deal directly in energies, we can rewrite the formula in those units. Under these conditions, the change in energy of the electron is given by ΔE = R(1/nf² − 1/ni²), where the constant R (now in energy units) is equal to 2.178×10⁻¹⁸ J. Source Website- http://ch301.cm.utexas.edu/section2.php?target=atomic/H-atom/rydberg.html Rydberg decided to use the wave number as a measure of frequency in his calculations. A wave number is the reciprocal of wavelength. What Rydberg did not know at the time was that it was directly related to energy. With the change, patterns began to emerge in the data, with a particular series of lines for any atom leading to a hyperbolic relationship. Source Material: C. Davisson and L. H. Germer; G. P. Thomson Three years after de Broglie asserted that particles of matter could possess wavelike properties, the diffraction of electrons from the surface of a solid crystal was experimentally observed by C. J. Davisson and L. H. Germer of the Bell Telephone Laboratory. In 1927 they reported their investigation of the angular distribution of electrons scattered from nickel.
With careful analysis, they showed that the electron beam was scattered by the surface atoms on the nickel at the exact angles predicted for the diffraction of x-rays according to Bragg's formula, with a wavelength given by the de Broglie equation, λ = h / mv. Also in 1927, G. P. Thomson, the son of J. J. Thomson, reported his experiments, in which a beam of energetic electrons was diffracted by a thin foil. Thomson found patterns that resembled the x-ray patterns made with powdered (polycrystalline) samples. This kind of diffraction, by many randomly oriented crystalline grains, produces rings. If the wavelength of the electrons is changed by changing their incident energy, the diameters of the diffraction rings change proportionally, as expected from Bragg's equation. 1. Colwell, Catherine H. "Famous Experiments: Davisson-Germer." Physics Lab. 2008. 23 Oct 2008 <http://dev.physicslab.org/Document.aspx?doctype=3&filename=AtomicNuclear_DavissonGermer.xml>. Werner Heisenberg In 1925, Heisenberg, with mathematical help from Max Born, developed the first version of quantum mechanics, a matrix method of calculating the behavior of electrons and other subatomic particles. The method was superseded as a practical tool soon after by the more intuitive wave equation of Erwin Schrödinger, but matrix mechanics remains a great intellectual accomplishment. In 1927, the German physicist Werner Heisenberg showed mathematically that it is impossible to measure with complete precision both a particle's velocity and position at the same instant. To measure an electron's position or velocity, we have to bounce another particle off it. Thus, the very act of making the measurement changes the electron's position and velocity. We cannot determine both exact position and exact velocity simultaneously, no matter how cleverly we make the measurements. This was Heisenberg's famous uncertainty principle.
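A rough numerical sketch makes the scale of the uncertainty principle concrete. Assuming the usual modern statement Δx·Δp ≥ ħ/2 (which is not spelled out in the text above), the minimum velocity uncertainty for an electron confined to atomic dimensions is enormous, while for an everyday object it is utterly negligible; the numbers here are illustrative choices, not values from the source:

```python
# Sketch: minimum velocity uncertainty implied by Δx·Δp ≥ ħ/2,
# comparing an electron confined to an atom with an everyday object.
HBAR = 1.054571817e-34  # reduced Planck constant, J·s

def min_velocity_uncertainty(mass_kg, dx_m):
    """Smallest Δv (m/s) allowed when position is known to within dx_m."""
    return HBAR / (2 * mass_kg * dx_m)

# Electron (9.109e-31 kg) localized to ~1e-10 m (atomic size):
dv_electron = min_velocity_uncertainty(9.109e-31, 1e-10)
# A 1 g bead localized to 1 micrometre:
dv_bead = min_velocity_uncertainty(1e-3, 1e-6)
print(f"electron: dv >= {dv_electron:.3g} m/s")  # ~5.8e5 m/s: huge
print(f"bead:     dv >= {dv_bead:.3g} m/s")      # ~5.3e-26 m/s: negligible
```

This is exactly why, as the next passage notes, the limitation is irrelevant for large objects but forces a probabilistic description of electrons in atoms.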
The theoretical limitations on measuring speed and position are not significant for large objects. For small particles such as the electron, however, these limitations prevent us from ever knowing or predicting where in an atom an electron will be at a particular instant, so we speak of probabilities instead. A few years later he introduced a new quantum number called isotopic spin, a quantum-mechanical variable resembling the angular momentum vector in algebraic structure, whose third component distinguished between members of groups of elementary particles. He continued to contribute to particle physics, introducing useful computational techniques in the 1950s. Consequently, wave mechanics describes the probable locations of electrons in atoms. Wave mechanics views the probability of finding an electron at a given point in space as equal to the square of the amplitude of the electron wave at that point. In each orbital the electron is conveniently viewed as an electron cloud with a varying electron density. 2. "Werner Karl Heisenberg." Answers.com. 2008. Answers Corporation. 23 Oct 2008 <http://www.answers.com/topic/werner-heisenberg>. Fritz London Fritz Wolfgang London, a theoretical physicist, was born in Breslau, Silesia, Germany in 1900. He had a position at the University of Berlin, but lost it due to Hitler's Nazi Party racial laws in 1933. He emigrated to the United States in 1939, where he then became a professor at Duke University. With his brother Heinz, he made fundamental contributions to the theories of chemical bonding and intermolecular forces. London's early work was in the area of intermolecular forces. He observed the attraction between two rare gas atoms at a large distance from each other. This attraction is now known as the "London force". For atoms and nonpolar molecules, the London dispersion force is the only intermolecular force and is the reason that they exist in solid and liquid states.
For polar molecules, the London dispersion force is one part of the van der Waals force, alongside the permanent molecular dipole moments. Source of Material: http://en.wikipedia.org/wiki/Fritz_London The London dispersion forces, named after Fritz London, exist between atoms. They become stronger when dealing with larger atoms, more surface contact between molecules, and larger electron clouds. LDF is a weak intermolecular force that is part of the van der Waals force. LDF exists when electrons try to avoid each other. These forces are exhibited by nonpolar molecules because of the movements of the electrons within the molecules. These forces allow noble gases to be found in a liquid form because they would otherwise have no attractive forces, and would not congeal together. Although it may seem odd, these forces are weaker than ionic bonds, and even hydrogen bonds. Source of Material: http://en.wikipedia.org/wiki/London_dispersion_force - Hess's law is a law of physical chemistry created by Germain Hess. It is the basis of the Hess cycle and is used to predict the enthalpy change, by conservation of energy, regardless of the path through which it is determined. - Hess's law states that because enthalpy is a state function, the enthalpy change of a reaction is the same regardless of what pathway is taken to achieve the products. - Hess's law has also led to extensions to entropy and free energy. For example, the Bordwell thermodynamic cycle takes advantage of easily measured equilibrium and redox potentials to determine experimentally inaccessible Gibbs free energy values. - The law states that the energy change for any chemical or physical process is independent of the pathway or number of steps required to complete the process. In other words, an energy change is path independent, only the initial and final states being of importance. Citation: "Hess's law." Wikipedia-the free encyclopedia. 3 Dec. 2008.
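The path independence stated above means the enthalpy changes of the steps simply add, whatever route is taken from reactants to products. A toy sketch (the ΔH values are hypothetical, chosen only for illustration):

```python
# Sketch of Hess's law: because enthalpy is a state function, the total
# ΔH for a reaction is the sum of the ΔH of its steps, independent of path.
def hess_total(step_enthalpies_kj):
    """Total ΔH (kJ) for a reaction assembled from the given steps."""
    return sum(step_enthalpies_kj)

# Hypothetical two-step path:  A + B -> AB (ΔH1),  AB + B -> AB2 (ΔH2)
dH1, dH2 = -50.0, -30.0
# Overall reaction:  A + 2B -> AB2
total = hess_total([dH1, dH2])
print(total)  # -80.0 kJ, the same as for any other path A + 2B -> AB2
```

Any other sequence of steps that starts at A + 2B and ends at AB2 must sum to the same −80.0 kJ; that is the whole content of the law.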
---An illustration of Hess's law which would be incorporated in thermochemical equations: • A + B = AB, ΔH₁ • AB + B = AB₂, ΔH₂ • So the following is true: A + 2B = AB₂, ΔH₁₂ = ΔH₁ + ΔH₂ Citation: Chung Chieh, "Hess's Law." CAcT-Computer Assisted Chemistry Tutor. 15 Oct. 2015 [http://www.science.uwaterloo.ca/~cchieh/cact/c120/hess.html] Theodore W Richards Theodore W. Richards was born in Germantown, Pennsylvania in 1868. When Theodore was only 10 years old, his family moved to Europe for a few years, which greatly influenced his interest in science. Upon returning to the United States, he entered Haverford College at age 14, and eventually graduated from Harvard. He received his PhD in chemistry and began his research by working with oxygen and copper and eventually developed a new way to determine atomic weights. By 1912, he had determined over 30 atomic weights with the highest degree of accuracy. By the end of his chemistry career, he and his students had determined over 55 atomic weights that are still used today. He played a large part in modernizing the concept of an atom. He also did research on atomic and molecular volume. Richards was responsible for introducing the use of transition temperatures and using hydrated salts as fixed points in standard thermometers. However, his biggest achievement was becoming the first American scientist to win the Nobel Prize in Chemistry, in 1914. Today, there is an award known as the Theodore William Richards Medal for Conspicuous Achievement in Chemistry. - "Theodore W. Richards." Nobel Lectures. The Nobel Foundation. 9 Dec 2008 <http://nobelprize.org/nobel_prizes/chemistry/laureates/1914/richards-bio.html>. -"Theodore W. Richards." Wikipedia, the free encyclopedia. 16 Sept 2008. 11 Dec 2008 <http://en.wikipedia.org/wiki/Theodore_William_Richards>. -"The Theodore William Richards Medal." The Northeastern Section of the American Chemical Society. 2008. 11 Dec 2008 <http://www.nesacs.org>.
Submitted by mstrait1 on Henry Moseley was a visionary who worked in physics and on theories that went on to change how people approached atoms and molecules. Moseley's contributions had a huge effect on modern science. Henry Moseley's most important work was the establishment of "Moseley's law," which showed that the atomic number of an element is equivalent to the charge inside the nucleus of its atom. Henry Moseley decided to join the British Army during World War 1; however, he was killed in August of 1915. Moseley was awarded the Matteucci Medal in 1919 for his contributions to physics and chemistry. Reference: "Henry Moseley Biography." - Childhood, Life Achievements & Timeline. Web. 6 Oct. 2015. <http://www.thefamouspeople.com/profiles/henry-moseley-6556.php>.
Explaining the Blackett Sculpture Located high above our heads over the Blackett laboratory entrance is a large relief sculpture that often goes unnoticed. The sculpture was installed in 1958 when the building was opened. The content is a colourful assortment of the state of physics knowledge at the time: nine images and four blocks of densely packed equations and scientific data - some expressed in ways that would not be familiar to contemporary physicists. There seems to be very little information about the work in the college archives, except to say that the sculpture in Irish limestone is by John Skeaping, who was Professor of Sculpture at the RCA at the time and most known for his images of horses. Various conversations in recent months have decoded the work. I have had the great pleasure of talking with Tom Kibble and Norman Barford, who were present in the department when the sculpture was installed. Others have also helped - Andrew Jaffe, Chris Phillips, Steve Rose, Lady Anne Thorne. We are left with one outstanding point, which is the identification of the spectral lines - we wonder if they are too stylised to recognise. Our results are recorded below against sections of the original design. We deal with the nine images first and then the texts. Thank you to everyone who helped, with particular thanks to Tom and Norman. a. Laminar fluid flow around a smooth object. b. Spectral lines – various series. Not identified. c. Bubble chamber picture. The reaction was probably the production of neutral particles: a Lambda-0 and a K0 particle, which do not show up as visible tracks, but decay into pairs of charged particles (possibly with an extra neutral one), so they show up as V shapes pointing back towards the initial scattering vertex. a. Magnetic or electric dipole aligned with the horizontal axis, showing connecting field and crossing equipotential lines. b.
Screw dislocation, possibly related to semiconductors. Maybe silicon carbide. a. Alpha particles in a helium-filled cloud chamber. The forking tracks are scattering of the alphas and helium nuclei, which have the same mass. Probably based on pictures taken by Blackett. b. A crystal lattice, possibly GaAs. c. Larmor precession - symbolic representation of the effect of a magnetic field on an electronic orbit. The densely packed characters in the carving are separated by a colon every time the subject changes. The same format is used in the descriptions. a. There are two different decays on each line -- not only the decays of the mesons but also those of the strange baryons: Lambda0, Sigma+, Sigma- and Xi-. (These are the commoner decay modes; there are others.) b. Masses of proton, electron and neutron in electron masses. c. Here again there are two decays in each line, all of them decays of the strange mesons (kaons), K+/- and the two varieties of K0. d. The masses (in terms of electron masses) of mu+/-, pi0, pi+/- and K+/-. e. The masses of Lambda0, Sigma+, Sigma- and Xi- (in terms of electron masses). a. Newton's law of gravitation: Saha-Boltzmann equation: Ratio of the gravitational and electromagnetic forces between an electron and proton: Kepler's period of an object moving elliptically about the sun with semi-major axis 'a': Energy and momentum of a relativistic particle. b. Newton's gravitational constant: Diffraction: Intensity of Rutherford scattering at angle theta: Ideal gas law: Entropy applying to the statistical mechanics of any system (carved on Boltzmann's burial stone). c. Energy of a relativistic particle of mass m0: Relationship between specific heat at constant pressure and that at constant volume, where g is the Gibbs free energy (old-fashioned notation): Bose-Einstein (-) and Fermi-Dirac (+) statistics: Curie's law for magnetic susceptibility at low temperatures: General equation to show how a classical path minimises action. a.
Maxwell's laws of electricity and magnetism. b. Calculation of the electromotive force (v) in a circuit: The force on a charged particle in a magnetic field. c. This is a relation between two solutions, φ and ψ, of the Schrödinger equation. The second equality, between volume and surface integrals, is what is sometimes called Green's second theorem. ds is an element of surface area, and dτ an element of volume. The volume integral is over some volume V and the surface integral over its bounding surface. dn is a spatial derivative in the normal direction. d. All relate to electromagnetism: D is the electric displacement vector: B is the magnetic field: Continuity equation for energy conservation. Energy dissipation is allowed for and represented by the last term E.j. a. Schrödinger equation: Commutation relation between p and q: De Broglie relation: Mass of the electron: Energy of a photon: A peculiar way of writing the Dirac equation - β is one of the Dirac matrices. b. 1s wave function in a hydrogen-like atom of Bohr radius a0: Speed of light: Bohr's relation for the frequency of light emitted by hydrogen: Fine structure constant: Compton scattering. c. Heisenberg's uncertainty principle: Black body radiation: Fusion of deuterium and tritium to make helium and energy release: Planck's constant: The Born rule.
Copenhagen interpretation From Wikipedia, the free encyclopedia The Copenhagen interpretation is a collection of axioms or doctrines that interpret the mathematical formalism of quantum mechanics, largely devised in the years 1925–1927 by Niels Bohr and Werner Heisenberg. It is fundamental to the Copenhagen interpretation that the results of experiments must be reported in ordinary language, not relying on words that have only mathematical symbols or ordinarily undefined terms at the roots of their meaning. As one of its axioms, its most fundamental and unquestionable, the Copenhagen interpretation asserts the "postulate of the quantum": that natural change necessarily proceeds by way of indeterministic, physically discontinuous transitions between discrete stationary states. Various consequences are inferred from this postulate of unpredictable physical discontinuity. Another of its axioms is that incompatible conjugate properties cannot be defined for the same time and place; this is expressed in detail by Heisenberg's uncertainty principle. A major reason why interpretation of the quantum mechanical formalism is needed is that it provides an account that is in general not separable in time and space, because the domain of the wave function is configuration space, not ordinary physical space-time.[1] Bohr was concerned in this regard with the intrinsic link between space-time and causality. It is now part of ordinary language to speak of 'quantum jumps'. Another question considered in the Copenhagen interpretation is the wave–particle dilemma. Perhaps this is more a philosophical than a physical question. The principal objections to it are unverified speculations that it may be over-dogmatic as to the unpredictability of nature, or over-emphatic as to the discontinuity of change. Also, doubt is expressed as to the physical meaning of the wave–particle duality.
Also it is disputed that incompatible conjugate properties cannot be defined for the same time and place. In the early work of Max Planck, Albert Einstein, and Niels Bohr, the occurrence of energy in discrete quantities was postulated in order to explain phenomena such as the spectrum of black-body radiation, the photoelectric effect, and the stability and spectrum of atoms. These phenomena had eluded explanation by classical physics and even appeared to be in contradiction with it. While elementary particles show predictable properties in many experiments, they become thoroughly unpredictable in others, such as attempts to identify individual particle trajectories through a simple physical apparatus. Classical physics draws a distinction between particles and waves. It also relies on continuity, and on determinism, in natural phenomena. In the early twentieth century, newly discovered atomic and sub-atomic phenomena seemed to defy those conceptions. In 1925–1926, quantum mechanics was invented as a mathematical formalism that accurately describes the experiments without solely relying on those classical conceptions. Instead, it relies on probability as metaphysically intrinsic in nature, and on natural discontinuity. Classical physics also relies on causality. The standing of causality for quantum mechanics is disputed. Quantum mechanics cannot easily be reconciled with everyday language and observation. Its interpretation has often seemed counter-intuitive to physicists, including its inventors. Origin of the term The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s. However, no such text exists, apart from some informal popular lectures by Bohr and Heisenberg, which contradict each other on several important issues.
It appears that the particular term, with its more definite sense, was coined by Heisenberg in the 1950s,[3] while criticizing alternate "interpretations" (e.g., David Bohm's[4]) that had been developed.[5] Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy.[6] Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, which he considered to be "nonsense".[7] Current status of the term Because it consists of the views developed by a number of scientists and philosophers during the second quarter of the 20th century, there is no uniquely definitive statement of the Copenhagen interpretation.[9] Moreover, various ideas have been associated with it by different commentators and researchers; Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors.[10] Nonetheless, there are several basic principles that are generally accepted as being part of the interpretation: 1. A wave function Ψ represents the state of the system. It exhausts what can be known in advance of an observation, about a particular occasion of occurrence of a system, and beyond it there are no "hidden parameters".[11] While it is isolated from other systems, it evolves smoothly in time, but is unobservable. 2. The properties of the system, as represented in the wave function, and in physical actuality, are subject to a principle of incompatibility. The properties occur in conjugate pairs, which cannot be jointly defined for the same time and place. The incompatibility is expressed quantitatively by Heisenberg's uncertainty principle.
For example, if a particle at a particular instant has a definite location, it is meaningless to speak of its momentum at that instant.

3. For an occasion of observation, the system must interact with a laboratory device. When that device is suitably constructed, for example containing a birefringent crystal, the wave function is said to collapse, or irreversibly reduce, to an eigenstate, also called a pure case, of the observable that is registered.[12]

4. The registrations provided by observing devices are essentially classical, and must be described in ordinary language. If the device is suitably constructed, its output registration makes fair sense in terms of classical physics, and consequently the ordinary-language description is intelligible and useful in physics. This was particularly emphasized by Bohr, and was accepted by Heisenberg.[13]

5. A pure-case wave function may be considered as a coherent superposition of other compatible pure-case wave functions. This can, for example, describe the passage of the quantal system through a smooth classical magnetic field. Incompatible wave functions cannot be superposed.

6. There is a distinction between an atomic, subatomic, or quantal system on the one hand, and a laboratory-scale observing device on the other. For an observation, a particular such device must be chosen, and the quantal system must then interact with it. For example, a device might test position; a different device would be needed to test momentum. One and the same device can be used, on different occasions, to test different quantal systems, and one and the same quantal system can be tested, on different occasions, with different devices. This is implicit, for example, in the discussions offered by Bohr.[14]

7. Different wave functions can be linked in a so-called tensor product.
If the observing apparatus is considered in isolation, in a quantum-mechanical picture, it has its own wave function, separate from and incoherent with that of the quantal system being tested. When the device and the quantal system are made to interact, the two incoherent wave functions are brought into a new joint system, which needs a jointly coherent wave function: the tensor product. If the laboratory device has suitable, carefully selected properties, then wave function collapse seems plausible; again, for example, the device might be based on a birefringent crystal. A wave function collapsed to a pure case by such a suitably constructed device can be interpreted as practically or nearly free of puzzles of superposition, even though the quantal system and apparatus have become entangled, or coherent, with one another. If, instead, the observing apparatus and the quantal system under test are considered from the start only as an isolated joint entity, they have a joint wave function and must be considered as jointly coherent. In this case of an isolated joint system, wave function collapse is inconceivable; only superposition is conceivable, and observation is excluded. Two systems initially separate and then interacting, and one initially joint system in isolation, provide different pictures.

8. The description given by the wave function is probabilistic. The probability of a given outcome of a measurement is supplied by the square of the modulus of the amplitude of the wave function. This principle is called the Born rule, after Max Born.

10. In the present state of physical knowledge, the internal workings of atomic and subatomic processes are not open to visualization in ordinary space-time or causal pictures. There are also limitations on the visualizability of interactions between atomic and subatomic entities on the one hand and macroscopic apparatus on the other.
This is the fundamental reason why quantum mechanics is needed to replace the old quantum theory. It is a key concept of quantum theory, expressed in quantum mechanics by the non-separable character of the wave function, that its domain is configuration space, not ordinary physical space-time.

Metaphysics of the wave function

The Copenhagen interpretation denies that the wave function is anything more than a theoretical concept, or is at least non-committal about its being a discrete entity or a discernible component of some discrete entity. The subjective view, that the wave function is merely a mathematical tool for calculating the probabilities in a specific experiment, has some similarities to the ensemble interpretation in that it takes probabilities to be the essence of the quantum state; but unlike the ensemble interpretation, it takes these probabilities to be perfectly applicable to single experimental outcomes, interpreting them as subjective probabilities.[citation needed]

Some[who?][citation needed] say that there are objective variants of the Copenhagen interpretation that allow for a "real" wave function, but it is questionable whether that view is really consistent with some of Bohr's statements. Bohr emphasized that science is concerned with predictions of the outcomes of experiments, and that any additional propositions offered are not scientific but metaphysical. Bohr was heavily influenced by positivism (or even pragmatism). On the other hand, Bohr and Heisenberg were not in complete agreement, and they held different views at different times; Heisenberg in particular was prompted to move towards realism.[16] Even if the wave function is not regarded as real, there is still a divide between those who treat it as definitely and entirely subjective, and those who are non-committal or agnostic about the subject.
An example of the agnostic view is given by Carl Friedrich von Weizsäcker, who, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist." He suggested instead that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."[8]

Born rule

Max Born spoke of his probability interpretation as a "statistical interpretation" of the wave function,[17][18] and the Born rule is essential to the Copenhagen interpretation. But writers do not all follow the same terminology: it is common to encounter the term 'statistical interpretation' as indicating an interpretation distinct from the Copenhagen interpretation.[19][20] For the Copenhagen interpretation it is axiomatic that the wave function exhausts all that can ever be known in advance about any particular occasion of its occurrence. The alternative, so-called statistical or ensemble, interpretation is, by contrast, explicitly agnostic about whether the information in the wave function is exhaustive of what might be known in advance, seeing itself as "more nearly minimal" than the Copenhagen interpretation. It only goes as far as saying that on every actual occasion of observation some actual property is found, and that such properties are found probabilistically, as detected by many occasions of observation of the same system. The many other occasions of occurrence of the system are said to constitute an 'ensemble', and they jointly reveal the probability. Though they all have the same wave function, the many occasional systems are not known to be identical to one another; they may, for all we know, beyond current knowledge and beyond the wave function, have individual distinguishing properties.
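The contrast drawn here, between the wave function's probabilities and the frequencies revealed by an ensemble of observations, can be sketched numerically. The snippet below is only an illustration (the state vector and sample size are arbitrary choices, not from the source): it applies the Born rule to a two-level state and then simulates many occasions of observation, each yielding one definite outcome.

```python
import numpy as np

# Born rule: for a normalized state vector, the probability of
# outcome k is the squared modulus of amplitude k.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # a two-level (qubit) state
probs = np.abs(psi) ** 2                   # P(k) = |psi_k|^2, here ~[0.5, 0.5]

# An 'ensemble' of repeated observations reveals these probabilities
# as relative frequencies, one definite outcome per occasion.
rng = np.random.default_rng(0)
outcomes = rng.choice(len(psi), size=100_000, p=probs)
freq = np.bincount(outcomes) / outcomes.size
```

The simulated frequencies converge to the Born probabilities, while nothing in the simulation distinguishes the individual occasions beyond their outcomes, which is exactly the ensemble interpretation's minimal reading.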
For present science, the experimental meaning is the same, since a particular actual occasion of occurrence of the system is unique in all the world, and its unobserved or unactualized potential properties are not found in an experiment.

Nature of collapse

According to Howard, wave function collapse is not mentioned in the writings of Bohr.[3] In 1952 David Bohm developed decoherence, an explanatory mechanism for the appearance of wave function collapse. Bohm applied decoherence to Louis de Broglie's pilot wave theory, producing Bohmian mechanics,[24][25] the first successful hidden-variables interpretation of quantum mechanics. Decoherence was then used by Hugh Everett in 1957 to form the core of his many-worlds interpretation.[26] However, decoherence was largely[27] ignored until the 1980s.[28][29]

Non-separability of the wave function

The domain of the wave function is configuration space, an abstract object quite different from ordinary physical space-time. At a single "point" of configuration space, the wave function collects probabilistic information about several distinct particles that have mutually space-like separation. The wave function is therefore said to supply a non-separable representation. This reflects a feature of the quantum world that was recognized by Einstein as early as 1905.

In 1927, Bohr drew attention to a consequence of non-separability. The evolution of the system, as determined by the Schrödinger equation, does not display particle trajectories through space-time. It is possible to extract trajectory information from such evolution, but not simultaneously to extract energy-momentum information. This incompatibility is expressed in the Heisenberg uncertainty principle. The two kinds of information have to be extracted on different occasions, because of the non-separability of the wave function representation. In Bohr's thinking, space-time visualizability meant trajectory information.
Again, in Bohr's thinking, 'causality' referred to energy-momentum transfer; in his view, lack of energy-momentum knowledge meant lack of 'causality' knowledge. Therefore Bohr held that knowledge of 'causality' and knowledge of space-time visualizability were incompatible but complementary.[3]

Wave–particle dilemma

The term 'Copenhagen interpretation', it seems, was invented by Heisenberg in 1955. It is often assumed that the interpretation was agreed between Bohr and Heisenberg, with perhaps Born included. When one asks about the wave–particle dilemma, however, the term is not well defined, because Bohr and Heisenberg held different, perhaps disagreeing, views on it. Which, then, is the true 'Copenhagen' position, the true "orthodoxy"? According to Camilleri, Bohr thought that the distinction between a wave view and a particle view was defined by a distinction between experimental set-ups, while Heisenberg thought that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental set-up would display either a wave picture or a particle picture, but not both; Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.[30][31]

More precisely, Heisenberg formed his view not about quantum mechanics as such, but about quantum field theory. These are two fundamentally different theories: quantum mechanics is about wave functions whose domain is configuration space and which are not separable, while quantum field theory is about functions whose domain is ordinary physical space-time, with the quantum features embodied in the values of the functions (their range), not their domain.
Since it is very important to recognize that the domain of the quantum-mechanical wave function is not ordinary space-time, it is consequently important in the present context to recognize that quantum mechanics and quantum field theory are different theories. One is thus left in a dilemma as to whether the 'Copenhagen interpretation' is that of Bohr (one or the other) or that of Heisenberg (always both).

Acceptance among physicists

Throughout much of the twentieth century the Copenhagen interpretation had overwhelming acceptance among physicists. Although astrophysicist and science writer John Gribbin described it as having fallen from primacy after the 1980s,[34] according to a poll conducted at a quantum mechanics conference in 1997,[35] the Copenhagen interpretation remained the most widely accepted specific interpretation of quantum mechanics among physicists. More recent polls conducted at various quantum mechanics conferences have found varying results;[36][37][38] often, as in the four referenced sources, acceptance of the Copenhagen interpretation as the preferred view of the underlying nature was below 50% among those surveyed.

The nature of the Copenhagen interpretation is illustrated by considering a number of experiments and paradoxes.

1. Schrödinger's cat. The Copenhagen interpretation: the wave function reflects our knowledge of the system. The wave function (|dead⟩ + |alive⟩)/√2 means that, once the cat is observed, there is a 50% chance it will be dead, and a 50% chance it will be alive.

2. Wigner's friend. Wigner puts his friend in with the cat. The external observer believes the system is in the state (|dead⟩ + |alive⟩)/√2. His friend, however, is convinced that the cat is alive, i.e. for him the cat is in the state |alive⟩. How can Wigner and his friend see different wave functions?

3.
Double-slit diffraction. The Copenhagen interpretation: light is neither simply a wave nor simply a particle; a particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's complementarity principle).

4. The EPR (Einstein–Podolsky–Rosen) paradox. Entangled "particles" are emitted in a single event. Conservation laws ensure that the measured spin of one particle must be the opposite of the measured spin of the other, so that if the spin of one particle is measured, the spin of the other particle is instantaneously known. The most discomforting aspect of this paradox is that the effect is instantaneous, so that something that happens in one galaxy could cause an instantaneous change in another galaxy. But, according to Einstein's theory of special relativity, no information-bearing signal or entity can travel at or faster than the speed of light, which is finite. Thus it seems as if the Copenhagen interpretation is inconsistent with special relativity.

Copenhagenists claim that interpretations of quantum mechanics in which the wave function is regarded as real have problems with EPR-type effects, since those interpretations imply that the laws of physics allow influences to propagate faster than the speed of light. However, proponents of many worlds[42] and of the transactional interpretation[43][44] (TI) maintain that the Copenhagen interpretation is fatally non-local.

The claim that EPR effects violate the principle that information cannot travel faster than the speed of light has been countered by noting that they cannot be used for signaling, because neither observer can control, or predetermine, what he observes, and therefore cannot manipulate what the other observer measures. However, this is a somewhat spurious argument, in that the speed-of-light limitation applies to all information, not to what can or cannot subsequently be done with the information. On the other hand, the special theory of relativity contains no notion of information at all.
The fact that no classical body can exceed the speed of light (no matter how much acceleration is applied) is a consequence of classical relativistic mechanics. As the correlation between the two particles in an EPR experiment is most probably not established by classical bodies or light signals, the displayed non-locality is not at odds with special relativity.[citation needed]

A further argument against the Copenhagen interpretation is that relativistic difficulties about establishing which measurement occurred first or last, or whether they occurred at the same time, also undermine the idea that different outcomes can occur at "different" instants and measurements. The spin would be kept as a "constant" for a continuous interval of time, i.e. as a real variable, and thus would seem to violate the general rule (of the classic Copenhagen interpretation) that every measurement gives nothing other than a random outcome subject to certain probabilities.[citation needed]

The completeness of quantum mechanics (thesis 1) was attacked by the Einstein–Podolsky–Rosen thought experiment, which was intended to show that quantum physics could not be a complete theory. Many physicists and philosophers have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and because it includes an undefined measurement process that converts probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice."[46] and "Do you really think the moon isn't there if you aren't looking at it?"[47] exemplify this. Bohr, in response, said, "Einstein, don't tell God what to do."[48]

E. T. Jaynes,[50] from a Bayesian point of view, argued that probability is a measure of a state of information about the physical world. Quantum mechanics under the Copenhagen interpretation interprets probability as a physical phenomenon, which is what Jaynes called a mind projection fallacy.
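The no-signaling point made in the EPR discussion above can be checked numerically. The sketch below is only an illustration using the textbook singlet-state formula (the function names and angle values are choices of this example, not from the source): it computes the standard quantum joint probabilities for spin measurements along angles a and b, and verifies that one observer's marginal statistics are independent of the distant observer's setting, which is why the correlations cannot carry a signal.

```python
import numpy as np

def singlet_joint_probs(a, b):
    """Textbook QM joint probabilities for spin measurements along
    angles a and b on the two halves of a singlet pair:
    P(s1, s2) = (1 - s1*s2*cos(a - b)) / 4, with s1, s2 in {+1, -1}."""
    return {(s1, s2): (1 - s1 * s2 * np.cos(a - b)) / 4
            for s1 in (+1, -1) for s2 in (+1, -1)}

def marginal(probs, s1):
    # Probability that observer 1 sees outcome s1, summing over observer 2.
    return sum(p for (x1, _), p in probs.items() if x1 == s1)

# Perfect anticorrelation at equal settings...
aligned = singlet_joint_probs(0.0, 0.0)
assert aligned[(+1, +1)] == 0.0 and aligned[(-1, -1)] == 0.0

# ...yet observer 1's marginal is 1/2 whatever observer 2 chooses:
for b in (0.0, 0.7, 2.0):
    assert abs(marginal(singlet_joint_probs(0.3, b), +1) - 0.5) < 1e-12
```

The correlations depend only on the angle difference, while each local marginal stays flat at 1/2; nothing observable on one side changes when the other side's setting changes.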
Common criticisms of the Copenhagen interpretation often lead to the problem of a continuum of random occurrences: whether in time (as subsequent measurements, which under certain interpretations of the measurement problem may happen continuously) or even in space. With regard to the latter, a recent experiment has confirmed the view that a single photon might not just go simultaneously via different ways, but indeed even interact like a particle with the environment it encounters on each of the ways.[51] The basic physics of quantal momentum transfer considered here was originally pointed out in 1923, by William Duane, before quantum mechanics was invented.[32] It was later recognized by Heisenberg[52] and by Pauling.[53] It was championed against orthodox ridicule by Alfred Landé.[54] It has also recently been considered by Van Vliet.[55][56]

If such a worldview were proved better, i.e. if a particle were in fact a continuum of points capable of acting independently but under a common wave function, it would support theories such as Bohm's (with its guiding towards the centre of the orbital and spreading of physical properties over it) rather than interpretations which presuppose full randomness, because with the latter it is problematic to demonstrate universally, and in all practical cases, how a particle can remain coherent in time despite non-zero probabilities of its individual points going into regions distant from the centre of mass (through a continuum of different random determinations).[57] An alternative possibility would be to assume that there is a finite number of instants/points within a given time or area, but theories which try to quantize space or time itself seem to be fatally incompatible with special relativity.

The ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right".
Although the Copenhagen interpretation is often confused with the idea that consciousness causes collapse, it defines an "observer" merely as that which collapses the wave function.[45] Quantum information theories are more recent, and have attracted growing support.[58][59]

Under realism and indeterminism, if the wave function is regarded as ontologically real and collapse is entirely rejected, a many-worlds theory results; if wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Under realism and determinism (as well as non-localism), a hidden-variable theory exists (the de Broglie–Bohm interpretation treats the wave function as real, position and momentum as definite and resulting from the expected values, and physical properties as spread in space). For an atemporal indeterministic interpretation that "makes no attempt to give a 'local' account on the level of determinate particles",[60] the conjugate wave function ("advanced", or time-reversed) of the relativistic version of the wave function and the so-called "retarded", or time-forward, version[61] are both regarded as real, and the transactional interpretation results.[60]

Many physicists have subscribed to the instrumentalist interpretation of quantum mechanics, a position often equated with eschewing all interpretation. It is summarized by the sentence "Shut up and calculate!" While this slogan is sometimes attributed to Paul Dirac[62] or Richard Feynman, it seems to be due to David Mermin.[63]

Notes and references

1. ^ Schrödinger, E. (1928). Wave mechanics, pp. 185–206 of Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, pp. 185–186; translation at p. 447 of Bacciagaluppi, G., Valentini, A.
(2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge University Press, Cambridge UK, ISBN 978-0-521-81421-8.

3. ^ a b c Howard, Don (2004). "Who invented the Copenhagen Interpretation? A study in mythology". Philosophy of Science: 669–682. JSTOR 10.1086/425941.

4. ^ Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of "Hidden" Variables. I & II". Physical Review 85 (2): 166–193. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166.

6. ^ Werner Heisenberg, Physics and Philosophy, Harper, 1958.

8. ^ a b Cramer, John G. (July 1986). "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics 58 (3): 649. Bibcode:1986RvMP...58..647C. doi:10.1103/revmodphys.58.647.

9. ^ In fact Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics. Bohr once distanced himself from what he considered to be Heisenberg's more subjective interpretation. Stanford Encyclopedia of Philosophy.

14. ^ Ludwig, G. (1983). Foundations of Quantum Mechanics, translated by C.A. Hein, Springer, New York, ISBN 0-387-11683-4 (v. 1).

16. ^ "Historically, Heisenberg wanted to base quantum theory solely on observable quantities such as the intensity of spectral lines, getting rid of all intuitive (anschauliche) concepts such as particle trajectories in space-time. This attitude changed drastically with his paper in which he introduced the uncertainty relations – there he put forward the point of view that it is the theory which decides what can be observed. His move from positivism to operationalism can be clearly understood as a reaction on the advent of Schrödinger's wave mechanics which, in particular due to its intuitiveness, became soon very popular among physicists. In fact, the word anschaulich (intuitive) is contained in the title of Heisenberg's paper.", from Claus Kiefer (2002).
"On the interpretation of quantum theory - from Copenhagen to the present day". arXiv:quant-ph/0210152 [quant-ph].

17. ^ Born, M. (1955). Statistical interpretation of quantum mechanics, Science, 122: 675–679.

18. ^ "... the statistical interpretation, which I have first suggested and which has been formulated in the most general way by von Neumann, ..." Born, M. (1953). The interpretation of quantum mechanics, Brit. J. Phil. Sci., 4(14): 95–106.

19. ^ Ballentine, L.E. (1970). The statistical interpretation of quantum mechanics, Rev. Mod. Phys. 42: 358–381.

30. ^ Camilleri, K. (2006). Heisenberg and the wave–particle duality, Stud. Hist. Phil. Mod. Phys. 37: 298–315.

34. ^ Gribbin, J. Q for Quantum.

35. ^ Max Tegmark (1998). "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?". Fortsch. Phys. 46 (6–8): 855–862. arXiv:quant-ph/9709032. Bibcode:1998ForPh..46..855T. doi:10.1002/(SICI)1521-3978(199811)46:6/8<855::AID-PROP855>3.0.CO;2-Q.

36. ^ M. Schlosshauer; J. Kofler; A. Zeilinger (2013). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 44 (3): 222–230. arXiv:1301.1069. doi:10.1016/j.shpsb.2013.04.004.

39. ^ Erwin Schrödinger, in an article in the Proceedings of the American Philosophical Society, 124: 323–338.

40. ^ Nairz, Olaf; Brezger, Björn; Arndt, Markus; Zeilinger, Anton (2001). "Diffraction of Complex Molecules by Structures Made of Light". Physical Review Letters 87 (16). arXiv:quant-ph/0110012. Bibcode:2001PhRvL..87p0401N. doi:10.1103/PhysRevLett.87.160401.

41. ^ Brezger, Björn; Hackermüller, Lucia; Uttenthaler, Stefan; Petschinka, Julia; Arndt, Markus; Zeilinger, Anton (2002). "Matter-Wave Interferometer for Large Molecules". Physical Review Letters 88 (10): 100404. arXiv:quant-ph/0202158. Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334.

42. ^ Michael Price on nonlocality in Many Worlds.

43.
^ Relativity and Causality in the Transactional Interpretation.

44. ^ Collapse and Nonlocality in the Transactional Interpretation.

46. ^ "God does not throw dice" quote.

47. ^ A. Pais, Einstein and the quantum theory, Reviews of Modern Physics 51: 863–914 (1979), p. 907.

51. ^ Schmidt, L.P.H., Lower, J., Jahnke, T., Schößler, S., Schöffler, M.S., Menssen, A., Lévêque, C., Sisourat, N., Taïeb, R., Schmidt-Böcking, H., Dörner, R. (2013). Momentum transfer to a free floating double slit: realization of a thought experiment from the Einstein–Bohr debates, Physical Review Letters 111: 103201, 1–5. See also the article on the Bohr–Einstein debates. Likely there are even more such apparent interactions in various areas of the photon, for example when reflecting from the whole shutter.

55. ^ Van Vliet, K. (1967). Linear momentum quantization in periodic structures, Physica, 35: 97–106.

56. ^ Van Vliet, K. (2010). Linear momentum quantization in periodic structures ii, Physica A, 389: 1585–1593.

59. ^ "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". 2013-01-06. Retrieved 2013-01-25.

60. ^ a b The Quantum Liar Experiment, R. E. Kastner, Studies in History and Philosophy of Modern Physics, Vol. 41, Iss. 2, May 2010.

61. ^ The non-relativistic Schrödinger equation does not admit advanced solutions.

62. ^

63. ^ N. David Mermin. "Could Feynman Have Said This?". Physics Today 57 (5).
From Wikipedia, the free encyclopedia

For the component in an internal combustion engine, see Crankcase ventilation system.

In physics, a breather is a nonlinear wave in which energy concentrates in a localized and oscillatory fashion. This contradicts the expectations derived from the corresponding linear system for infinitesimal amplitudes, which tends towards an even distribution of initially localized energy. A discrete breather is a breather solution on a nonlinear lattice.

The term breather originates from the characteristic that most breathers are localized in space and oscillate (breathe) in time.[1] But the opposite situation, oscillations in space that are localized in time, is also denoted a breather.

[Figure: a sine-Gordon standing breather, a swinging-in-time coupled kink–antikink 2-soliton solution. Figure: a large-amplitude moving sine-Gordon breather.]

A breather is a localized periodic solution of either continuous-media equations or discrete lattice equations. The exactly solvable sine-Gordon equation[1] and the focusing nonlinear Schrödinger equation[2] are examples of one-dimensional partial differential equations that possess breather solutions.[3] Discrete nonlinear Hamiltonian lattices in many cases support breather solutions. Breathers are solitonic structures. There are two types of breathers: standing and traveling.[4] Standing breathers correspond to localized solutions whose amplitude varies in time (they are sometimes called oscillons). A necessary condition for the existence of breathers in discrete lattices is that the breather main frequency and all its multiples are located outside of the phonon spectrum of the lattice.
Example of a breather solution for the sine-Gordon equation

The sine-Gordon equation is the nonlinear dispersive partial differential equation

\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} + \sin u = 0,

with the field u a function of the spatial coordinate x and time t. An exact solution found by using the inverse scattering transform is[1]

u = 4 \arctan\left(\frac{\sqrt{1-\omega^2}\,\cos(\omega t)}{\omega\,\cosh(\sqrt{1-\omega^2}\, x)}\right),

which, for ω < 1, is periodic in time t and decays exponentially when moving away from x = 0.

Example of a breather solution for the nonlinear Schrödinger equation

The focusing nonlinear Schrödinger equation[5] is the dispersive partial differential equation

i\,\frac{\partial u}{\partial t} + \frac{\partial^2 u}{\partial x^2} + |u|^2 u = 0,

with u a complex field as a function of x and t, and i the imaginary unit. One of the breather solutions is[2]

u = \left( \frac{2\, b^2 \cosh(\theta) + 2\, i\, b\, \sqrt{2-b^2}\; \sinh(\theta)}{2\, \cosh(\theta)-\sqrt{2}\,\sqrt{2-b^2}\, \cos(a\, b\, x)} - 1 \right) a\; \exp(i\, a^2\, t),

which gives breathers periodic in space x and approaching the uniform value a when moving away from the focus time t = 0. These breathers exist for values of the modulation parameter b less than √2. Note that a limiting case of the breather solution is the Peregrine soliton.[6]

References and notes

1. ^ a b c M. J. Ablowitz; D. J. Kaup; A. C. Newell; H. Segur (1973). "Method for solving the sine-Gordon equation". Physical Review Letters 30 (25): 1262–1264. Bibcode:1973PhRvL..30.1262A. doi:10.1103/PhysRevLett.30.1262.

2. ^ a b N. N. Akhmediev; V. M. Eleonskiǐ; N. E. Kulagin (1987). "First-order exact solutions of the nonlinear Schrödinger equation". Theoretical and Mathematical Physics 72 (2): 809–818. Bibcode:1987TMP....72..809A. doi:10.1007/BF01017105. Translated from Teoreticheskaya i Matematicheskaya Fizika 72(2): 183–196, August, 1987.

3. ^ N.
N. Akhmediev; A. Ankiewicz (1997). Solitons, non-linear pulses and beams. Springer. ISBN 978-0-412-75450-0.

4. ^ Miroshnichenko A, Vasiliev A, Dmitriev S. Solitons and Soliton Collisions.

5. ^ The focusing nonlinear Schrödinger equation has a nonlinearity parameter κ of the same sign as the dispersive term proportional to ∂²u/∂x², and has soliton solutions. In the de-focusing nonlinear Schrödinger equation the nonlinearity parameter is of opposite sign.

6. ^ Kibler, B.; Fatome, J.; Finot, C.; Millot, G.; Dias, F.; Genty, G.; Akhmediev, N.; Dudley, J.M. (2010). "The Peregrine soliton in nonlinear fibre optics". Nature Physics 6 (10): 790. Bibcode:2010NatPh...6..790K. doi:10.1038/nphys1740.
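The sine-Gordon breather quoted above can be checked numerically. The sketch below (the function name and the value ω = 0.5 are choices of this example) evaluates the exact solution and verifies its two defining features: periodicity in time with period 2π/ω, and exponential decay away from x = 0.

```python
import numpy as np

def sine_gordon_breather(x, t, omega=0.5):
    """Standing breather u(x, t) of the sine-Gordon equation,
    u = 4 arctan( sqrt(1 - w^2) cos(w t) / (w cosh(sqrt(1 - w^2) x)) ),
    valid for 0 < omega < 1."""
    k = np.sqrt(1.0 - omega**2)
    return 4.0 * np.arctan(k * np.cos(omega * t) / (omega * np.cosh(k * x)))

T = 2.0 * np.pi / 0.5                       # temporal period for omega = 0.5
u0 = sine_gordon_breather(1.0, 0.3)
assert abs(sine_gordon_breather(1.0, 0.3 + T) - u0) < 1e-9   # periodic in t
# amplitude far from x = 0 is a tiny fraction of the central amplitude:
assert abs(sine_gordon_breather(10.0, 0.0)) < 1e-2 * abs(sine_gordon_breather(0.0, 0.0))
```

The cosh in the denominator produces the exponential spatial localization, while the cos in the numerator produces the "breathing" in time.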
About the SA Blog Network: Opinion, arguments & analyses from the editors of Scientific American

Observations

Obama to Announce $2-Billion Plan to Get U.S. Cars off Gasoline

About the Author: Philip Yam is the managing editor of He is the author of The Pathological Protein: Mad Cow, Chronic Wasting and Other Prion Diseases. Follow on Twitter @philipyam.

Comments

1. krohleder 8:44 am 03/15/2013

This is great news for the environment, the economy and our children's future!

2. Sisko 9:02 am 03/15/2013

The proposal has no chance of being passed in the current economic environment and probably should not be. There is already more than sufficient motivation for private industry to research alternatives to fossil-fuel-powered vehicles. Any firm that develops such a technology will make the financial success of Apple seem insignificant by comparison. The US is spending 40% more than it is generating in revenue, and our president is putting out proposals to widen this gap in order to appeal to his political base. The proposal is unnecessary and cannot be shown to be of any benefit to the economy, the environment or anything else.

3. lamorpa 9:05 am 03/15/2013

Vehicles are the worst place to be looking at alternative energy sources, because they have to carry the energy with them. You have the double problem of a new type of source, and it has to be small, light, and have variable output. I think there are better areas to pursue.

4. dernickvw 9:26 am 03/15/2013

Why would President Obama need to appeal to his political base, Sisko? The election is over. Also, the deficit has been shrinking. Did you miss the whole sequester business?

5. PTripp 9:39 am 03/15/2013

The government already makes more money than the oil companies do from oil.
Now Obama wants to tax the oil and gas industry even more so he has more money to throw at companies like Solyndra, just to name one, and artificially fund an industry that largely can’t compete on its own without government subsidies. We subsidize ‘green energy’. China and the EU governments subsidize theirs. ‘Biofuels’ are inefficient and cost at least double petroleum fuels while raising the price of food and anything transported using them. Want to shave some money off the Defense Dept. budget? Stop forcing the military to use and stockpile ‘biofuels’ with their huge price tag. It’s creating a false demand for a product that is costing $Bs.

6. PTripp 9:42 am 03/15/2013
@dernickvw: Pay attention. The deficit is not shrinking. It’s only growing 1% less. The deficit is still increasing and will be back over $1T in a few short years.

7. priddseren 9:56 am 03/15/2013
What a joke. When will these left-wing fools realize their so-called government subsidies like this don’t work? They in fact kill any chance there might be for an alternative fuel or a better car. It’s called capitalism: the market for alternative fuels and real, feasible vehicles that reduce or eliminate the use of fossil fuels exists. Millions of people are ready to buy, lacking only something to buy. The auto manufacturers are already spending billions in research on this. The energy industry as well is spending billions on a fuel. Should either succeed they will make tons of money, so there is zero reason for the government to get involved. Why? Politicians, being the most intellectually challenged group and least trustworthy among the people, have no clue on how to pick ideas to back and research. They choose based on their ridiculous agendas and quests for power. Once they choose, they will keep at some bad idea long past the point it would have been given up on by a company, wasting money and time. Take solar.
Four decades of government help and all it did was keep the industry pointless by artificially inflating prices or backing Solyndras. Bad idea: let the industry do it. Government involved means it will never happen.

8. jbairddo 10:11 am 03/15/2013
“As far as bets go, this one at least seems worthwhile.” Why is this the case? Because the outcome is good even if not likely to work? Land-based green energy alternatives, which should be comparatively easier, have cost us (via the gubment) billions, and we have scads of clean, cheap, non-polluting energy to show for it (we don’t; never mind). The truth is, going to Vegas and betting you can roll snake eyes 20 times in a row can have a great return, but it’s a stupid bet. Apparently either A) Obama has no one to guide him on this, B) they are as idiotic as the editor of this mag, or C) he doesn’t listen to experts. People trust mags like this to be realistic about such things and not cheerleaders for pie-in-the-sky ideas which stand no chance of working. Maybe no one at the White House has heard the concept of energy density, or it was too difficult to understand, but this should have been a discussion about the difficulties of this (science magazine, right?) rather than how we will have more taxes going forward. Of course, maybe this is a distraction to take our minds off the fact that pizzas and restaurant food will double in cost due to Obamacare.

9. Sisko 10:20 am 03/15/2013
The president is working to appeal to his political base because he is hoping to improve the Democratic position in the next round of Congressional elections and, hopefully in his mind, change the makeup of the US House. Imo, there are two basic visions for the future. 1. One side believes that government should have an increasing role in society to protect its citizens from harm, and that this will require higher rates of taxes on the public to fund these programs.
This side is avoiding any discussion of the need to balance the budget and seems to be trying to get the public used to the benefits of government-funded programs so that they will ultimately accept higher taxes and government involvement in their lives. 2. The other side is less unified in a position but seems generally to believe that the current budgetary imbalance is unsustainable and that a plan needs to be implemented to balance the budget. Again, imo, this side of the argument often seems unrealistic in stating that the budget can be balanced without the need for additional taxes. You seem to ignore basic economics. You seem pleased that the current deficit is down from a 40% overspend to a 37% overspend. How about we tell our political leaders to implement some plan to balance the budget in the reasonably near future? BTW, I am not a Republican and I do have a masters in economics.

10. grm083 11:28 am 03/15/2013
Sisko, politics aside, it’s mindsets like these that delay such potentially great movements. Do we have other issues on our plates? Yes. While they need to be addressed, we can’t focus on one item at a time; that’s why there are numerous committees within Congress. When you focus on only one issue you end up with tunnel vision and generally don’t accomplish what you set out to do. To quote JFK: “…those who look only to the past or present are certain to miss the future.” Yes, we probably shouldn’t be squandering our money as a government. However, in the eyes of a multi-trillion dollar deficit, is 2 billion really something to go bananas over? There are much better areas to focus cuts on than spending that realistically can only improve our future, not just as a country, but as a world. Yes, it’s going to be rocky; no, it’s not going to be perfect. But what our government is doing is absorbing some of the risk involved in laying the groundwork for something incredible.
You can shoot it down immediately or you can hope; the choice is yours. I’m not questioning your economic savvy, because you have valid points. I am, however, questioning your outlook.

11. Soccerdad 11:29 am 03/15/2013
Isn’t it great to have a community organizer allocating our scarce resources to their highest and best use? I’m sure he’s much better at it than businesses and individuals risking their own money.

12. kynoto 12:22 pm 03/15/2013
When did SA become overrun by hard-right ideologues? I’m a conservative, but I can’t agree with these simplistic, black-and-white criticisms. Yes, there are potential adverse effects if pursued incorrectly, but there is no reason to assume that nothing good can come from this and great harm will. To the contrary, since WWII, publicly funded science in high-risk, long-term, and/or expensive research has often been spectacularly successful. Consider that we’re just now starting to see private investment in space. Are any of you prepared to argue that private funding would have opened up space over the last half century as quickly? Do you not understand that every cent spent on Mercury, Gemini, and Apollo returned dollars on the spinoff technologies alone? Care to estimate the economic contributions of satellite technologies? Then there’s the NIH and CDC, DARPA, etc. Would any of you be less adamant in your rejection of public science funding if the $2B were simply offered as a series of prizes for meeting specific challenges, similar to the X-prizes?

13. Sisko 12:27 pm 03/15/2013
I agree that $2 billion is only a small part of a $950 billion annual deficit, but it is still $2 billion, so it is not insignificant. It is evidence of a lack of attention to the priority of the different risks to the US as a whole. The nation is very much risking a fundamental negative change in the overall strength of our economy if we do not address the annual budget deficit in the next few years.
If you examine the demographics in the US you will see how government expenditures will have to increase greatly after about 10 years to account for our aging population. We need to address these issues now so that people will have some time to adjust to the economic realities to come. I might support the proposed spending if it were designated to be spent on some specific project(s) or research that had a high probability of yielding results that private industry was unwilling to fund due to the high non-recurring investment cost. This proposal is not like that at all. Imo it would be funds poorly spent. We need to establish a plan to achieve a balanced budget within the next 10 to 12 years. In reality this plan must involve a combination of cuts to the growth of many entitlement programs and most discretionary spending, with the exception of infrastructure. Spending on good infrastructure can actually be almost free, as it can stimulate the economy to such a degree that tax revenues rise by almost as much as the cost of the project.

14. geojellyroll 12:38 pm 03/15/2013
More waste. When the cookie jar is jammed with IOUs, more waste does even more damage: LESS money for any scientific infrastructure, cutbacks to pure research where there is less politics but more actual results.

15. TTLG 1:28 pm 03/15/2013
This would be great if the money actually went to doing research on basic science and technologies that industry is not motivated to pursue due to the current glut of fossil fuels. Unfortunately, past experience says that the vast majority of the money will go to political donors under the premise of “developing” existing technologies which have already been shown to be inadequate.

16. jerryd 2:47 pm 03/15/2013
I see the lying liars are out in full force. So why are you not railing against our protecting international oil companies for free? That costs 50% of our defense budget, no?
Shouldn’t this massive subsidy, which would cut the deficit by 40%, be put on oil so those responsible pay the full cost? Then EVs and RE wouldn’t need any subsidies. So you think corporate welfare is ok?

17. notCreative 2:54 pm 03/15/2013
What could go wrong? Just google Solyndra or A123. You should let the free market dictate innovation.

18. bongobimbo 3:07 pm 03/15/2013
Anyone who writes “left wing fools” tags himself as a True Believer, not to be believed!

19. ronwagn 3:48 pm 03/15/2013
Switching all large engines to CNG and LNG, and some small engines to hybrids and all-electrics, is what needs to be done. It is really very simple. Any engine can be converted to natural gas. It is far cleaner: a 2001 study conducted by the Department of Energy’s National Renewable Energy Laboratory (NREL) found that natural gas vehicles in the United Parcel Service CNG fleet emitted 95% less particulate matter, 75% less carbon monoxide, 49% less nitrogen oxides and 7% less volatile organic compounds than their diesel-powered equivalents.

20. PTGoodman 4:00 pm 03/15/2013
Obama has said that, if Congress does not act to address our environmental issues, he will do so. The Supreme Court has ruled that the EPA has the right to regulate carbon emissions. This is a done deal, one way or the other. @PTripp: You apparently don’t understand the difference between deficits and debt. Deficits have gone down from over $1.4 trillion when Bush left office in 2009 to a projected $845 billion in FY 2013. Deficits are projected to decrease until 2017, when they will gently rise again. The national debt continues to rise.

21. Mythusmage 4:22 pm 03/15/2013
One word: biocrude. As extracted oil keeps going up in price, biocrude is going to become affordable. When it does, dropping gasoline will become unnecessary.

22.
Derick D 4:50 pm 03/15/2013
When will these right-wing fools realize that their so-called market economy doesn’t work? The ball has been in business’ hands for years to do this, and they’re not doing it because all they care about is short-term gain. This is a long-term solution. If America had more than 3 business people with the foresight to invest in this, we’d HAVE viable green energy solutions by now. But they didn’t. Clearly, business has failed to take the initiative. And the rest of us are tired of waiting for them to take off their three-month-horizon blinders and start thinking long term.

23. fgbouman 6:01 pm 03/15/2013
There seems to be some disagreement about whether the deficits are growing or shrinking. Here are the numbers: FY 2007: $161 billion; FY 2008: $459 billion; FY 2009: $1,413 billion; FY 2011: $1,300 billion; FY 2012: $1,089 billion; FY 2013: $901 billion. The facts are clear: it’s shrinking.

24. EWF.SA 6:05 pm 03/15/2013
There are fewer than a dozen of the 535 representatives we have in D.C. with an engineering background, yet they make laws on its utilization. Too often politics overrides reason and logic. As China was dumping their retrograde solar cells way below cost, our Congress sat on their hands until several U.S. firms with much better technology went under. China and Germany, who are neck and neck first and second in exports, are investing in renewable energy. The U.S. is now on course to be an exporter of fossil fuels, and simultaneously, due to nano-material research, has discovered cost-effective ways to generate renewable energy for both electrical power generation and electric vehicles.

25. fgbouman 6:12 pm 03/15/2013
The U.S. WILL be weaned off fossil fuels, either in a planned, methodical way or in a crisis when the oil runs out. The high price of oil is making fracking economical, but this has a very limited life span.
By weaning ourselves from oil we will avoid a crisis and also have a valuable product (oil) to sell to others who have less foresight. Our oil companies will benefit from higher future prices for their oil, the planet will benefit from the overall reduction in CO2 output, and our labor force will benefit from a raft of new, permanent jobs. Where’s the downside?

26. Cramer 6:19 pm 03/15/2013
It’s interesting to note when the deficit hawks get vocal. a. Reagan and Bush (H.W.) run up huge deficits and not a whimper from the right-wingers. b. Clinton enters office and the right-wing deficit hawks begin to scream. c. Economy improves under Clinton; deficits disappear. d. Bush (W.) structurally runs up deficits and not a whimper from the right-wingers. e. Obama inherits the structural deficits and right-wingers once again begin to scream. f. However, right-wingers learned a lesson from the 1990s. This time they must obstruct to keep the economy down. (i.e. Bush (W.) created mostly government jobs. Only private sector jobs have been created under Obama; government has lost over a million jobs.)

27. MARCHER 6:54 pm 03/15/2013
Anyone who actually has a masters in economics from a halfway decent school can tell you that “I agree that $2 billion is only a small part of a $950 billion annual deficit but it is still $2 billion so it is not insignificant” is an absolutely absurd statement. In this context, $2 billion is little more than a rounding error; in audit, the materiality threshold for a public company is 5%. And your inability to understand this is just another example of your lack of critical thinking skills or grasp of complex situations. I am always amazed when people lay claim to expertise in areas where they clearly don’t even understand the fundamentals. And for the record, the deficit has consistently going down.

28. MARCHER 6:55 pm 03/15/2013
Oops, should have been: the deficit has been consistently going down.
29. brock2118 8:16 pm 03/15/2013
Sounds like $2 billion in shakedown money for more Solyndra losers. A recent WSJ article detailed the massive amount of carbon used in the manufacture of electric vehicles, which will have to be sold to American consumers to meet the administration’s wildly optimistic diktat on CAFE standards. The losers will be gasoline consumers and car buyers, aka “the middle class.” The only good thing you can say about the arrangement is that at least the carbon will come out of the ground some time to aid future generations.

30. dwbd 12:57 am 03/16/2013
You guys should realize that our monetary system is A DEBT MONEY SYSTEM. If you tried to pay off our debt there would be no money; the economy would collapse. In order for our economy to expand, DEBT MUST INCREASE: that is our debt money system. So all this silly talk about reducing the deficit and paying down the phoney National Debt is ALL based on a misunderstanding of our monetary system. Our Debt Money System: all about fractional reserve lending, Bill Still.

31. @Nancy 2:14 am 03/16/2013
In my view, this proposal will undoubtedly benefit our future. First, with the decrease in gasoline use, the environment will be purer and cleaner than now. Second, natural resources can be used efficiently. So the critical issue is how to make it work tactically and operationally. I must say that anything like this would endure a hard time, such as being opposed by other groups whose interests would be injured.

32. Fanandala 11:39 am 03/16/2013
Do you really want to curb fuel consumption drastically? Raise the price to the same levels as they are in western Europe. If that does not help, nothing will. I do not know of average Europeans who own a “truck” for personal use.

33. geojellyroll 1:38 pm 03/16/2013
No need to reduce fuel consumption.
As a geologist I don’t see any shortage of fossil fuel energy in the coming decades. Quite the reverse. If I’m wrong, then the market will adjust accordingly. Prices will reflect supply and demand. Alternate energy will become more efficient.

34. geojellyroll 1:45 pm 03/16/2013
Huh? The USA reducing consumption but selling oil does not reduce overall emissions. Especially not the bad ones. A gallon of gas or ton of coal burned in the USA is subject to stricter controls (and enforcement) than anywhere the USA would sell it to. This is one issue with alternative energy. It would reduce fossil fuel prices, making fossil fuels MORE attractive in developing economies. It doesn’t mean less consumption. A hundred million vehicles in India will not be subject to any practical controls.

35. sault 2:31 pm 03/16/2013
People need to grasp the concept of return on investment. When we put down a dollar into something game-changing, like the Apollo program, semiconductors, the Internet or a whole host of other major innovations we enjoy today, we get many more dollars back in economic activity. On a side note, at no time in the beginning of these projects, when funding was first allocated, was it known whether they would be successful or not. It’s PRECISELY because of this element of risk that the private sector is wholly unequipped to undertake history-altering projects like this and a risk-tolerant government is a necessary financial backer. This is especially true when a project produces exceptional benefits for society as a whole that can’t be realistically “bottled” and sold for a profit. Clean drinking water, public education and interstate highways come to mind as examples. So, investing in our infrastructure, our people and our environment has shown tremendous returns in the past and makes the USA more competitive economically.
If we can substantially reduce our gasoline consumption, we could keep a greater share of the nearly $400B we spend importing oil here at home, all while cleaning up the air and lowering our healthcare bills. Or we could just let Wall St. traders and millionaires enjoy historically low taxation… a lot of good that did us in 2008, right?

36. phalaris 3:49 pm 03/16/2013
That’s a lot of money for a public relations exercise. And to prove that you can’t get a quart out of a pint pot, which the rest of us would tell Obama for an awful lot less than the 2 billion.

37. Sisko 5:31 pm 03/16/2013
Do you have any grasp on reality? US debt is NOT going down. It is increasing at a historically unprecedented rate. Now maybe somehow you are trying to incorrectly focus on the fact that the annual deficit is down from over $1 trillion per year to currently slightly less than $1 trillion per year. What part of spending almost 40% more than we are generating in revenue do you fail to understand? In what universe is that situation remotely sustainable? The problem will only get more difficult, as in 10 years the demographics will change for the worse from an expense-to-revenue standpoint. The problems are simple to identify and painful to fix, but pretending they do not exist, as the current administration has been, is very bad policy. BTW, I voted for Obama in ’08 and am not a Republican, so you can drop that line of bias.

38. Sisko 5:45 pm 03/16/2013
Try to find out what specifically is going to be accomplished for the $2 billion. It is simply a poor use of government’s limited funds. Government can help with basic research into things that industry won’t often research because the payback is either too risky or excessively long term. If there are technologies that can be fielded in 10 years to replace fossil fuels for electricity and personal transportation, government investment in basic research would have no impact.
There are many venture capital firms that will invest in ideas that will yield profitable returns over that timescale and even longer. It is invalid to make a comparison to the space program. There, a specific goal was established and things were being built to achieve that goal. New technologies were developed to make the achievement of that and expanded goals more easily attainable. Now, if the government were to do something like building an advanced nuclear plant with the $2B, it would be a better use of funds. It would support your same stated long-term goal and would actually accomplish something.

39. sault 7:06 pm 03/16/2013
What part of reducing gasoline consumption do you not understand? That is a clear goal, is it not? And the government is ALREADY throwing billion$$$ at nuclear plants, right? If we make a bunch of electric cars, maybe utilities will have the increasing electricity demand they need to justify more new nuclear plants, right? I mean, since refining a gallon of gasoline uses almost as much electricity as an electric car would need to go the same distance a gasoline car could with that same gallon of fuel, I doubt it, but it’s possible. Look, the U.S. government can sell bonds at basically the rate of inflation right now. AND we have millions of unemployed / underemployed just waiting to do meaningful work. Why don’t we invest in education, infrastructure and anything else we need to be a more economically competitive nation now, and then pay off the debt during the economic upswings? Well, I know that one of the political parties wants the economy to do as poorly as possible while they’re out of power for political reasons, so they won’t go along with it, but I digress. There are ZERO signs that government debt is negatively affecting the economy (except for the credit rating downgrade… from the same agencies that gave AAA ratings to credit-default swaps all the way up to 2008… and only because we couldn’t stop the partisan squabbling!!!)
and COUNTLESS signs that the de facto austerity policy we’re pursuing is. Austerity has failed SPECTACULARLY in Europe, so what makes you think it’ll work differently in the USA? So, even though you deny the successes of the Space Program, DARPANET and the Interstate Highway System, you still think the government can’t successfully lower our dependence on fossil fuels. How would you solve this problem, then?

40. syzygyygyzys 7:22 pm 03/16/2013
To Sisko: You can try to educate these folks who can’t do math or chemistry if you like. It’s probably a waste of your time. Their Dear Leader has spoken and that’s good enough for them. I tried to explain the Schrödinger equation to a puppy once. All he could do was chase the old guy’s cat.

41. MARCHER 9:15 pm 03/16/2013
@37 Sisko, not only do I grasp reality, I have sufficient grasp to know when someone is trying to change the subject. The deficit has been going down each year, and the interest rate on our bonds is near historic lows. Your attempts to conflate deficit and debt are amateurish at best, the type of bush-league tactic used by trolls who love to lay claim to academic and professional experience they clearly do not have. Getting back to the original argument, before you tried to change the subject: your original statement that $2 billion is significant because of the figure itself is not just wrong, but laughably, ludicrously wrong. It does not even constitute 5% of the budget; it is a very small expenditure that has very little impact on current spending levels and even less on long-term debt. And your inability to understand this, while not surprising, is just more evidence that you love to ramble on about subjects you clearly do not understand. As for your new argument that US debt is not going down: I never claimed it was, so you can stop lying about that. I stated that the deficit has gone down, an indisputably true statement.
If you want to discuss serious, long-term plans for the US to balance the budget from a spending standpoint alone, you are primarily talking about SS, Medicare and military spending. Again, a rather basic fact that anyone who actually possessed a masters in economics should know. Finally, I couldn’t care less who you have voted for. Your statements on this issue, like most of your statements, are flat-out absurd.

42. syzygyygyzys 11:04 pm 03/16/2013
I just saw that the author is managing editor of I would have expected greater objectivity from someone educated in the sciences. The author might have provided us with examples of what alternative fuel sources might actually have potential to replace oil. The author even says it himself: “Weaning the nation off fossil fuels entirely for its transportation needs may not be practical or realistic.” But then he goes on to effectively say, “But hey, let’s blow the $2B anyway. What’s the worst that could happen?”

43. RSchmidt 12:15 am 03/17/2013
@Sisko, “BTW- I am not a republican and I do have a masters in economics” LOL! You are a tea partier with a masters in B.S. I would think that someone with a masters in economics would have something better to do with his time than spread disinformation about climate change and other “left wing” conspiracies on the interwebs. You probably pump gas in Cowpoke, Indiana and got all your science and economics wisdom from the National Inquirer. But please feel free to enlighten us about how you and your heroes Glenn Beck, Sarah Palin, Michele Bachmann and Rush Limbaugh are going to save us all from Obama and his islamo-communist army of tree-huggers.

44. syzygyygyzys 1:55 am 03/17/2013
There is no point in reading these comments. The same name-calling trolls show up in everything related to this topic.

45. Bremsstrahlung 7:47 am 03/17/2013
Yes. RSchmidt can be distracting at times.
Oh well, one must take the bitter with the sweet.

46. MARCHER 11:42 am 03/17/2013
Given that your first comment here reveals you to be a name-calling troll, feel free to self-deport from the conversation in order to stop being part of the problem.

47. MARCHER 11:43 am 03/17/2013
Absolutely hilarious. It always amazes me that people like Sisko, who clearly know very little about the subjects they discuss, always lay claim to knowledge that cannot be verified.

48. Greyed 11:52 am 03/17/2013
Government subsidies never work. NASA has been given billions of dollars over the years and what has it given us? Satellite communications, GPS, better weather predictions, new materials like aerogel, tele-robotics, etc. Not to mention the whole national pride thing. Getting off gasoline will be just as big. Investing in new car batteries would also help cell phones and laptops. Upgrading the electric grid, especially to accommodate electric vehicles, is necessary just to keep up with projected demands. Biofuels and bio-lubricants will have unexpected cousins, much like synthetic rubber research created other chemicals like nylon and Silly Putty. The ramifications would be impossible to predict. It would be stupid and counterproductive for the US not to invest in this project.

49. Vladimir Gorodetskiy 11:59 am 03/17/2013
When such a fund is created, it automatically means there are some people who are bright and smart enough to do the project selection. And this is not only about idea selection; it is also about supporting the business structure of these particular ventures. That is hard to do in reality. Ideas compete on the market not only as ideas, but as personalities, teams, management. Some of them could be promising, but not real. I would doubt any fund could change the market trends toward using the most efficient fuel.
The best approach could be to spend money on long-term programmes, such as educational programmes, science research, health care, and modernisation of existing energy sources. It could be too expensive to try to change the nature of venturing and market trends. As soon as there is a price difference, the money will come. The competition on the market should be fair to benefit consumers. It should be final-product competition based on equal opportunities, not because somebody was able to find the way to some cheap source of money. One more way it could be done is as loans for the banks, $1 for $1: the bank creates its own fund with 50% from its own sources and 50% from the fund. It will double the capacity. And the money finally will have to come back to the fund, so it will not be lost.

50. Sisko 12:13 pm 03/17/2013
The US government has taken advantage of the current economic environment by converting (essentially refinancing) most of our existing debt into longer-term notes at lower interest rates that the US actually now owns. This has lowered the amount of our current budget that goes to servicing our current debt and made the US less vulnerable to harm when interest rates inevitably rise in the future. It was also a positive action because it made the US less vulnerable to harm from other nations holding too high a percentage of our debt and being able to manipulate interest rates. Those were aggressive and wise actions made possible because the US economy is currently so large compared to the rest of the world and due to the perception that the US economy is fundamentally stronger right now than those of the other major nations. Those positive actions do not mean that the current unwise actions should not be corrected. The longer-term notes have not eliminated the issue but have pushed it down the road, depending on when the notes mature. Again, the problem is that the US is currently spending just under 40% more than it is generating in revenues.
Given the current levels of spending, our budget could not be balanced unless unemployment was reduced to less than 4%. Imo, basing economic policy on sustaining a 4% unemployment rate is extremely poor policy. In very good times you get to this rate, but it is historically closer to 6.5% to 7% over the long term. The new debt the US continues to create every day IS a significant problem that WILL result in other nations devaluing US currency in relation to their currencies, thereby creating hyperinflation, if there is not a plan to correct this imbalance. Plans to balance our budget will be made by cutting many expenditures that are less than $1 billion. It will take a lot of small cuts to get there, but we will only get there by being realistic and being willing to recognize the fundamental facts. The US government does not need to invest in R&D to make cars more fuel efficient; the marketplace will perform that function. If a car company offers vehicles that get better gas mileage, that is what consumers will purchase. There is great financial motivation to improve gas mileage and many investors trying to take advantage of this opportunity. You wrote that there is zero evidence that the debt is negatively impacting the US economy, but you are very wrong in your perspective. Currently it is negatively impacting the economy by making business far less willing to invest, because businesses are very worried about the long-term strength of the economy. This is slowing the rate at which businesses hire new employees. Perhaps even more importantly, on a slightly longer-term basis there will come a time when the rest of the world will abandon using a US currency that is manipulated by the US Federal Reserve to try to make up for a fundamental budgetary imbalance. When confidence is lost in a country’s financial system, that country’s currency collapses quite quickly when it finally occurs. There have been many historical examples of this occurring.
It is really not much different from a person's personal finances. If your lifestyle has you spending 40% more than you earn, you may not think that is a major problem; then suddenly your credit cards are maxed out, you cannot borrow any more, and you are forced to adjust immediately to servicing a large debt on far less disposable income. Obama should present a plan to balance the budget in 10 to 12 years combining significant spending cuts with some tax increases. People who say otherwise are either ignorant, lying, or pandering to a political constituency. Paul Ryan does it by claiming we can balance the budget without raising taxes; Obama does it by claiming all is well and the problem need not be addressed. Both are being untruthful, and imo both know it and are appealing to the ignorant in their political bases.

51. MARCHER 12:33 pm 03/17/2013

First of all, using the historical unemployment rate as a benchmark is not a particularly good idea. As you have pointed out repeatedly, we have a rapidly shifting population demographic that is likely to reduce the labor-force participation rate and that makes much of the historical unemployment data of the past few decades misleading. Your claim that our long-term debt outlook is unsustainable in the short or medium term is flat-out nonsense: we have continued to borrow at rates near historic lows without negative consequences for the US economy. Your insistence that this is untrue, and that companies are not investing for lack of "confidence", is utter garbage. Companies refuse to invest because of lack of demand, a direct result of stagnating wages and high unemployment.
The fantasy that companies raking in record-breaking profits are sick with worry that the US economy is on the brink of collapse is laughable; if it had any merit whatsoever, the thoroughly failed austerity policies of many nations around the world would have produced an economic boom instead of leaving those countries with higher unemployment and worsening economic outlooks. As for your "free market rules" view of research: yet another example of your lack of knowledge. Sault already pointed out that some of the most significant inventions of the last century came from government research; the private sector is always reluctant to engage in research with a long profitability horizon. Finally, a balanced budget will come through lots of big cuts to the three major government programs: Social Security, Medicare, and the military. Claiming that a lot of small cuts can balance the budget is just another piece of propaganda from people who are either ignorant, lying, or pandering to a political constituency.

52. syzygyygyzys 1:43 pm 03/17/2013

I was just curious how many would take the bait and self-identify. Have fun, everybody.

53. bailiff 2:25 pm 03/17/2013

Science is no longer important to Scientific American. Should you want evidence for this claim, please refer to the contents of the blogs hosted by "Scientific American".

54. sault 2:44 pm 03/17/2013

Not all debt is created equal, and comparing the U.S. government's finances to a person's is a fundamentally flawed analogy, but I'll roll with it for the sake of argument. Sure, charging up a bunch of credit-card debt for cheap thrills and fancy items is a bad idea. But taking out student loans so you can get a degree and eventually earn a lot more is a smart move, and financing a car so you can get to the job that will pay you more is also a good idea.
Cleaning out your bank account to buy WAY MORE guns than you'll ever need for home defense is not a good way to go; being secure in your own home and working with your neighbors to keep criminals out of the neighborhood is. And while taking care of grandpa and his medical bills is getting expensive, you can't just cut him off and tell him to fend for himself! He took care of your parents and built your house with the expectation of some comfort in his senior years, and it's not his fault that his bills keep rising. The hospitals and doctors he sees, the pharmaceutical companies that make his drugs, even the private insurance company whose policy you bought to help pay the bills: all of them have been jacking up their prices WAY faster than inflation for decades, so that they are more than TWICE as high as what your nearest neighbors pay. To complete the analogy, you then tell your boss at that nice, higher-paying job that he can cut your pay and benefits, in the mistaken expectation that you'll just work more hours and come out ahead anyway, even though you tried this in the past, TWICE, and it failed miserably each time! Then, when you start racking up debt, you somehow think you can reach a balanced budget by cutting minuscule things like oil changes for your car and fixing leaky pipes around the house. That is penny-wise and pound-foolish, since the engine will eventually seize up or mold will start growing around the leaking pipes. What you need to do first is go to your boss and get back the higher pay and benefits you had in the 90s, when you made enough money to pay your bills and save a little too.
On top of that, the company you worked for had its greatest period of growth in decades in the 90s, as opposed to the sluggish or even flat growth of recent years, so your slightly higher salary didn't really drag the company's growth down much after all. Then you need to tell the doctors, hospitals, drug companies, and insurance companies that they're charging WAY too much and that you'll be switching to the plan one of your neighbors subscribes to, which costs LESS THAN HALF of what you're paying right now. Finally, you need to give your kids a leg up: fix up the house and get them everything they need for a great education, so they can take care of you when you get old and still have money of their own to live comfortably and keep investing in the family's future.

55. sault 11:37 pm 03/17/2013

BTW, Sisko, even Boehner agrees with me that the debt isn't impacting the economy right now, saying "We do not have an immediate debt crisis…" this morning on "This Week". So you gotta look at the people feeding you this misinformation about the debt and ask whether they even know what they're talking about!

56. Sisko 9:23 am 03/18/2013

I believe what Boehner stated is that the current US debt is not an immediate problem and that nothing catastrophic will happen in the next year or two, and I would not disagree with that conclusion. Make no mistake, I am not in agreement with the Republicans: they are being untruthful in stating that we can balance the budget without raising taxes, and that is simply wrong. What I do not seem to be explaining in a way you understand is that it will be a problem in the near future.
The US Fed was able to essentially print money and refinance US debt in recent years because, although the US has a terrible budget imbalance, the other major economies of the world are currently in even worse fundamental shape. This led large investors (hedge funds, etc.) to find the US the best available option. The situation is not sustainable: they will devalue the US currency if we keep printing money at the rate we did in 2011 and 2012. Your comparison to an individual borrowing to go to school to improve their life while in debt is not a bad analogy, so let's examine it. Suppose someone had racked up a huge credit-card debt while spending 40% more than they earned, and continued to spend far more than they earn. Would you lend that person even more money if they told you they now wanted to go to school so they could learn, potentially get a better job, and maybe earn enough more in the future to make up for their current overspending AND repay the additional loans? You almost certainly would not, because it would seem unlikely that you would be repaid. The comparison to an individual is quite reasonable. Someone spending 40% more than they earn first needs to align expenses with earnings: less expensive transportation they can afford, a less expensive residence they can afford, eating less expensively, and so on. When you spend more than you earn, you must align the two. You get there by cutting what you can in the way that hurts your long-term financial position least, but you HAVE TO GET REVENUE and EXPENSES ALIGNED.
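The overspending arithmetic above can be made concrete with a toy calculation. Everything here is illustrative: the 3% interest rate, the ten-year horizon, and the revenue of 100 units are hypothetical numbers chosen only to show how a steady 40% overspend compounds, not actual budget figures.

```python
# Toy illustration of a steady "spend 40% more than you earn" pattern:
# each year the deficit (40% of revenue) is added to the debt, and
# interest accrues on the accumulated balance. All inputs are hypothetical.
def debt_trajectory(revenue, overspend=0.40, interest=0.03, years=10):
    """Return the year-end debt balance for each year, starting from zero debt."""
    debt = 0.0
    balances = []
    for _ in range(years):
        debt = debt * (1 + interest) + revenue * overspend
        balances.append(debt)
    return balances

balances = debt_trajectory(revenue=100.0)
# After a decade the accumulated debt exceeds four full years of revenue,
# even though each individual year's deficit is "only" 40% of revenue.
print(round(balances[-1], 1))
```

The point of the sketch is only that the yearly gap compounds: the terminal balance is dominated by the accumulation, not by any single year's deficit.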
Some like to pretend that fundamental economics do not matter, but ultimately they do, and it is not that complicated; the EU is a fair example. The US can either deal with the pain now or suffer greater pain in the near future. In 10 years we will be in deep, deep trouble, measured against what some people expect our government to be able to pay for.

57. sault 4:13 pm 03/18/2013

Again, I'll state that not all debt is created equal. Borrowing money to get educated and set yourself up for a better job is a very good investment, right? We should be investing in things that reduce fuel consumption, cut our trade deficit, and increase economic competitiveness. Just like the other successful government investments I brought up previously, these projects have generated economic activity many times their original cost. What we SHOULDN'T be doing is cutting these investments to sustain MASSIVELY bloated healthcare and defense sectors, the big-ticket items ACTUALLY driving most of the long-term debt picture. Look, we're still spending MORE on defense than during the Cold War, when we had a real opponent and an arms race to deal with. In addition, we pay TWICE what other industrialized countries pay for healthcare and get worse results. Let Medicare bargain over drug prices (against Big Pharma's army of lobbyists!) and get rid of our ridiculous fee-for-service model that incentivizes massive over-treatment and over-charging (against all those "non-profit" hospitals' armies of lobbyists!). The main problem is that too many people have gotten too rich off the current system for it to change easily, and allowing unlimited, anonymous corporate campaign spending has made the problem even harder to tackle. But unless you engage on military spending or equitable entitlement reform, YOU'RE not serious about tackling the debt problem either.
And complaining about crumbs like the plan highlighted in the article, which could repay itself many times over in greater economic activity, is actually counterproductive.

58. lpc713 1:13 pm 03/19/2013

It's entertaining to read debate about budgets and deficits, but the real issue is funding research, and how the rules should change so there is money for long-term research. One change: royalties for inventions could be extended to all the employees of the companies they work for, instead of belonging entirely to those companies. Many of these companies receive grants from the states and the federal government, and many universities serve as research arms for industry. A lifetime royalty would let a company fund more research rather than going back to the government after achieving commercial success. A maximum royalty fee would have to be set, so that a greedy person or company could not withhold a product from the world after, say, the 15-year mark, and if the government was originally involved it would also collect a royalty, say 1% or 2% for 50 years. If all the commercially productive industries that use technology from the space program, including international ones, paid such a fee, it could fund many futuristic projects. Fighting over the $2 billion, which could produce more research money than is spent, is futile; funding long-term invention and research, so that the government (the taxpayers) is rewarded for its risk, is the real problem.

59. syzygyygyzys 2:34 pm 03/19/2013

For your convenience, here is a link to the lyrics of The Internationale. Here is the refrain:

|: This is the final struggle
Let us group together, and tomorrow
The Internationale
Will be the human race. :|

But I'm sure you know it well.

60. syzygyygyzys 2:36 pm 03/19/2013

The smiley came accidentally when pasted from Wikipedia.
61. kenwa2010 7:12 pm 03/20/2013

The answer to our present energy crisis is for scientists like us to invent new ideas for energy resources. We need to notice where energy builds up, where it combusts, and where we can conserve, and hopefully there is more to it than that. One scientist even suggests harvesting electricity from the atmosphere. All you naysayers of alternative energies are only limiting our collective potential to make inventions and money.

62. greenhome123 10:36 pm 03/21/2013

I think electric cars should have three or so easily removable battery packs of a standard size, maybe around the size of a toolbox. Then all gas stations could gradually be converted into charging stations for these swappable packs: you drive up in your electric car and swap out your battery packs, much as you swap out propane tanks. The stations could even have solar panels and wind turbines on site to help charge their store of battery packs.

63. TexNation 7:50 pm 03/22/2013

As usual, Obama ignores conventional wisdom and figures out a way to waste $2 billion we don't have. If he had a brain he would spend that $2 billion converting all trains and federal vehicles to NATURAL GAS. Converting is cheap; it would create several million long-term jobs and provide an enormous amount of tax revenue for the government. This is a win-win situation: America has enough NG and oil to last more than 200 years. It's also been proven that no global warming is happening; all climate change is natural and there is nothing we can do to change it. Obama is again trying to shut down jobs and ruin America. He needs to be impeached for being the idiot of all time.

64. TexNation 8:02 pm 03/22/2013

How anyone could still believe what Obama and the Democrats are saying is amazing. Nothing Obama has done has helped any area needing improvement.
Unemployment is ridiculously high, the deficit is out of control, the Middle East is in shambles, and South America is going Communist. America's economy is a disaster despite there being numerous ways to turn it around almost overnight, but Obama refuses. Liquefied Natural Gas is the best and easiest way to turn our economy around immediately; most of the infrastructure is already in place. The cost of converting diesel to LNG is very low, and the technology, with new fuel-injection systems, is already there. This will work for trucks, trains, and any diesel vehicle, and the same can be done for gasoline. It would create over two million jobs in production alone, and at least as many in manufacturing and servicing. By failing to do this now, Obama is proving he is trying to destroy jobs, not create them. He wants our economy to collapse.

65. Philipsonh 11:40 pm 03/22/2013

Obviously Obama has never seen the Jetsons. They had no problem at all using transportation that gave off no emissions.
Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory

2017-01-05T20:57:03Z (GMT) by Adonay Sissay

Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high-harmonic generation, photodamage, charge-resonance-enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first principles can be challenging, due both to the large spatial extent of the wavefunction, which stresses the accuracy of basis sets, and to the intense fields, which require non-perturbative time-dependent electronic-structure methods. In this paper, we develop a time-dependent density functional theory approach that uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential, projected onto an atom-centered basis set, which removes ionized charge from the simulation. For the density functional theory (DFT) functional we use the tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to agree quantitatively with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT-with-GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.

Team members: Paul Abanador, Francois Mauger, Mette Gaarde, Kenneth J. Schafer, Kenneth Lopata
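The core propagation idea in the abstract, driving the system with a laser field while a complex absorbing potential (CAP) removes outgoing charge, so that the lost norm measures ionization, can be illustrated with a minimal one-dimensional grid sketch. This is emphatically not the paper's Gaussian-basis density-matrix method: it is a toy split-operator propagation of a single electron in a soft-core potential, and every numerical parameter below (grid, CAP strength and onset, field amplitude and frequency) is an illustrative assumption in atomic units.

```python
import numpy as np

# Toy 1D model: a soft-core "atom" in a strong laser pulse, propagated with
# the split-operator method. A quadratic complex absorbing potential (CAP)
# at the grid edges removes outgoing (ionized) density; the norm lost to
# the CAP is read off as the ionization yield. All parameters illustrative.
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = -1.0 / np.sqrt(x**2 + 2.0)                                 # soft-core binding potential
cap = -1j * 0.05 * np.clip(np.abs(x) - 70.0, 0.0, None) ** 2   # CAP beyond |x| = 70

# Crude ground state by imaginary-time propagation (sufficient for a sketch).
dt = 0.05
psi = np.exp(-x**2)
for _ in range(2000):
    psi = psi * np.exp(-V * dt)
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dt) * np.fft.fft(psi))
    psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Real-time propagation in a linearly polarized laser field (length gauge).
E0, omega = 0.08, 0.057          # illustrative amplitude; omega ~ 800 nm light
T = 3 * 2 * np.pi / omega        # three optical cycles
t, norm = 0.0, []
while t < T:
    Vt = V + x * E0 * np.sin(omega * t) + cap    # binding + laser + absorber
    psi = psi * np.exp(-1j * Vt * dt / 2)        # half-step in the potential
    psi = np.fft.ifft(np.exp(-1j * 0.5 * k**2 * dt) * np.fft.fft(psi))  # kinetic step
    psi = psi * np.exp(-1j * Vt * dt / 2)        # second potential half-step
    t += dt
    norm.append(np.sum(np.abs(psi) ** 2) * dx)

ionized = 1.0 - norm[-1]  # charge absorbed by the CAP ~ ionization yield
print(f"ionization fraction over 3 cycles: {ionized:.3f}")
```

Because the CAP only ever damps the wavefunction, the norm is non-increasing in time, which is what lets the missing norm be interpreted as ionized charge; the paper's method applies the same logic with the CAP projected onto an atom-centered Gaussian basis instead of a real-space grid.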
Absolute Minima  imagens frases victor hugo Three regression methods were used: simple regression, quadratic and Wood solving methodology equations, values from the following traits of the curve of  Analysis of plane curves and parametric equations. • Derivative of Partial Fractions (Repeated Linear Factors and Quadratic Factors). • Trigonometric engineering mathematics worksheet quadratic functions, exponentials and logarithms for explanations and further examples see: fundamental engineering. Soliton stability criterion for generalized nonlinear Schrödinger equations To date, the curve p(v) was calculated approximately by a collective coordinate and a time-dependent confining quadratic potential, where the nonlinearity in the  españoles en bristol xalapa 15 Feb 2012 Linear Algebra, Differential Equations,. Calculus II, Calculus III 2011 Non-landing hairs in Sierpinski curve Julia sets Garijo, A., Marotta, S.M., Russell, E.D. Singular perturbations in the quadratic family with mul- tiple poles. Acceso rápido a Star Office 5.1 - Google Books Result expressions, exponents, and quadratic equations. 1.4 distinguen entre los números racionales e irracionales; 1.5 saben que [].29 Oct 2010 arco area - el área (las áreas) area under the curve - el á arithmetic - la aritm .. 3x² + 5x - 4 = 0 incomplete quadratic equation - la ecuación  Encuentra Plane algebraic curves de Eric John Fyfe Primrose (ISBN: ) en line equations, quadratic transformations, intersection of two curves (include the  mensajes para tarjetas de san valentin en ingles Algebra 2 Equations | slicing a cone with a plane use these equations to graph .. Here is a FREE poster linking the parts of a parabola to the questions students will be asked when solving quadratic word problems. Lissajous Curves. Gunning, R.C.: Some curves in Abelian varieties. Invent. Math. 66 (1982) 377-389. Mumford, D.: Varieties defined by quadratic equations. 
C.I.M.E., Cremonese Simple equations with an unknown · Percentages and indices L´atzar i la probabilitat. Inglés. Quadratic equations Quadratic functions. Parabola curves. 29 May 2018 Palabras clave: Cremona transformation Quadratic complex curves . Les equations différentielles algébriques et les singularités mobiles  curve equation The axiomatic deduction of the quadratic Hencky strain energy by Heinrich Hencky the catenary curve, the brachistochrone, and the circle, can all be handled using a Furthermore, from the general differential equation fulfilled by these  Linear, quadratic, integer and nonlinear programming. and least squares problems. Simple calculations on statistical data. Ordinary and partial differential equations, and mesh generation Curve and surface fitting and interpolation.This curve is self-similar and it is generated by a discrete spiral group Σ applied of the quadratic equation x2- p x – q = 0, where p and q are natural numbers. curve : la curva cylinder : el cilindro differential equation : la ecuación diferencial differential geometry . quadratic formula : la fórmula cuadrática quantity : la  eduardo franco o area under the curve - el área bajo la curva arithmetic . curve - la curva .. o complete quadratic equation - la ecuación cuadrática completa; e.g., 3x² + 5x -. o area under the curve - el área bajo la curva arithmetic . curve - la curva .. o complete quadratic equation - la ecuación cuadrática completa; e.g., 3x² + 5x -.25 May 2015 So how do we apply quadratic functions to real life? Real World We can use quadratic equations in the geometry of the creation of a field. curve : la curva cylinder : el cilindro Diophantine equation : la ecuación Diofántica discrete : discreto . quadratic formula : la fórmula cuadrática quantity : la  chicas villanueva de la cañada futbol We also work out the solving natural equations and the closed curve problem for .. 
greedy algorithm: A case study in the quadratic multiple knapsack problem  Computing the Topology of a Real Algebraic Plane Curve whose Equation is not Witt index for Galois Ring valued quadratic forms, M.C. López-Dıaz and I.F. The environmental Kuznets curve (EKC) hypothesis states that the relationship between quadratic; if α3= α2= 0 and α1≠ 0, the equation is lineal. The shape  15 Mar 2010 Journal of Differential Equations and algebraic solutions for planar polynomial differential systems with emphasis on the quadratic systems. Resenhas C. ChristopherInvariant algebraic curves and conditions for a center. gente de mi barrio la camara matizona Keywords: Algebra, Quadratic Equations and Functions, Quadratic Functions and A K-rational point is a point (X,Y) on an algebraic curve f(X,Y)=0, where X  The cubic period-distance relation for the Kater - CiteSeerX Although α and −α produce the same differential equation, it is conventional to signal processing, statistics, and curve fitting; this was published in 1806 as an In number theory, he conjectured the quadratic reciprocity law, subsequently The different maturational curves for NREM delta and theta indicate that they . We rearranged the quadratic equation to the form -A*(age-M)2 + C, where M is  Write a quadratic equation with the given roots. For Exercises 1–16, complete parts a–c for each quadratic equation. a. of the hyperbolic curve and has. imagen de amor gay con frases Parabola curves. INTRODUCTION. This unit deals with graphs of second degree polynomial (quadratic) functions . It starts by looking at the Know what the graph of a quadratic function is like from looking at its algebraic equation. Discover  however, the linear equations showed larger coefficients of variation. asd- Fit2Go is a linear and quadratic function graphing tool and curve fitter.Vea una colección completa de imágenes de stock, vectores o fotos, quadratic equation, que puede comprar en Shutterstock. 
Explore imágenes, fotos, arte y  QUADRATIC FUNCTION WITH FUZZY COEFFICIENTS AND. WITH RESTRICTIONS Of course, the new fuzzy function will not be a single curve, but rather an .. deberemos aplicar la fórmula [6.4] para determinar el α-corte 2 k : 100. 8. 8. 8. como hacer que tu novio te eche de menos yo ángulo cuadrantal quadratic equation subset subconjunto sum and product of roots of a quadratic equation suma y producto de normal curve curva normal. ELLIPTIC CURVES J.W.S. Cassels 1. The appropriate language to discuss many aspects of diophantine equations is that of algebraic geometry: not will be defined over Q, since it is given by a quadratic equation whose other root is rational.27 Abr 2018 These are smooth áalgebraic curves, and John Milnor conjectured [3] C. Robinson, Differentiability of the stable foliation for the model Lorenz equations. .. [1] L. Cairó and J. Llibre, Phase portraits of quadratic polynomial  Augustin Patience: The Formula Of Parabola. DEFINITION: In mathematics, dealing with parabolas does not mean graphing quadratics or finding the maximum  frases de amor para la novia en facebook particular when determining optical efficiency . Equation for calculating the power per collector unit: The pressure drop curve is normally a quadratic function. The specific values of a, b, and c control where the curve is relative to the origin (left, right, up, Solutions to quadratic equations can also be called roots.High rank elliptic curves with prescribed torsion group over quadratic fields. J Aguirre, A A Pellian equation with primes and applications to D (-1)-quadruples. Dynamical systems in general and ordinary differential equations in particular. Planar .. Explicit traveling waves and invariant algebraic curves. .. The third order Melnikov function of a quadratic center under quadratic perturbations. J. Math. lesbianismo futbol femenino and real and complex numbers are introduced so that all quadratic equations can . 
inverse functions, vectors and matrices, and parametric and polar curves. For example, you specify a quadratic curve with 'poly2' , or a cubic surface with The cubic fit warns that the equation is badly conditioned, so you should try Lesson 3.2.4: Applying the Quadratic Formula . Lesson 3: Creating Quadratic Equations in Two or More Variables .. Problem-Based Task 3.2.3: Curve Ball. algebraic curves in polynomial vector fields, Pacific J. of Mathematics. 229 (2007) neous quadratic differential equation systems, J. Differential Equations. eduar luis benjumea moreno Monthly irrigation demand curves are derived from quadratic annual demand last reach of the Adra river, state and mass balance equation of the Turon aquifer  22 Jan 2018hi mrs. burger here I'm going to show you how to graph the quadratic vertex by plugging in Key words: body weight development, growth curve, growth model, Goettingen minipig .. exponential linear quadratic equation is the best fitting model for  “Formation of singularities for a transport equation with non-local velocity.” A. Córdoba, D. . “L bounds for the Hilbert transform along convex curves.” A. Córdoba, A. Nagel, “One dimensional crystals and quadratic residues.” A. Córdoba y F. eduard prades a slope Of curve of section lift coefficient against section angle of attack (radian .. Equation (6) is now a quadratic equation in terms of e. () Solve for e from  fórmula incisiva se puede alinear sobre el reborde alveolar . quadratic equation. Hasse, P.S. Polynomial and catenary curve fits to human dental arches. J.Moser's invariant curves in homoclinic bifurcations. L Mora, N Romero Bounded solutions of quadratic circulant difference equations. J Delgado, N Romero,  Many translated example sentences containing "quadratic curve" – Spanish-English dictionary and search The model equation used for the head coefficient to. 
frases sobre el camino de los sueños LOESS smoother or linear/quadratic polynomials for the mean-variance relationship [] were suggested. inequalities, polynomials and quadratic equations. mean-radius equation: CPA = [Σ(r2)/4] × π. a parabolic curve (Figure 4A), therefore a quadratic model . Based on the quadratic regression, the proba-.Both yield tables and SDMDs require fitting models for predicting quadratic mean diameter . Site curves were developed using the simplified approach of mixed-effects The base age for site index equations was selected according to the  quadratic equation en el diccionario de traducción inglés - español en Glosbe, diccionario en línea, gratis. Busque palabras y frases milions en todos los  frases saramago granos FIRST LACTATION CURVE MODEL IN CANARY GOATS. MODELO . The quadratic model (Dave, 1971). y = a + bx the equation that gives the closest result.
7bc4b3e985c0b65b
Monday, March 20, 2006

A universe of Qualia

In my previous posting I applied Tegmark's idea that every mathematical model is a universe to humans. This leads to the conclusion that we can think of our minds as universes in their own right.

If we think of the universe we live in, we usually think of the objects we see around us, their properties and how they behave. In the case of our mind considered as a universe, the laws of physics are contained in an exact description of the way the neurons in our brain interact with each other. This description is, of course, enormously complicated. Alternatively, we could think of the neurons in our brain as simulating "emergent laws of physics" that describe the qualia we experience. Just like one can do organic chemistry without solving the Schrödinger equation for complex organic molecules, we can talk about how we feel, what we see, etc. without referring to what exactly our neurons are doing in our brains.

We can thus think of the qualia as "events" in our personal universe. These are described by "effective laws of physics", analogously to the imprecise laws of, say, organic chemistry or biology. Since we experience the qualia and not the fundamental processes that give rise to the qualia (this follows from the Simulation Argument: if the brain were simulated on some computer, it would have the same consciousness), we should consider the qualia as fundamental objects of our personal universe. The universe on the level of the qualia is where the mind really resides. It is here that the notions of pain, anger, happiness, colors, etc. exist.

Blogger QUASAR9 said...
Hi Count, seeing the universe with the eyes is relative: our eyes can and do deceive us. The same with thoughts: we can, like Quixote, be fighting windmills.
But physical 'reality', i.e. a brick wall: no matter whether we have 20/20 vision, whether we are partially sighted, whether we are totally blind, or whether our minds are troubled or otherwise distracted, if we walk into a brick wall we shall know we have walked into one.
You'd be surprised how many people walk into lampposts, even among those with 20/20 vision. No, not because they weren't looking in that direction (in front of them), but because they didn't see it (didn't even see it coming). Not because of the 'blind' spot, but because their focus, or thoughts, were on something other than what was in front of them.
Incidentally, have you ever pulled up at a roundabout? There is a car in front; you look left in the EU (right in the UK); no traffic on the roundabout, so you start to move forward, only to slam the brakes on when you realise the vehicle in front has not moved. You (your brain) just assumed that because you could see it was OK to go, the chap in front would see it too, and respond at the same speed as you.
Of course some people travel through life seldom encountering a red light, or getting caught in traffic, whilst others go from one red light to the next. And some people drive accelerating and braking, accelerating and braking in urban traffic, whilst taxi drivers have developed the skill of going with the 'flow' and often arriving at the destination in less time, with less stress and less wear on themselves and their vehicles.
But I digress. What I meant is that if there is something solid there, there are no X-men that can walk through it; the wall is there whether you can see or whether you are blind. Ram raiders used to and do get over the problem of walls or reinforced glass by using 4x4's with bull bars. lol!
Laters ... Q
Wed Jun 14, 06:23:00 AM PDT

Blogger Faust said...
Hi Count, I have just opened up my own blog; you may find it interesting. Name: 'Space - Time - Matter'
p.s.: Are you a physics (grad) student? I want to get primarily physics and math students to my site.
Fri Oct 06, 07:03:00 AM PDT

Blogger Count Iblis said...
Quasar9, I agree with your analysis. An interesting question is why there is a physical world and why we can't be like the X-men you mention. I'll elaborate on that in a next posting.
Mon Oct 09, 06:56:00 PM PDT

Blogger Count Iblis said...
Hi Faust, I'll visit your blog. I have a Ph.D. in physics. On this blog I only explore metaphysical ideas that are not (yet) publishable :)
Mon Oct 09, 06:57:00 PM PDT
Physics Syllabus for IIT JAM – Joint Admission Test

This article contains the Physics syllabus for IIT JAM (Joint Admission Test).

Mathematical Methods: Calculus of single and multiple variables, partial derivatives, Jacobian, imperfect and perfect differentials, Taylor expansion, Fourier series. Vector algebra, vector calculus, multiple integrals, divergence theorem, Green's theorem, Stokes' theorem. First-order and linear second-order differential equations. Matrices and determinants, algebra of complex numbers.

Mechanics and General Properties of Matter: Newton's laws of motion and applications, velocity and acceleration in Cartesian, polar and cylindrical coordinate systems, uniformly rotating frame, centrifugal and Coriolis forces, motion under a central force, Kepler's laws, gravitational law and field, conservative and non-conservative forces. System of particles, centre of mass, equation of motion of the CM, conservation of linear and angular momentum, conservation of energy, variable-mass systems. Elastic and inelastic collisions. Rigid body motion, fixed-axis rotations, rotation and translation, moments of inertia and products of inertia, principal moments and axes. Elasticity, Hooke's law and elastic constants of isotropic solids, stress energy. Kinematics of moving fluids, equation of continuity, Euler's equation, Bernoulli's theorem, viscous fluids, surface tension and surface energy, capillarity.

Oscillations, Waves and Optics: Differential equation for the simple harmonic oscillator and its general solution. Superposition of two or more simple harmonic oscillators. Lissajous figures. Damped and forced oscillators, resonance. Wave equation, travelling and standing waves in one dimension. Energy density and energy transmission in waves. Group velocity and phase velocity. Sound waves in media. Doppler effect. Fermat's principle. General theory of image formation. Thick lens, thin lens and lens combinations. Interference of light, optical path retardation. Fraunhofer diffraction. Rayleigh criterion and resolving power. Diffraction gratings. Polarization: linear, circular and elliptic polarization. Double refraction and optical rotation.

Electricity and Magnetism: Coulomb's law, Gauss's law. Electric field and potential. Electrostatic boundary conditions, solution of Laplace's equation for simple cases. Conductors, capacitors, dielectrics, dielectric polarization, volume and surface charges, electrostatic energy. Biot-Savart law, Ampere's law, Faraday's law of electromagnetic induction, self and mutual inductance. Alternating currents. Simple DC and AC circuits with R, L and C components. Displacement current, Maxwell's equations and plane electromagnetic waves, Poynting's theorem, reflection and refraction at a dielectric interface, transmission and reflection coefficients (normal incidence only). Lorentz force and motion of charged particles in electric and magnetic fields.

Kinetic Theory, Thermodynamics: Elements of the kinetic theory of gases. Velocity distribution and equipartition of energy. Specific heat of mono-, di- and tri-atomic gases. Ideal gas, van der Waals gas and equation of state. Mean free path. Laws of thermodynamics. Zeroth law and the concept of thermal equilibrium. First law and its consequences. Isothermal and adiabatic processes. Reversible, irreversible and quasi-static processes. Second law and entropy. Carnot cycle. Maxwell's thermodynamic relations and simple applications. Thermodynamic potentials and their applications. Phase transitions and the Clausius-Clapeyron equation.

Modern Physics: Inertial frames and Galilean invariance. Postulates of special relativity. Lorentz transformations. Length contraction, time dilation. Relativistic velocity addition theorem, mass-energy equivalence. Blackbody radiation, photoelectric effect, Compton effect, Bohr's atomic model, X-rays. Wave-particle duality, uncertainty principle, Schrödinger equation and its solution for one-, two- and three-dimensional boxes. Reflection and transmission at a step potential, tunnelling through a barrier. Pauli exclusion principle. Distinguishable and indistinguishable particles. Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Structure of the atomic nucleus, mass and binding energy. Radioactivity and its applications. Laws of radioactive decay. Fission and fusion.

Solid State Physics, Devices and Electronics: Crystal structure, Bravais lattices and basis. Miller indices. X-ray diffraction and Bragg's law. Einstein and Debye theories of specific heat. Free electron theory of metals. Fermi energy and density of states. Origin of energy bands. Concept of holes and effective mass. Elementary ideas about dia-, para- and ferromagnetism, Langevin's theory of paramagnetism, Curie's law. Intrinsic and extrinsic semiconductors. Fermi level. p-n junctions, transistors. Transistor circuits in CB, CE and CC modes. Amplifier circuits with transistors. Operational amplifiers. OR, AND, NOR and NAND gates.
Quantum math: the rules – all of them! :-)

In my previous post, I made no compromise, and used all of the rules one needs to calculate quantum-mechanical stuff. However, I didn't explain them there. These rules look simple enough, but let's analyze them now. They're simple and not at the same time, indeed.

[I] The first equation uses the Kronecker delta, which sounds fancy but it's just a simple shorthand: δij (= δji) is equal to 1 if i = j, and zero if i ≠ j, with i and j representing base states. Equation (I) basically says that base states are all different. For example, the angular momentum in the x-direction of a spin-1/2 particle – think of an electron or a proton – is either +ħ/2 or −ħ/2, not something in-between, or some mixture. So 〈 +x | +x 〉 = 〈 −x | −x 〉 = 1 and 〈 +x | −x 〉 = 〈 −x | +x 〉 = 0.

We're talking base states here, of course. Base states are like a coordinate system: we settle on an x-, y- and z-axis, and a unit, and any point is defined in terms of an x-, y- and z-number. It's the same here, except we're talking 'points' in four-dimensional spacetime. To be precise, we're talking constructs evolving in spacetime. To be even more precise, we're talking amplitudes with a temporal as well as a spatial frequency, which we'll often represent as:

a·e^(i·θ) = a·e^(i·(ω·t − k∙x)) = a·e^((i/ħ)·(E·t − p∙x))

The coefficient in front (a) is just a normalization constant, ensuring all probabilities add up to one. It may not be a constant, actually: perhaps it just ensures our amplitude stays within some kind of envelope, as illustrated below.

[Figure: a wavefunction oscillating within a wave-packet envelope]

As for the ω = E/ħ and k = p/ħ identities, these are the de Broglie equations for a matter-wave, which the young Louis de Broglie jotted down as part of his 1924 PhD thesis.
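Law (I) is easy to check numerically. Here is a minimal sketch (Python with NumPy is my choice, not the post's) that writes the ±x base states of a spin-1/2 particle in the standard z-representation (that representation is a conventional choice I'm assuming for the illustration) and verifies that 〈 i | j 〉 = δij:

```python
import numpy as np

# Spin-1/2 base states along x, written in the standard z-representation
# (an assumed, conventional choice): |+x> = (|+z> + |-z>)/sqrt(2), etc.
plus_x  = np.array([1.0, 1.0]) / np.sqrt(2)
minus_x = np.array([1.0, -1.0]) / np.sqrt(2)

def amplitude(chi, phi):
    """<chi|phi>: np.vdot conjugates its first argument, as bra-ket requires."""
    return np.vdot(chi, phi)

# Should print 1, 1, 0 (up to floating-point rounding): <i|j> = delta_ij
print(amplitude(plus_x, plus_x))
print(amplitude(minus_x, minus_x))
print(amplitude(plus_x, minus_x))
```

The same check works for any orthonormal set of base states; that is all Law (I) asserts.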
De Broglie was inspired by the fact that the E·t − p∙x factor is an invariant four-vector product (E·t − p∙x = p_μx^μ) in relativity theory, and noted the striking similarity with the argument of any wave function in space and time (ω·t − k∙x) and, hence, couldn't resist equating both. He was also inspired, of course, by the solution to the blackbody radiation problem, which Max Planck and Einstein had convincingly solved by accepting that the ω = E/ħ equation holds for photons. As he wrote it: "When I conceived the first basic ideas of wave mechanics in 1923–24, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905." (Louis de Broglie, quoted in Wikipedia)

Looking back, you'd of course want the phase of a wavefunction to be some invariant quantity, and the examples we gave in our previous post illustrate how one would expect energy and momentum to impact its temporal and spatial frequency. But I am digressing. Let's look at the second equation. However, before we move on, note the minus sign in the exponent of our wavefunction: a·e^(i·(ω·t − k∙x)). The phase turns counter-clockwise. That's just the way it is. I'll come back to this.

[II] The φ and χ symbols do not necessarily represent base states. In fact, Feynman illustrates this law using a variety of examples, including both polarized as well as unpolarized beams, or 'filtered' as well as 'unfiltered' states, as he calls it, in the context of the Stern-Gerlach apparatuses he uses to explain what's going on. Let me summarize his argument here. I discussed the Stern-Gerlach experiment in my post on spin and angular momentum, but the Wikipedia article on it is very good too.
The principle is illustrated below: an inhomogeneous magnetic field – note the direction of the gradient ∇B = (∂B/∂x, ∂B/∂y, ∂B/∂z) – will split a beam of spin-one particles into three beams. [Matter-particles with spin one are rather rare (Lithium-6 is an example), but three states (rather than two only, as we'd have when analyzing spin-1/2 particles, such as electrons or protons) allow for more play in the analysis. 🙂 In any case, the analysis is easily generalized.]

[Figure: a simple Stern-Gerlach apparatus splitting a spin-one beam into three]

The splitting of the beam is based, of course, on the quantized angular momentum in the z-direction (i.e. the direction of the gradient): its value is either ħ, 0, or −ħ. We'll denote these base states as +, 0 or −, and we should note they are defined in regard to an apparatus with a specific orientation. If we call this apparatus S, then we can denote these base states as +S, 0S and −S respectively.

The interesting thing in Feynman's analysis is the imagined modified Stern-Gerlach apparatus, which – I am using Feynman's words here 🙂 – "puts Humpty Dumpty back together." It looks a bit monstrous, but it's easy enough to understand. Quoting Feynman once more: "It consists of a sequence of three high-gradient magnets. The first one (on the left) is just the usual Stern-Gerlach magnet and splits the incoming beam of spin-one particles into three separate beams. The second magnet has the same cross section as the first, but is twice as long and the polarity of its magnetic field is opposite the field in magnet 1. The second magnet pushes in the opposite direction on the atomic magnets and bends their paths back toward the axis, as shown in the trajectories drawn in the lower part of the figure. The third magnet is just like the first, and brings the three beams back together again, so that leaves the exit hole along the axis."

[Figure: the modified Stern-Gerlach apparatus, which recombines the three beams]

Now, we can use this apparatus as a filter by inserting blocking masks, as illustrated below. But let's get back to the lesson.
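The three-way split can be made concrete with a quick calculation. The sketch below (Python/NumPy, with ħ set to 1, and a tilt angle that is just an assumed example value) builds the spin-1 rotation matrix from the angular momentum operators and computes the probabilities of a +S particle coming out of each of the three channels of a tilted apparatus:

```python
import numpy as np

# Spin-1 angular momentum operators (hbar = 1), basis ordered |+>, |0>, |->.
s = np.sqrt(2)
Jp = np.array([[0, s, 0], [0, 0, s], [0, 0, 0]], dtype=complex)  # raising operator
Jy = (Jp - Jp.conj().T) / 2j

def rotation(beta):
    """exp(-i*beta*Jy), computed by eigendecomposition (Jy is Hermitian)."""
    w, V = np.linalg.eigh(Jy)
    return V @ np.diag(np.exp(-1j * beta * w)) @ V.conj().T

beta = np.pi / 2                        # tilt of the apparatus (assumed value)
plus_S = np.array([1, 0, 0], dtype=complex)

# Amplitudes to come out of the +, 0 and - channels of the tilted apparatus:
amps = rotation(beta) @ plus_S
probs = np.abs(amps)**2
print(probs)        # three non-trivial fractions
print(probs.sum())  # 1: the particle must come out of one of the three channels
```

For a 90-degree tilt the three fractions come out as 1/4, 1/2 and 1/4, and they always sum to one, whatever the angle.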
What about the second 'Law' of quantum math? Well… You need to be able to imagine all kinds of situations now. The rather simple set-up below is one of them: we've got two of these apparatuses in series now, S and T, with T tilted at the angle α with respect to the first.

I know: you're getting impatient. What about it? Well… We're finally ready now. Let's suppose we've got three apparatuses in series, with the first and the last one having the very same orientation, and the one in the middle being tilted. We'll denote them by S, T and S' respectively. We'll also use masks: we'll block the 0 and − state in the S-filter, like in that illustration above. In addition, we'll block the + and − state in the T apparatus and, finally, the 0 and − state in the S' apparatus. Now try to imagine what happens: how many particles will get through? Just try to think about it. Make some drawing or something. Please!

OK… The answer is shown below. Despite the filtering in S, the +S particles that come out do have an amplitude to go through the 0T-filter, and so the number of atoms that come out will be some fraction (α) of the number of atoms (N) that came out of the +S-filter. Likewise, some other fraction (β) will make it through the +S'-filter, so we end up with βαN particles.

[Figure: the S, T and S' filters in series, with βαN particles coming out]

Now, I am sure that, if you'd tried to guess the answer yourself, you'd have said zero rather than βαN but, thinking about it, it makes sense: it's not because we've got some angular momentum in one direction that we have none in the other. When everything is said and done, we're talking components of the total angular momentum here, aren't we? Well… Yes and no. Let's remove the masks from T. What do we get? Come on: what's your guess? N? […] You're right. It's N. Perfect. It's what's shown below.

[Figure: the same set-up with T wide open: all N particles come through]

Now, that should boost your confidence. Let's try the next scenario.
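We can replay these two scenarios numerically. To keep the code short I use a spin-1/2 analogue (two base states instead of three, my simplification) with an assumed tilt angle; the logic is the same: a mask keeps one amplitude, a wide-open filter sums over all of them.

```python
import numpy as np

beta = np.pi / 3                            # tilt of T relative to S (assumed)
up = np.array([1.0, 0.0])                   # +S base state (z-representation)
plus_T  = np.array([np.cos(beta / 2),  np.sin(beta / 2)])   # +T base state
minus_T = np.array([-np.sin(beta / 2), np.cos(beta / 2)])   # -T base state

N = 1000.0  # particles per unit time coming out of the +S filter

# Scenario 1: the mask in T keeps only the +T channel; S' then selects +S again.
alpha_f = abs(np.vdot(plus_T, up))**2       # fraction passing the T mask
beta_f  = abs(np.vdot(up, plus_T))**2       # fraction of those passing S'
print(alpha_f * beta_f * N)                 # a non-zero fraction of N, not zero

# Scenario 2: T wide open: sum the amplitudes over ALL its base states first.
amp = np.vdot(up, plus_T) * np.vdot(plus_T, up) \
    + np.vdot(up, minus_T) * np.vdot(minus_T, up)
print(abs(amp)**2 * N)                      # N: the wide-open filter changes nothing
```

With the assumed 60-degree tilt, the first scenario lets (3/4)·(3/4)·N = 562.5 particles through, and the second lets all N through, exactly as in the story above.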
We block the 0 and − state in the S-filter once again, and the + and − state in the T apparatus, so the first two apparatuses are the same as in our first example. But let's change the S' apparatus: let's close the + and − state there now. Now try to imagine what happens: how many particles will get through? Come on! You think it's a trap, don't you? It's not. It's perfectly similar: we've got some other fraction here, which we'll write as γαN, as shown below.

[Figure: the same cascade with the 0-state selected in S', letting γαN particles through]

Next scenario: S has the 0 and − gate closed once more, and T is fully open, so it has no masks. But, this time, we set S' so it filters the 0-state with respect to it. What do we get? Come on! Think! Please! The answer is zero, as shown below.

[Figure: T wide open and S' selecting the 0-state: zero particles come out]

Does that make sense to you? Yes? Great! Because many think it's weird: they think the T apparatus must 're-orient' the angular momentum of the particles. It doesn't: if the filter is wide open, then "no information is lost", as Feynman puts it. Still… Have a look at it. It looks like we're opening 'more channels' in the last example: the S and S' filters are the same, indeed, and T is fully open, while it selected for 0-state particles before. But no particles come through now, while with the 0-channel only, we had γαN. Hmm… It actually is kinda weird, won't you agree?

Sorry I had to talk about this, but it will make you appreciate that second 'Law' now: we can always insert a 'wide-open' filter and, hence, split the beams into a complete set of base states − with respect to the filter, that is − and bring them back together, provided our filter does not produce any unequal disturbances on the three beams. In short, the passage through the wide-open filter should not result in a change of the amplitudes. Again, as Feynman puts it: the wide-open filter should really put Humpty Dumpty back together again. If it does, we can effectively apply our 'Law':

〈 χ | φ 〉 = ∑ 〈 χ | i 〉〈 i | φ 〉 (summing over all base states i)

For an example, I'll refer you to my previous post. This brings me to the third and final 'Law'.
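The 'more channels, fewer particles' surprise is pure amplitude interference, and it is easy to reproduce. Again a spin-1/2 analogue (my simplification, with the T apparatus tilted at 90 degrees, an assumed value): S selects 'up', S' selects 'down', and we compare a masked T with a wide-open T:

```python
import numpy as np

up   = np.array([1.0, 0.0])                   # what the S filter lets through
down = np.array([0.0, 1.0])                   # what the S' filter now selects
plus_T  = np.array([1.0, 1.0]) / np.sqrt(2)   # T base states for a 90-degree
minus_T = np.array([1.0, -1.0]) / np.sqrt(2)  # tilt (assumed, for illustration)

# Masked T: only the +T channel is open -> multiply the two amplitudes.
amp_masked = np.vdot(down, plus_T) * np.vdot(plus_T, up)
print(abs(amp_masked)**2)   # 0.25: a gamma*alpha-like fraction gets through

# Wide-open T: sum over BOTH channels before squaring -> the amplitudes cancel.
amp_open = amp_masked + np.vdot(down, minus_T) * np.vdot(minus_T, up)
print(abs(amp_open)**2)     # 0: no particles come through at all
```

Opening the second channel adds an amplitude of the same size but opposite sign, so the sum vanishes: the wide-open filter simply reproduces 〈 down | up 〉 = 0.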
[III] The amplitude to go from state φ to state χ is the complex conjugate of the amplitude to go from state χ to state φ:

〈 χ | φ 〉 = 〈 φ | χ 〉*

This is probably the weirdest ‘Law’ of all, even if I should say, straight from the start, that we can actually derive it from the second ‘Law’ and the fact that all probabilities have to add up to one. Indeed, a probability is the absolute square of an amplitude and, as we know, the absolute square of a complex number is also equal to the product of the number itself and its complex conjugate:

|z|² = |z|·|z| = z·z*

[You should go through the trouble of reviewing the difference between the square and the absolute square of a complex number. Just write z as a + ib and calculate (a + ib)² = a² + 2abi − b², as opposed to |z|² = a² + b². Also check what it means when writing z as r·e^(iθ) = r·(cosθ + i·sinθ).]

Let’s apply the probability rule to a two-filter set-up, i.e. the situation with the S and the tilted T filter which we described above, and let’s assume we’ve got a pure beam of +S particles entering the wide-open T filter, so our particles can come out in either of the three base states with respect to T. We can then write:

|〈 +T | +S 〉|² + |〈 0T | +S 〉|² + |〈 −T | +S 〉|² = 1

⇔ 〈 +T | +S 〉〈 +T | +S 〉* + 〈 0T | +S 〉〈 0T | +S 〉* + 〈 −T | +S 〉〈 −T | +S 〉* = 1

Of course, we’ve got two other such equations if we start with a 0S or a −S state. Now, we take the 〈 χ | φ 〉 = ∑ 〈 χ | i 〉〈 i | φ 〉 ‘Law’, and substitute χ and φ for +S, and sum over the base states with regard to T. We get:

〈 +S | +S 〉 = 1 = 〈 +S | +T 〉〈 +T | +S 〉 + 〈 +S | 0T 〉〈 0T | +S 〉 + 〈 +S | −T 〉〈 −T | +S 〉

These equations are consistent only if:

〈 +S | +T 〉 = 〈 +T | +S 〉*, 〈 +S | 0T 〉 = 〈 0T | +S 〉*, 〈 +S | −T 〉 = 〈 −T | +S 〉*,

which is what we wanted to prove. One can then generalize to any states φ and χ. However, proving the result is one thing. Understanding it is something else.
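The proof can at least be checked numerically. In the sketch below (not from the post) states are modeled as unit vectors in C³ and the amplitude 〈 χ | φ 〉 as the Hermitian inner product; the base states play the role of the columns of a random unitary matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def amp(chi, phi):
    """Amplitude <chi|phi> as the Hermitian inner product of state vectors."""
    return np.vdot(chi, phi)   # np.vdot conjugates its first argument

def normalize(v):
    return v / np.linalg.norm(v)

phi = normalize(rng.normal(size=3) + 1j * rng.normal(size=3))
chi = normalize(rng.normal(size=3) + 1j * rng.normal(size=3))

# Law III: <chi|phi> = <phi|chi>*
assert np.isclose(amp(chi, phi), np.conj(amp(phi, chi)))

# Probabilities over a complete set of base states add up to one.
# The columns of a random unitary matrix serve as the base states |iT>.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
probs = [abs(amp(q[:, i], phi)) ** 2 for i in range(3)]
assert np.isclose(sum(probs), 1.0)
```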
One can write down a number of strange consequences, which all point to Feynman’s rather enigmatic comment on this ‘Law’: “If this Law were not true, probability would not be ‘conserved’, and particles would get ‘lost’.” So what does that mean? Well… You may want to think about the following, perhaps. It’s obvious that we can write:

|〈 φ | χ 〉|² = 〈 φ | χ 〉〈 φ | χ 〉* = 〈 χ | φ 〉*〈 χ | φ 〉 = |〈 χ | φ 〉|²

This says that the probability to go from the φ-state to the χ-state is the same as the probability to go from the χ-state to the φ-state. Now, when we’re talking base states, that’s rather obvious, because the probabilities involved are either 0 or 1. However, if we substitute states like +S and −T for φ and χ, or some more complicated states, then it’s a different thing.

My gut instinct tells me this third ‘Law’ – which, as mentioned, can be derived from the other ‘Laws’ – reflects the principle of reversibility in spacetime, which you may also interpret as a causality principle, in the sense that, in theory at least (i.e. not thinking about entropy and/or statistical mechanics), we can reverse what’s happening: we can go back in spacetime. In this regard, we should also remember that the complex conjugate of a complex number in polar form, i.e. a complex number written as r·e^(iθ), is equal to r·e^(−iθ), so the argument in the exponent gets a minus sign. Think about what this means for our a·e^(i·θ) = a·e^(i·(ω·t − k∙x)) = a·e^((i/ħ)·(E·t − p∙x)) function. Taking the complex conjugate of this function amounts to reversing the direction of t and x which, once again, evokes that idea of going back in spacetime.

I feel there’s some more fundamental principle here at work, on which I’ll try to reflect a bit more. Perhaps we can also do something with that relationship between the multiplicative inverse of a complex number and its complex conjugate, i.e. z⁻¹ = z*/|z|². I’ll check it out. As for now, however, I’ll leave you to do that, and please let me know if you’ve got any inspirational ideas on this.
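In the meantime, here is a toy check of that conjugation-as-reversal idea (not from the post; all the numbers are arbitrary): conjugating a plane-wave amplitude does exactly the same thing as flipping the signs of t and x, and the z⁻¹ = z*/|z|² relationship is easy to confirm along the way:

```python
import numpy as np

# z^{-1} = z*/|z|^2 for any nonzero complex number
z = 3.0 - 4.0j
assert np.isclose(1 / z, np.conj(z) / abs(z) ** 2)

# Conjugating a plane-wave amplitude a*exp(i*(w*t - k*x)) is the same as
# reversing the signs of t and x (going "back in spacetime"):
a, w, k, t, x = 0.5, 2.0, 1.5, 0.8, 0.3
forward = a * np.exp(1j * (w * t - k * x))
backward = a * np.exp(1j * (w * (-t) - k * (-x)))
assert np.isclose(np.conj(forward), backward)
```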
🙂 So… Well… Goodbye as for now. I’ll probably talk about the Hamiltonian in my next post. I think we really did a good job in laying the groundwork for the hardcore stuff, so let’s go for that now. 🙂

Post Scriptum: On the Uncertainty Principle and other rules

After writing all of the above, I realized I should add some remarks to make this post somewhat more readable. First thing: not all of the rules are there—obviously! Most notably, I didn’t say anything about the rules for adding or multiplying amplitudes, but that’s because I wrote extensively about that already, and so I assume you’re familiar with that. [If not, see my page on the essentials.]

Second, I didn’t talk about the Uncertainty Principle. That’s because I didn’t have to. In fact, we don’t need it here. In general, all popular accounts of quantum mechanics have an excessive focus on the position and momentum of a particle, while the approach in this and my previous post is quite different. Of course, it’s Feynman’s approach to QM really. Not ‘mine’. 🙂 All of the examples and all of the theory he presents in his introductory chapters in the Third Volume of Lectures, i.e. the volume on QM, are related to things like:

• What is the amplitude for a particle to go from spin state +S to spin state −T?
• What is the amplitude for a particle to be scattered, by a crystal, or from some collision with another particle, in the θ direction?
• What is the amplitude for two identical particles to be scattered in the same direction?
• What is the amplitude for an atom to absorb or emit a photon? [See, for example, Feynman’s approach to the blackbody radiation problem.]
• What is the amplitude to go from one place to another?
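Those adding-and-multiplying rules are easy to illustrate numerically, by the way. The sketch below is not from the post, and the amplitude values in it are made up purely for illustration: it just contrasts adding amplitudes (indistinguishable alternatives) with adding probabilities (distinguishable alternatives):

```python
import cmath

# Two paths with amplitudes a1 and a2 (arbitrary illustrative values).
a1 = 0.6 * cmath.exp(1j * 0.3)
a2 = 0.5 * cmath.exp(1j * 2.1)

# Indistinguishable alternatives: add the amplitudes first, then square.
p_interfering = abs(a1 + a2) ** 2

# Distinguishable alternatives: add the probabilities directly.
p_classical = abs(a1) ** 2 + abs(a2) ** 2

# The difference is the interference term 2*Re(a1*conj(a2)).
assert p_interfering != p_classical
```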
In short, you read Feynman, and it’s only at the very end of his exposé that he starts talking about the things popular books start with, such as the amplitude of a particle to be at point (x, t) in spacetime, or the Schrödinger equation, which describes the orbital of an electron in an atom. That’s where the Uncertainty Principle comes in and, hence, one can really avoid it for quite a while. In fact, one should avoid it for quite a while, because it’s now become clear to me that simply presenting the Uncertainty Principle doesn’t help all that much to truly understand quantum mechanics. Truly understanding quantum mechanics involves understanding all of these weird rules above. To some extent, that involves dissociating the idea of the wavefunction from our conventional ideas of time and position. From the questions above, it should be obvious that ‘the’ wavefunction does not actually exist: we’ve got a wavefunction for anything we can, and possibly want to, measure.

That brings us to the question of the base states: what are they? Feynman addresses this question in a rather verbose section of his Lectures titled: What are the base states of the world? I won’t copy it here, but I strongly recommend you have a look at it. 🙂

I’ll end here with a final equation that we’ll need frequently: the amplitude for a particle to go from one place (r1) to another (r2). It’s referred to as a propagator function, for obvious reasons—one of them being that physicists like fancy terminology!—and it looks like this:

〈 r2 | r1 〉 = e^((i/ħ)·p·r12)/r12, with r12 the distance between the two points.
Sometimes (often?) a structure depending on several parameters turns out to be symmetric w.r.t. interchanging two of the parameters, even though the definition gives a priori no clue of that symmetry.

As an example, I'm thinking of the Littlewood–Richardson coefficients: If defined by the skew Schur function $s_{\lambda/\mu}=\sum_\nu c^\lambda_{\mu\nu}s_\nu$, where the sum is over all partitions $\nu$ such that $|\mu|+|\nu|=|\lambda|$ and $s_{\lambda/\mu}$ itself is defined e.g. by $s_{\lambda/\mu}=\det(h_{\lambda_i-\mu_j-i+j})_{1\le i,j\le n}$, it is not at all straightforward to see from that definition that $c^\lambda_{\mu\nu}=c^\lambda_{\nu\mu}$.

Granted, this way of looking at it may seem a bit artificial, as I guess that in many such cases it is possible to come up with a "higher level" definition that shows the symmetry right away (e.g., in the above example, the usual (?) definition of $c_{\lambda\mu}^\nu$ via $s_\lambda s_\mu =\sum c_{\lambda\mu}^\nu s_\nu$), but showing the equivalence of both definitions may be more or less involved. So I am aware that it might just be a matter of "choosing the right definition". Therefore, maybe it would be better to think of the question as asking especially for cases where, historically, the symmetry of a certain structure was only established later, after the structure was defined or obtained in a different way first.

Another example that would fit here: the Perfect graph theorem, featuring a 'conceptual' symmetry between a graph and its complement.

What are other examples of "unexpected" or at least surprising symmetries? (NB. The 'combinatorics' tag seemed the most obvious to me, but I won't be surprised if there are upcoming examples far away from combinatorics.)

Quadratic reciprocity. – Terry Tao Dec 13 '13 at 22:55

The relation between $\zeta(1-x)$ and $\zeta(x)$ for the Riemann $\zeta$ function.
– Lev Borisov Dec 14 '13 at 2:26

Number of partitions of $n$ into no more than $k$ terms that are each no larger than $l$. The symmetry between $l$ and $k$ might not be immediately obvious to novices. – Yoav Kallus Dec 14 '13 at 2:46

The Peano definition of addition, even. – Joe Z. Dec 14 '13 at 2:56

I saw the title and my first thought was "Littlewood-Richardson coefficients". :) – darij grinberg Dec 14 '13 at 20:55

33 Answers

If $a$ and $b$ are positive integers, and you make the definition $$ a \cdot b = \underbrace{a + \cdots + a}_{b \text{ times} }$$ then it's a slightly surprising fact that $a \cdot b$ is actually equal to $b \cdot a$.

Indeed, this fails in general when $a,b$ are ordinals. – Terry Tao Dec 15 '13 at 4:51

It's even more surprising if you start with the inductive definitions of plus and times. The proof that $ab=ba$ comes as Proposition 72 in the first development of this theory, by Grassmann in 1861. – John Stillwell Jan 13 '14 at 9:12

A nice example from classical mechanics is this: there is a hidden $SO(4)$ symmetry in the elliptical orbits of a particle in an inverse square potential, i.e. the Kepler problem. The system has an obvious $SO(3)$ symmetry because the inverse square law is invariant under rotations. But there's no a priori clue that an $SO(4)$ symmetry exists in this system. This carries over to the quantum mechanical case when you solve the Schrödinger equation for an inverse square potential. The result is that the hidden $SO(4)$ symmetry explains the "coincidence" that many hydrogen atom states have the same energy.

1. I think that if you put yourself back in the position of someone discovering this for the first time, the equality (under suitable hypotheses) $${\partial^2f\over\partial x\partial y}={\partial^2 f\over\partial y\partial x}\quad (1)$$ should count.

2.
Here's a surprising application of that surprising equality. Suppose you're a profit-maximizing competitive firm, hiring both labor ($L$) (at a wage rate of $W$) and capital ($K$) (at a rental rate of $R$). Then an increase in $W$ will, in general, lead you to reduce your output and so employ less capital, but at the same time lead you to substitute capital for labor and so employ more capital. On balance, the derivative $dK/dW$ could be either positive or negative. Likewise for the derivative $dL/dR$. It does not seem to me to be at all intuitively obvious that these derivatives even have the same sign, much less that they are equal. But if one takes $f$ in (1) to be profit as a function of $x$ (labor) and $y$ (capital), then one discovers that in fact $${dK\over dW}={dL\over dR}.$$ (Of course this looks more symmetric if you write $X_1$ and $X_2$ for labor and capital, and $P_1$ and $P_2$ for the wage rate and the rental rate.)

Higher homotopy groups $\pi_n(X)$ are abelian. This is quite surprising if you see the definition for the first time, having probably encountered the classical fundamental group before, which is not abelian in general. In fact, when they were introduced, higher homotopy groups were meant to serve as a generalization of the fundamental group, in contrast to the abelian homology groups; but once it was recognized that they are abelian too, they no longer seemed such a nice generalization.

Rolling one surface on another without slipping binds the velocity of the rolling surface and its angular velocity, giving a rank 2 subbundle in the tangent bundle of the 5-dimensional space of tangential positionings of the 2 surfaces in space.
This subbundle, when you roll one sphere on another, has an 8-dimensional symmetry group, unless one sphere has exactly one third the radius of the other sphere, in which case the subbundle is preserved by a 14-dimensional group of diffeomorphisms of the 5-dimensional manifold: the split real form of the simple Lie group $G_2$.

This subbundle is my favorite example of a non-integrable distribution (if the surfaces are "generic", at least) - you can physically see that rolling a sphere in an "infinitesimal square" on a plane makes the sphere rotate. – Peter Samuelson Dec 14 '13 at 15:29

Consider the Desargues configuration. It consists of (1) two triangles, say $ABC$ and $A'B'C'$, such that the lines $AA'$, $BB'$, and $CC'$ all meet at a point $P$, and (2) the three points of intersection of corresponding sides $X=(BC)\cap(B'C')$, $Y=(AC)\cap(A'C')$, and $Z=(AB)\cap(A'B')$. Desargues's theorem says that then $X$, $Y$, and $Z$ are collinear. The Desargues configuration consists of the 10 points mentioned above ($A,B,C,A',B',C',P,X,Y,Z$) and the 10 lines mentioned (the three sides of both triangles, the three lines through $P$, and the line $XYZ$). The surprising (to me) symmetry is an action of the cyclic group of order 5. In fact, the graph whose vertices are the 10 points of the Desargues configuration and whose edges join any two points that are not together on any of the configuration's 10 lines is the Petersen graph, which is usually drawn in a way that makes the cyclic 5-fold symmetry visible.

Have used Desargues for easily a hundred times in my schooldays and never realized this. I actually wasn't aware that the Petersen graph had any deeper meaning than that of a counterexample to some conjectures of days gone by. Nice!! – darij grinberg Dec 14 '13 at 21:01

The joint distribution of IID normal random variables is spherically symmetric.
Although invariance under permutations of the coordinates is obvious for any IID variables, spherical symmetry is rare. In fact, this characterizes the normal distribution.

Hermite's reciprocity: as representations of $GL_2$, we have $$ S^k(S^l\mathbb{C}^2)\simeq S^l(S^k\mathbb{C}^2). $$

In fact, the "correct" definition of Littlewood-Richardson coefficients shows a surprising $S_3$-symmetry among all the indices $\lambda,\mu,\nu$. A further example related to symmetric functions is the symmetry between the area and bounce statistics of Dyck paths. No combinatorial proof of this symmetry is known.

There are many enumeration problems with "hidden symmetry." For instance, what is the probability that 1 and 2 are in the same cycle of a (uniform) random permutation of $1,2,\dots,n$? More interesting, suppose that I shuffle an ordinary deck of 26 red cards and 26 black cards. I turn the cards face up one at a time. At any point before the last card is dealt, you can guess that the next card is red. What strategy maximizes the probability of guessing correctly? The surprising answer is that all strategies have a probability of 1/2 of success! There is a very elegant way to see this.

@StevenLandsburg: imagine the dealer turns over the bottom card of the deck when you guess, instead of the top one. Clearly this situation is symmetric to the one described above, but also clearly every strategy gives 50/50 odds as the outcome is determined before the game even starts. – Sam Hopkins Dec 14 '13 at 1:00

Can you fix the first link to point to the abstract rather than directly to the PDF? Thank you! – Harry Altman Dec 14 '13 at 18:11

From school days... Take positive reals x,y,z,w.
The following statement is actually symmetric in x,y,z,w: "there exists an equilateral triangle of side length w, and a point whose distances from the three vertices are x,y,z".

A quick proof: Let $ABC$ be equilateral and $P$ arbitrary. Construct $BPQ$ equilateral. Let $AB=AC=BC=w$, $AP=x$, $BP=y$ and $CP=z$. Then $BP=PQ=BQ=y$ by construction, $CP=z$ and $CB=w$ obviously, so it remains to check that $CQ=x$. Now note that triangle $CBQ$ is the $60^\circ$ rotation of $ABP$ around $B$.

Define the rank of a matrix, pedestrianly, as the maximum number of linearly independent columns: it is not obvious that this equals the maximum number of linearly independent rows.

The combinatorial definition of the Schur functions is $$ s_\lambda(x) = \sum_{T \in SSYT(\lambda)} x^{cont(T)} $$ where $SSYT(\lambda)$ is the set of semi-standard Young tableaux of shape $\lambda$ and $x^{cont(T)}$ is the product over all $i$ of $x_i^{\# i\text{'s in }T}$. This is not manifestly a symmetric function. The Bender-Knuth involution proves that $s_\lambda(x)$ is invariant after swapping $x_i$ with $x_{i+1}$, and thus $s_\lambda(x)$ is, indeed, symmetric.

And more startlingly (or at least far less obviously), the Stanley symmetric functions and their generalizations. – darij grinberg Jan 22 '14 at 17:43

Morley's trisector theorem allows you to build a triangle which is maximally symmetric out of one which has no symmetry at all.

The outer automorphism of $S_6$.

This is a rather specialized example, but dear to my heart. Consider the set of "Richardson subvarieties" of the flag manifold $GL_n/B$, intersections of Schubert and opposite Schubert varieties. The only part of the Weyl group that preserves this set is $\{1,w_0\}$, where the $w_0$ exchanges Schubert and opposite Schubert varieties.
Now project these varieties to a $k$-Grassmannian, obtaining "positroid varieties". This includes the Richardson varieties in the Grassmannian, and many new varieties. Now the part of the Weyl group that preserves this collection is the dihedral group $D_n$! The symmetry has gotten bigger by a factor of $n$.

I always found $\mathrm{Tor}_R\left(M,N\right) \cong \mathrm{Tor}_R\left(N,M\right)$ for a commutative ring $R$ and two $R$-modules $M$ and $N$ to be mysterious. Then again, I have no idea about homology and thus wouldn't be surprised if this is a triviality from an appropriate viewpoint.

Volker Strehl's generalized cyclotomic identity (Corollary 6 in Volker Strehl, Cycle counting for isomorphism types of endofunctions) states that $\prod\limits_{k\geq 1} \left(\dfrac{1}{1-az^k}\right)^{M_k\left(b\right)} = \prod\limits_{k\geq 1}\left(\dfrac{1}{1-bz^k}\right)^{M_k\left(a\right)}$ in the formal power series ring $\mathbb Q\left[\left[z,a,b\right]\right]$, where $M_k\left(t\right)$ denotes the $k$-th necklace polynomial $\dfrac{1}{k}\sum\limits_{d\mid k} \mu\left(d\right) t^{k/d}$. I recall this being not particularly difficult, but quite useful.

Every nontrivial commutativity of some family of operators probably qualifies as an unexpected symmetry. Here are three examples:

1. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $Y_i \in \mathbb Z\left[S_n\right]$ by $Y_i = \left(1,i\right) + \left(2,i\right) + ... + \left(i-1,i\right)$ (a sum of $i-1$ transpositions). Then, $Y_i Y_j = Y_j Y_i$ for all $i$ and $j$ in $\left\{1,2,...,n\right\}$. This is a simple exercise, and the $Y_i$ are called the Young-Jucys-Murphy elements.

2. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$.
For every $i\in \left\{0,1,...,n\right\}$, define an element $\mathrm{Sch}_i \in \mathbb Z\left[S_n\right]$ as the sum of all permutations $\sigma \in S_n$ satisfying $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$. (Note that $\mathrm{Sch}_0 = \mathrm{Sch}_1$ when $n\geq 1$.) Then, $\mathrm{Sch}_i \mathrm{Sch}_j = \mathrm{Sch}_j \mathrm{Sch}_i$ for all $i$ and $j$ in $\left\{0,1,...,n\right\}$. In fact, $\mathrm{Sch}_i \mathrm{Sch}_j = \sum\limits_{k=0}^{\min\left\{n,i+j-n\right\}} \dbinom{n-j}{i-k} \dbinom{n-i}{j-k} \left(n+k-i-j\right)! \mathrm{Sch}_k$, which makes the symmetry maybe not that surprising (no similar equalities hold in cases 1 and 3!). See Manfred Schocker, Idempotents for derangement numbers, Discrete Mathematics, vol. 269 (2003), pp. 239-248 for a proof.

3. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $\mathrm{RSW}_i \in \mathbb Z\left[S_n\right]$ as $\sum\limits_{1\leq u_1 < u_2 < ... < u_i\leq n} \sum\limits_{\substack{\sigma\in S_n, \\ \sigma\left(u_1\right) < \sigma\left(u_2\right) < ... < \sigma\left(u_i\right)}} \sigma$. Then, $\mathrm{RSW}_i \mathrm{RSW}_j = \mathrm{RSW}_j \mathrm{RSW}_i$ for all $i$ and $j$ in $\left\{1,2,...,n\right\}$. This is Theorem 1.1 in Victor Reiner, Franco Saliola, Volkmar Welker, Spectra of Symmetrized Shuffling Operators, arXiv:1102.2460v2, and a nice proof remains to be found.

The Tor symmetry is basically just that $M \otimes N \cong N \otimes M$, and you take the derived functors of both sides. Generalizing, any and all nice properties of (co)homology groups would seem to be mysterious symmetries if you consider the definition to be messing around with projective or injective modules, and not something more intrinsic like derived functors.
– Ryan Reich Dec 15 '13 at 5:14

A couple very disparate answers that spring to mind (fortunately, this is community wiki, and actual experts should feel very free to improve my exposition of either):

The negative gradient flow for the Chern-Simons functional on a 3-manifold $M$ naturally satisfies a four-dimensional symmetry. Namely, if one has a principal $G$-bundle on $M$ and some connection $A$ on this $G$-bundle (which I'll carelessly think of as a $\mathfrak{g}$-valued $1$-form on $M$), the Chern-Simons functional $CS(A) = \int_M \Big( dA + \frac{2}{3} A \wedge A \Big) \wedge A$ is a perfectly well-defined function on the space of connections, and one can attempt to perform the negative gradient flow with respect to a natural metric on this space of connections (this being a very natural thing to do from the point of view of Morse theory, for example). If you want, you can interpret the solution to this flow as a connection on the bundle pulled back to $M \times \mathbb{R}$, and while this connection clearly transforms nicely under $Diff(M)$, there's no particular reason to think it's a well-behaved object under the diffeomorphism group of the four-manifold $M \times \mathbb{R}$.

However, this negative gradient flow equation turns out to be exactly the anti-self-dual equation $F^+ = 0$, where the curvature is $F = dA + A \wedge A$ and its self-dual part is $F^+ = \frac{1}{2}(F + *F)$. This equation manifestly respects the symmetries of the entire four-manifold, and this point of view is a very effective one for proving even basic things, like gauge invariance, of the Chern-Simons functional. Witten is very fond of making this point and my understanding is that this insight allowed him to extend his QFT description of the Jones polynomial to a QFT description of its categorification, Khovanov homology.

And now for something completely different: associativity of the quantum cup product.
A familiar object to many people is the cohomology ring $H^*(X)$ of a space $X$, which is associative, (graded) commutative, and just generally great. If $X$ is a symplectic manifold, there's an interesting way to deform the multiplication on this ring using counts of $J$-holomorphic curves passing through various cycles. In effect, one picks a compatible almost-complex structure on the symplectic manifold, and then if one writes $\alpha * \beta = \sum_{\gamma} c_{\alpha \beta \gamma} \gamma$, where we think of $\alpha, \beta, \gamma$ as cycles in $X$ (using Poincare duality), the coefficient $c_{\alpha \beta \gamma}$ is a generating function in some formal variables, the coefficients of which are counts of holomorphic curves of fixed genus and homology class intersecting our three cycles $\alpha, \beta, \gamma$. Using this deformed multiplication gives the quantum cohomology ring $QH^*(X)$.

Now, some properties of this ring, like graded commutativity, are fairly easy to see from the definition, but associativity is really quite tricky! (I realise this isn't exactly what you asked in your question as it's not just a symmetry of some coefficient, but you can phrase associativity as a symmetry of something or other -- if you want to be technical, a four-point Gromov-Witten invariant -- so I think it qualifies.) The associativity is somehow not so bad to see in the algebro-geometric case (or perhaps this is just my bias as an algebraic geometer), but in symplectic geometry you really need some nontrivial analytic estimates at some point in the proof. And you get a lot out of it! Associativity of this quantum cohomology ring encapsulates a wealth of information on enumerative geometry counts associated to $X$; indeed, it was basically this idea that allowed Kontsevich to find his recursion for the number of degree $d$ curves through $3d + 1$ general points in $\mathbb{P}^2$.
Finally, I kind of want to mention strange duality, even though that now really isn't an answer to the question, as you have to modify one side or the other; I'll just copy a very quick summary from the abstract: "For X a compact Riemann surface of positive genus, the strange duality conjecture predicts that the space of sections of certain theta bundle on moduli of bundles of rank r and level k is naturally dual to a similar space of sections of rank k and level r." The paper itself is a great place to learn more about it if you're interested!

Let $G$ be a finite group with order $n$. For each $d$ dividing $n$, the number of subgroups of $G$ of order $d$ equals the number of subgroups of order $n/d$ if $G$ is abelian. More broadly, the lattice of subgroups of a finite abelian group looks the same if you flip it around by 180 degrees. This is not at all obvious at the level at which the statement can first be understood, essentially because there is no natural way to construct subgroups of index $d$ from subgroups of order $d$ in a general finite abelian group with order divisible by $d$. It is not clear at a beginning level how the commutativity of the group leads to such conclusions.

Here is an example from potential theory where symmetry is a not-so-obvious property: the Green function of a bounded open subset $\Omega \subset \mathbb{C}$. More precisely, having specified a point $a \in \Omega$, one defines the classical Green function for $\Omega$ with pole at $a$, denoted $G_\Omega(\cdot\,;a)$, as a function on $\mathbb{C}$ with the following properties: (i) $G_\Omega(\cdot\,;a)$ is harmonic in $\Omega \setminus \{a\}$; (ii) $z \mapsto G_\Omega(z;a) + \log |z-a|$ extends to a harmonic function on $\Omega$; (iii) for each $w \in \partial \Omega$, $\lim_{z \to w} G_\Omega(z;a)=0$. The symmetry property says that $G_\Omega(z;w)=G_\Omega(w;z)$ for any $z,w \in \Omega$ such that $z \ne w$.
Note that the functions on either side of the equation are different: one has a pole at $w$ and the other at $z$. It is not very hard to prove the symmetry property, but it is not obvious either. The existence of such a function is related to the solution of a Dirichlet problem for the Laplace equation in $\Omega$. Analogous functions can be considered for domains in $\mathbb{R}^n, \ n>2$, or in $\mathbb{C}^n, n > 1$, and they also enjoy the symmetry property.

In number theory, Terry Tao already mentioned Quadratic Reciprocity in his first comment, but there's also the reciprocity formula $$ s(b,c) + s(c,b) = \frac1{12}\left( \frac{b}{c} + \frac1{bc} + \frac{c}{b} \right) - \frac14 $$ for Dedekind sums, symmetrized further in Rademacher's formula $$ D(a,b;c) + D(b,c;a) + D(c,a;b) = \frac1{12} \frac{a^2+b^2+c^2}{abc} - \frac14. $$ [Here $D(a,b;c) = \sum_{n\,\bmod\,c} ((an/c)) ((bn/c))$, where $((\cdot))$ is the sawtooth function taking $x$ to $0$ if $x \in {\bf Z}$ and to $x - \lfloor x \rfloor - 1/2$ otherwise; and the Dedekind sum is the special case $s(b,c) = D(1,b;c)$.]

But I don't understand what is so special about this, at least in terms of symmetry: for about any function $s(\cdot,\cdot)$, including the Legendre symbol, $s(b,c)+s(c,b)$ or $s(b,c)s(c,b)$ is symmetric in $b$ and $c$. Where is the surprise? – Wolfgang Dec 18 '13 at 18:01

@Wolfgang asks a fair question. To add to Matt Young's answer, we can define $s'(b,c) = s(b,c) + 1/8 - b/12c - 1/24bc$, and then the reciprocity formula says that $s'(b,c)$ is antisymmetric: $s'(b,c) = -s'(c,b)$. – Noam D. Elkies Dec 18 '13 at 20:25

@NoamD.Elkies Granted. That reminds me of the relation between $\zeta(1-s)$ and $\zeta(s)$, cast as $\Xi(1-s)=\Xi(s)$ with appropriate $\Xi$.
– Wolfgang Dec 19 '13 at 7:56

Consider a differential inequality, like the Hardy-Sobolev inequality $$\left|\int\int_{{\mathbb R}^N\times{\mathbb R^N}}\frac{\overline{f(x)}g(y)}{|x-y|^\lambda}dxdy\right|\leq C\|f\|_r\|g\|_s.$$ Even if you put the sharp constant $C$ in this inequality, for most functions the inequality is strict. Now look for maximizers, i.e., functions for which the LHS is equal to the RHS: they are highly symmetric functions, actually spherically symmetric and very smooth. This is a general phenomenon, connected with the monotonicity of $L^p$ and Sobolev norms with respect to symmetrization procedures.

Characters of affine Kac-Moody Lie algebras and of the Virasoro Lie algebra are modular forms. These modular symmetries are far from evident from the definitions.

Maxwell's equations were originally formulated within Newtonian physics. However, special relativity revealed that these equations have a surprising symmetry under Lorentz transformations: the equations remain true in a moving reference frame. The transformation of the values is such that (loosely speaking) what looks like pure electric charge in one reference frame can be electric current and charge in another reference frame; and what looks like a pure electric field in one reference frame can be a mix of magnetic and electric fields in another reference frame.

The Jacobson radical of a ring $R$ is defined to be the intersection of all maximal left ideals in $R$. It turns out that the Jacobson radical is the intersection of all maximal right ideals in $R$ as well, so the Jacobson radical does not depend on whether one considers left or right ideals. In particular, the Jacobson radical of a ring is a two-sided ideal. In fact, there are several characterizations of the Jacobson radical that do not appear to be symmetric with respect to "leftness" and "rightness", including the following.
1. The intersection of all maximal left ideals.
2. $\bigcap\{\textrm{Ann}(M) \mid M\ \textrm{is a simple left}\ R\textrm{-module}\}$
3. $\{x\in R \mid 1-rx\ \textrm{has a left inverse for each}\ r\in R\}$
4. $\{x\in R \mid 1-rx\ \textrm{has a two-sided inverse for each}\ r\in R\}$

Betti numbers: the symmetry $\dim(H^k(M^n))=\dim(H^{n-k}(M^n))$, for a closed orientable manifold $M^n$, does not immediately follow from the definition.

@DanielLitt, I know, I just don't want to deal with torsion, and for the purpose of this question the symmetry of the Betti numbers is sufficient. – Michael Dec 13 '13 at 21:23

My point is that the symmetry does not come from the Betti numbers, but from the space $M$; I don't think this is an example of what the question asks for. – Daniel Litt Dec 13 '13 at 23:40

There is a philosophy that the functional equation of a zeta function should be a consequence of Poincaré duality on some exotic space. For zeta functions of varieties over finite fields, this was made rigorous in the 1960s, but over number fields it is still just a philosophy. So we have two non-obvious symmetries that are the same, but not obviously the same. In other words, we have a non-obvious symmetry between non-obvious symmetries. – JBorger Jan 12 '14 at 19:01

Two (unrelated) examples from combinatorics:

The first is Proposition 7.19.9 of volume 2 of Stanley's "Enumerative Combinatorics." Define a descent of a (skew) standard Young tableau $T$ of shape $\lambda/\mu$ to be an index $i$ such that $i+1$ is in a lower row than $i$. Let $D(T)$ denote the set of descents of $T$. Then for any $|\lambda/\mu|=n$ and any $1 \leq i \leq n-1$, the number of SYTs $T$ of shape $\lambda/\mu$ such that $i \in D(T)$ is independent of $i$.

The second follows from a bijection of De Médicis and Viennot (1994, Adv. Appl. Math.). Let $\mathcal{M}_n$ denote the set of perfect matchings of $[2n] := \{1,2,\ldots,2n\}$, i.e. the set of partitions of $[2n]$ into pairs. Let $M \in \mathcal{M}_n$.
For $p = \{a,b\}, q = \{c,d\} \in M$ with $a<b$, $c<d$, and $a<c$, we say that $p$ and $q$ cross if $a < c < b < d$, and we say they nest if $a<c<d<b$. Finally, we say they are aligned if they neither cross nor nest, i.e., $a<b<c<d$. Define:

$\mathrm{ne}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ nest}\}|;$
$\mathrm{cr}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ cross}\}|;$
$\mathrm{al}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ are aligned}\}|.$

Then $\sum_{M \in \mathcal{M}_n}x^{\mathrm{ne}(M)}y^{\mathrm{cr}(M)}=\sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{ne}(M)}$. However, crossings and alignments (or nestings and alignments) are not equidistributed: $\sum_{M \in \mathcal{M}_n}x^{\mathrm{al}(M)}y^{\mathrm{cr}(M)} \neq \sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{al}(M)}$.

Let $r_4(n)$ be the number of $4$-tuples $(a,b,c,d)\in {\bf Z}^4$ satisfying $a^2+b^2+c^2+d^2=n$. Then $\sum_{n\geq 0}r_4(n)e^{2\pi i\, nz}\,dz$ is a holomorphic differential form on the upper half-plane that is invariant under a subgroup of finite index in ${\rm SL}_2(\bf Z)$ (acting by $z \mapsto \frac{az+b}{cz+d}$). The same is true if you replace $r_4(n)$ by $a_n(E)$, where:

-- $E$ is an elliptic curve defined over $\bf Q$,
-- if $p$ is a prime number, $a_p(E)=p+1-N_p(E)$, where $N_p(E)$ is the number of points of $E$ over ${\bf Z}/p{\bf Z}$,
-- $a_n(E)$, for $n\in\bf N$, is defined by $\sum_n a_n(E)n^{-s}=\prod_p(1-a_p(E)p^{-s}+p^{1-2s})^{-1}$ (the product is taken over the prime numbers $p$ such that $E$ remains an elliptic curve modulo $p$, which excludes finitely many of them).

I would like to add an example coming from the area of additive theory known as Freiman's structure theory. If I am not (too) blind, this has not been mentioned yet, and hopefully it qualifies as an appropriate answer.
Assume that $\mathbb{A} = (A, +)$ is a (possibly non-commutative) semigroup, and let $X$ be a non-empty subset of $A$. Given an integer $n \ge 1$, we write $nX$ for $\{x_1+\cdots + x_n: x_1, \ldots, x_n \in X\}$. In principle, we have $1 \le |nX| \le |X|^n$, and for all $k \in \mathbb{N}^+$ and $i \in \{1, \ldots, k\}$ we can actually find a pair $(\mathbb{A}, X)$ such that $|X| = k$ and $|nX| = i$, with the result that, in general, not much can be concluded about the "structure" of $X$. However, if $|nX|$ is sufficiently small with respect to $|X|$ and $\mathbb{A}$ has suitable properties, then "surprising" things start happening, and for instance we have the following:

Theorem. If $\mathbb{A}$ is a linearly orderable semigroup (i.e., there exists a total order $\preceq$ on $A$ such that $x + z \prec y + z$ and $z + x \prec z + y$ for all $x,y,z \in A$ with $x \prec y$) and $|2X| \le 3|X|-3$, then the smallest subsemigroup of $\mathbb{A}$ containing $X$ is abelian.

This implies at once an analogous result by Freiman and coauthors which is valid for linearly ordered groups; see Theorem 1.2 in [F] (a preprint can be found here). I don't know of any similar result for larger values of $n$.

[F] G. Freiman, M. Herzog, P. Longobardi, and M. Maj, Small doubling in ordered groups, to appear in J. Austr. Math. Soc.

In the definition of "Latin square" there is complete symmetry between the roles of "row", "column" and "symbol", so any of the 6 permutations of those roles produces another Latin square.

The "Little Prince" problem, which I learned from Greg Kuperberg, is a geometric answer to your question. Here is the problem: the Little Prince stands in (I do mean in, not on) the plane and wants to shape his planet from a given quantity of matter (of given density) in order to maximize the gravity he feels. The most efficient way to go is to shape the planet as a round disk.
The problem has a distinguished point, the position of the Little Prince, but it turns out to have a symmetric solution. Note that the same problem in higher dimensions does not have a symmetric solution.

Let me add two points that make this example all the more interesting. First, the result still stands if the Little Prince is also authorized to shape the space (or rather, the surface) he lives in, with the constraint that it should have nonpositive curvature and be simply connected: he should still make the planet a round flat disk. Second, if one takes a general domain and integrates the inequality between the felt gravity and the optimal gravity, one gets the isoperimetric inequality.
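As a quick sanity check of the Dedekind-sum reciprocity formula quoted in an earlier answer, the short Python script below (my own sketch; the helper names `saw` and `dedekind_s` are made up here) verifies the identity exactly, in rational arithmetic, for all coprime pairs up to 20:

```python
from fractions import Fraction
from math import gcd

def saw(x: Fraction) -> Fraction:
    # Sawtooth ((x)): 0 if x is an integer, else x - floor(x) - 1/2.
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_s(b: int, c: int) -> Fraction:
    # s(b, c) = D(1, b; c) = sum over n mod c of ((n/c)) ((bn/c)).
    return sum(saw(Fraction(n, c)) * saw(Fraction(b * n, c)) for n in range(c))

for b in range(1, 21):
    for c in range(1, 21):
        if gcd(b, c) == 1:
            lhs = dedekind_s(b, c) + dedekind_s(c, b)
            rhs = (Fraction(b, c) + Fraction(1, b * c) + Fraction(c, b)) / 12 - Fraction(1, 4)
            assert lhs == rhs  # reciprocity holds exactly
```

Working over `Fraction` rather than floats makes the check exact, so a single run confirms the identity for every pair tested.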
Psychology Wiki

Schrödinger's cat

[Figure caption: If the nucleus in the bottom left decays, the Geiger counter to its right will sense it and trigger the release of the gas. In one hour, there is a 50% chance that the nucleus will decay, and therefore that the gas will be released and kill the cat.]

Schrödinger's cat is a seemingly paradoxical thought experiment devised by Erwin Schrödinger that attempts to illustrate the incompleteness of an early interpretation of quantum mechanics when going from subatomic to macroscopic systems. Schrödinger proposed his "cat" after debates with Albert Einstein over the Copenhagen interpretation, which Schrödinger criticized. He argued, in essence, that if a cat could be completely isolated from external interference (decoherence), then its state could only be known as a superposition (combination) of possible states (eigenstates), because finding out (measuring the state) cannot be done without the observer interfering with the experiment: the measurement system (the observer) becomes entangled with the experiment.

The thought experiment serves to illustrate the strangeness of quantum mechanics and the mathematics necessary to describe quantum states. The idea of a particle existing in a superposition of possible states, while a fact of quantum mechanics, is a concept that does not scale to large systems (like cats), which do not behave probabilistically in any observable way. Philosophically, positions that emphasise either probability or determined outcomes are called (respectively) positivism and determinism.

The experiment

Schrödinger wrote:

"One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small, that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts."

[Figure caption: An illustration of both states, a dead and living cat. According to quantum theory, after an hour the cat is in a quantum superposition of coexisting alive and dead states. Yet when we look in the box we expect to see only one of the states, not a mixture of them.]
[Figure caption: The experiment must be shielded from the environment to prevent quantum decoherence from inducing wavefunction collapse.]

The above text is a translation of two paragraphs from within a much larger original article, which appeared in the German journal Naturwissenschaften ("Natural Sciences") in 1935: E. Schrödinger, "Die gegenwärtige Situation in der Quantenmechanik" ("The present situation in quantum mechanics"), Naturwissenschaften, 48, 807; 49, 823; 50, 844 (November 1935). It was intended as a discussion of the EPR article published by Einstein, Podolsky and Rosen in the same year. Apart from introducing the cat, Schrödinger also coined the term "entanglement" (German: Verschränkung) in this article.

In posing this scenario, Schrödinger asked: when does a quantum system stop existing as a mixture of states and become one or the other? (More technically: when does the actual quantum state stop being a linear combination of states, each of which resembles a different classical state, and instead begin to have a unique classical description?) If the cat survives, it remembers only being alive. But explanations of the EPR experiments that are consistent with standard microscopic quantum mechanics require that macroscopic objects, such as cats and notebooks, do not always have unique classical descriptions. The purpose of the thought experiment is to illustrate this apparent paradox: our intuition says that no observer can be in a mixture of states, yet it seems cats can be such a mixture. Are cats required to be observers, or does their existence in a single well-defined classical state require another external observer? Each alternative seemed absurd to Albert Einstein, who was impressed by the ability of the thought experiment to highlight these issues; in a letter to Schrödinger dated 1950 he wrote:

You are the only contemporary physicist, besides Laue, who sees that one cannot get around the assumption of reality—if only one is honest.
Most of them simply do not see what sort of risky game they are playing with reality—reality as something independent of what is experimentally established. Their interpretation is, however, refuted most elegantly by your system of radioactive atom + amplifier + charge of gun powder + cat in a box, in which the psi-function of the system contains both the cat alive and blown to bits. Nobody really doubts that the presence or absence of the cat is something independent of the act of observation.

But perhaps it was inevitable that Einstein would be impressed with Schrödinger's cat—Einstein had previously suggested to Schrödinger a similar paradox involving an unstable keg of gunpowder instead of a cat. Schrödinger had taken the next step of applying quantum mechanics to an entity that may or may not be conscious, to further illustrate the putative incompleteness of quantum mechanics.

Copenhagen interpretation

In the Copenhagen interpretation, a system stops being a superposition of states and becomes either one or the other when an observation takes place. This experiment makes apparent the fact that the nature of measurement, or observation, is not well defined in this interpretation.
Some interpret the experiment to mean that while the box is closed, the system simultaneously exists in a superposition of the states "decayed nucleus/dead cat" and "undecayed nucleus/living cat", and that only when the box is opened and an observation performed does the wave function collapse into one of the two states. More intuitively, some feel that the "observation" is taken when a particle from the nucleus hits the detector. Recent developments in quantum physics show that measurements of quantum phenomena taken by non-conscious "observers" (such as a wiretap) most definitely alter the quantum state of the phenomena from the point of view of conscious observers reading the wiretap, lending support to this idea.

A precise rule is that probability enters at the point where the classical approximation is first used to describe the system - almost by tautology, since the classical approximation is just a simplification of the quantum mathematics, and so must introduce imprecision into the measurement, which can be viewed as probability. Note, however, that this applies only to descriptions of the system, not to the system itself; taken literally, the quantum description would have the cat simultaneously alive and dead.

Under the Copenhagen interpretation, the amount of uncertainty for a complex quantum system is predicted by quantum decoherence. Particles which exchange photons (and possibly other atomic or subatomic particles) become entangled with each other from the point of view of an observer, meaning that these particles can only be described accurately with reference to each other, which decreases the total uncertainty of those particles from the point of view of that observer. By the time one has reached "macroscopic" levels - such as a cat, which is made up of an almost inexpressibly large number of atomic particles - so many particles have become entangled with each other as to decrease the uncertainty to almost zero.
(Quantum effects in huge collections of particles are seen only in very rare, and often man-made, situations such as a Bose-Einstein condensate.) Thus, at least from the point of view of the observer, any improbability regarding the cat as a system of quantum particles has disappeared due to the massive amount of entanglement between all of the particles that make it up, meaning that the cat does not truly exist as both alive and dead at the same time, at least from the point of view of any observer viewing the cat.

Even before observation was shown experimentally to be fundamentally distinct from consciousness, the experiment always contained at least two "observers": the physicist and the cat. Even had the physicist been unaware of the cat's state in the hypothetical experiment, one would have had to posit that the cat, at least, would have been quite sure of its status (at least, as long as the gas had not yet ended its ability to "observe"). However, since "observation" has been shown by experiment to have nothing to do with consciousness - or at the very least, any traditional definition of consciousness - most conjecture along these lines probably falls under the "interesting but physically irrelevant" category.

Everett many-worlds interpretation and consistent histories

In the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process, both states persist but are decoherent from each other. When an observer opens the box, he becomes entangled with the cat, so observer-states corresponding to the cat being alive and dead are formed, and each can have no interaction with the other. The same mechanism of quantum decoherence is also important for the interpretation in terms of consistent histories: only the "dead cat" or the "alive cat" can be part of a consistent history in this interpretation.
In other words, when the box is opened, the universe (or at least the part of the universe containing the observer and cat) is split into two separate universes: one containing an observer looking at a box with a dead cat, the other containing an observer looking at a box with a live cat.

Ensemble interpretation

In the ensemble interpretation, the Schrödinger's cat paradox is a trivial non-issue. In this interpretation, the state vector does not apply to individual cat experiments; it applies only to the statistics of many similarly prepared cat experiments. Indeed, the cat paradox was specifically constructed by Schrödinger to illustrate that the Copenhagen interpretation suffered from fundamental problems. It was not intended as an example that quantum mechanics actually predicts that a cat could be alive and dead simultaneously, though some have made this further assumption.

Practical applications

This has some practical use in quantum computing and quantum cryptography. It is possible to send light that is in a superposition of states down a fiber optic cable. Placing a wiretap in the middle of the cable, intercepting and retransmitting the transmission, will collapse the wavefunction (in the Copenhagen interpretation, "perform an observation") and cause the light to fall into one state or another. By performing statistical tests on the light received at the other end of the cable, one can tell whether it remains in the superposition of states or has already been observed and retransmitted. In principle, this allows the development of communication systems that cannot be tapped without the tap being noticed at the other end. This experiment can be argued to illustrate that "observation" in the Copenhagen interpretation has nothing to do with consciousness, in that a perfectly unconscious wiretap will cause the statistics at the end of the wire to be different. Yet one still cannot factor out the observation of the wiretap as having an effect upon the outcome.
In quantum computing, the phrase "cat state" often refers to the special entanglement of qubits in which the qubits are in an equal superposition of all being 0 and all being 1, i.e. (|00...0⟩ + |11...1⟩)/√2.

A variant of the Schrödinger's cat experiment known as the quantum suicide machine has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's cat experiment from the point of view of the cat, and argues that this may be able to distinguish between the Copenhagen interpretation and many-worlds. Another variant on the experiment is Wigner's friend.

Physicist Stephen Hawking once exclaimed, "When I hear of Schrödinger's cat, I reach for my gun," paraphrasing the famous phrase attributed to German playwright and Nazi "Poet Laureate" Hanns Johst: "Wenn ich 'Kultur' höre, entsichere ich meine Browning!" ("When I hear the word 'culture', I release the safety on my Browning!") In fact, Hawking and many other physicists are of the opinion that the "Copenhagen school" interpretation of quantum mechanics unduly stresses the role of the observer. Still, a final consensus on this point among physicists seems to be out of reach.
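To make the "cat state" concrete, here is a small NumPy sketch (mine, not part of the original article) that prepares the three-qubit cat state (|000⟩ + |111⟩)/√2 with the standard Hadamard-plus-CNOT construction and checks that only the all-zeros and all-ones outcomes have nonzero probability:

```python
import numpy as np

# Single-qubit Hadamard gate and 2x2 identity.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def cnot(control, target, n):
    # Build an n-qubit CNOT matrix; qubit 0 is the leftmost tensor factor.
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[out, basis] = 1
    return U

state = np.zeros(8)
state[0] = 1.0                                  # start in |000>
state = np.kron(np.kron(H, I), I) @ state       # H on qubit 0
state = cnot(0, 1, 3) @ state                   # entangle qubit 1
state = cnot(0, 2, 3) @ state                   # entangle qubit 2

probs = np.abs(state) ** 2
# Only |000> (index 0) and |111> (index 7) occur, each with probability 1/2.
```

Measuring such a state in the computational basis never yields a mixed outcome like 010: the qubits are perfectly correlated, which is exactly the "all alive or all dead" structure the cat metaphor describes.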
Quantum Mechanics: Time Independent Schrodinger Wave Equation

By Dragica Vasileska (Arizona State University) and Gerhard Klimeck (Purdue University)

In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics. In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only atomic and subatomic systems, such as electrons and atoms, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who discovered it in 1926.

Schrödinger's equation can be mathematically transformed into Heisenberg's matrix formalism and into the Feynman path integral. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is less severe in Heisenberg's formulation and completely absent in the path integral.

For a stationary potential, the time-dependent Schrödinger equation reduces to the time-independent Schrödinger wave equation (TISWE). The TISWE can be solved for two types of problems: (1) open systems and (2) bound states. Reading material regarding the treatment of open systems and the bound-state problem is provided in the links below. Also provided are links to the PCPBS Lab (piece-wise constant potential barrier system) and the BSP Lab (bound states problem). These two simulation labs, in addition to supplemental reading material, also contain homework exercises.
• Reading Material: TISWE
• Piece-Wise-Constant Potential Barrier Lab
• Bound State Problem Lab

Cite this work

Researchers should cite this work as follows:
• www.eas.asu.edu/~vasilesk
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Time Independent Schrodinger Wave Equation," https://nanohub.org/resources/4937.

In This Series
1. Bound States Calculation Lab
2. Piece-Wise Constant Potential Barriers Tool
3. Reading Material: Time Independent Schrodinger Wave Equation (TISWE)
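As a minimal illustration of the bound-state side of the TISWE (this sketch is my own and is not part of the nanoHUB labs), one can discretize the equation for an infinite square well with finite differences and diagonalize the resulting tridiagonal Hamiltonian. In units ħ = m = 1, the lowest eigenvalues approach the analytic values E_n = n²π²/2 for a well of unit width:

```python
import numpy as np

# Finite-difference sketch of the time-independent Schrodinger equation
# -1/2 psi''(x) = E psi(x) on (0, L), with psi(0) = psi(L) = 0:
# the infinite square well (hbar = m = 1, L = 1).
L, N = 1.0, 500
dx = L / (N + 1)

# Tridiagonal Hamiltonian from the second-difference approximation.
H = (np.diag(np.full(N, 1.0)) +
     np.diag(np.full(N - 1, -0.5), 1) +
     np.diag(np.full(N - 1, -0.5), -1)) / dx**2

E = np.linalg.eigvalsh(H)[:3]                       # three lowest levels
E_exact = np.array([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])
# E approaches E_exact as N grows (second-order accurate in dx).
```

The same discretization handles an arbitrary bound-state potential V(x) by adding V evaluated at the grid points to the diagonal, which is essentially what a bound-states solver does internally.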
Chemistry 441 » Fall » Full Semester 4 Credits

Physical Chemistry I

Instructor(s): James M. Farrar
Prerequisites: Physics (mechanics and electricity & magnetism), and math (calculus through differential equations strongly recommended)
Crosslisting: CHM 251

Course Summary: This course is an introduction to the quantum theory of matter, with particular applications to problems of chemical interest. Our discussion of the subject of quantum chemistry will be based on the Schrödinger equation, the wave equation for matter waves. We will discuss the solutions to the Schrödinger equation for a number of important model systems, including piecewise constant potentials, the simple harmonic oscillator, the rigid rotor, and the Coulomb potential. We will apply these results to chemical bonding and atomic and molecular structure. There are weekly problem sets. Students also participate in workshops each week. Chemistry 441 is for graduate students who have not had previous coursework in quantum chemistry. Chemistry 441 students will have additional homework assignments.

Course Topics:
1. Introduction, Planck distribution, necessity for quantum hypothesis.
2. Photoelectric effect, heat capacity of solids, line spectra of atoms, Bohr theory of the atom.
3. de Broglie waves, Davisson-Germer experiment, Heisenberg Uncertainty Principle, two-slit diffraction experiment and wave-particle duality.
4. Mathematics of waves, wave equations, separation of variables, solving linear second-order differential equations with constant coefficients.
5. Harmonic oscillator differential equation, clamped string: spatial, temporal solutions, normal modes.
6. Standing waves as superposition of travelling waves, Schrödinger equation for free particle, particle in 1-D infinite square well.
7. Quantization in the 1-D infinite square well, spectra of conjugated molecules, Born interpretation of wavefunctions, linear operators.
8. Operators and eigenvalues, Schrödinger equation as energy eigenvalue problem, expectation values, variance, Δx·Δp_x for particle in a 1-D square well.
9. Postulates of quantum mechanics: maximum information in wavefunction, expectation values, observation of eigenvalues, zero variance of eigenfunctions, operators of quantum mechanics.
10. Wavefunction not an eigenstate of 1-D square well, time-independent Schrödinger equation, stationary states, superposition states.
11. Hermitian operators: eigenvalues real; eigenfunctions orthogonal, complete. Projections of wavefunction onto basis functions.
12. Completeness, orthogonal expansions, Fourier series: resolution into components; probability of measuring an eigenvalue in terms of Fourier coefficients.
13. Commuting observables, simultaneous eigenfunctions, Schwarz inequality and Uncertainty Principle.
14. Relationships with commutators, time dependence of expectation values, Ehrenfest's Theorem, classical harmonic oscillator.
15. Relative coordinates, Taylor's series expansion of real potentials. Schrödinger equation for harmonic oscillator in reduced coordinates.
16. Asymptotic form for harmonic oscillator wavefunctions. Power series solution to Hermite differential equation.
17. Two-term recursion relations and termination of power series, quantized energy levels.
18. Hermite polynomials, parity, comparison with 1-D particle-in-a-box wavefunctions.
19. Classically forbidden motion, 3-D systems, separability of Hamiltonian, wavefunction, energy.
20. Spherical polar coordinates, rigid rotor, molecular bond lengths.
21. Legendre polynomials, associated Legendre functions, angular momentum commutation relations, eigenfunctions of z-component of angular momentum.
22. Physical significance of the m quantum number. Vector model, space quantization, introduction to the hydrogen atom.
23. Radial equation for the hydrogen atom, Laguerre and associated Laguerre polynomials, radial wavefunctions.
24. Radial functions: functional forms and graphs. Angular functions for p- and d-orbitals. Hydrogen atom in a magnetic field.
25. Approximate methods: first-order perturbation theory, corrections to the energy. Introduction to the Variation Theorem.
26. Proof of the Variation Theorem; Gaussian approximation to the hydrogen atom ground state.
27. Linear variation method: secular determinant and secular equation.
28. Atoms: atomic units. Perturbation approach to the helium atom. Variation theorem and effective nuclear charge. Slater-type orbitals. Self-Consistent Field Method.
29. Hartree and Hartree-Fock methods. Electron correlation. Electron spin, Pauli Exclusion Principle, Slater determinant applied to the helium atom.
30. Slater determinants for N-electron systems. Coulomb and exchange integrals, Koopmans' theorem. Term symbols.
31. Examples of term symbols. Spin-orbit coupling, atomic spectroscopy. Born-Oppenheimer approximation.
32. Heitler-London (Valence Bond) method. Chemical bond arising from the exchange integral.
33. Electron spin and the hydrogen molecule. Introduction to the LCAO-MO method.
34. MO theory for second-row homonuclear diatomics; molecular term symbols.
35. Semiclassical radiation theory: time-dependent perturbation theory, transition dipole.
36. The electromagnetic spectrum: pure vibrational and rotational spectroscopy. Boltzmann distribution for initial state population.
37. Vibrational-rotational spectroscopy: centrifugal distortion and vibration-rotation interaction.
38. Polyatomic vibrations: degrees of freedom and normal coordinates.
39. Electronic transitions. Franck-Condon Principle.

Required Text: Donald A. McQuarrie, Quantum Chemistry, Second Edition, University Science Books, 2008. ISBN 978-1-891389-50-4.
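For reference, the quantization result behind topics 6-7 can be summarized in a few lines (standard textbook material, not taken from the course page):

```latex
% Particle in a 1-D infinite square well of width L: inside the well V = 0,
% so the time-independent Schrodinger equation and boundary conditions are
\[
  -\frac{\hbar^{2}}{2m}\,\frac{d^{2}\psi}{dx^{2}} = E\,\psi,
  \qquad \psi(0) = \psi(L) = 0,
\]
% which are satisfied only by the standing waves
\[
  \psi_{n}(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right),
  \qquad
  E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}},
  \qquad n = 1, 2, 3, \ldots
\]
```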
Format: Hardcover. Language: English. Downloadable formats: PDF / Kindle / ePub (9.71 MB). Pages: 296. Publisher: World Scientific Publishing Company; 1 edition (May 9, 2011). ISBN: 9814295515.

Do you understand why they seem paradoxical? Can you find a good physical theory that might explain them? As quantum physics has proven, it is you who are responsible for whatever outcomes you are experiencing in your life. A convex mirror acts like a negative lens, always producing a virtual image. But if these entities are not somehow identified with the wave function itself — and if talk of them is not merely shorthand for elaborate statements about measurements — then where are they to be found in the quantum description?

In physics, a wave is a traveling disturbance that moves through space and matter, transferring energy from one place to another.

Let's examine the four-vector u = (u_g, c)/(1 − β²)^{1/2}, where β = u_g/c, u_g being the velocity of some object. (a) Show that u is parallel to the world line of the object. (b) Show that u · u = −c². (c) If u_g is the group velocity of a relativistic wave packet, show that k = (μ/c²)u, where k is the central wave four-vector of the wave packet.

In everyday situations, relativity theory and quantum physics both predict the normal behavior that experience has taught us to expect. If you are working with a plane mirror, you only need the trigonometric relationships for right triangles to solve your problem.
For circular lenses or mirrors, the key equation is 1/o + 1/i = 1/f. In some cases you might also need the definition of magnification, m = h_i/h_o = −i/o, and for mirrors the focal length is related to the radius of curvature by f = R/2.

John Bell became its principal proponent during the sixties, seventies and eighties. In Bohmian mechanics the wave function, obeying Schrödinger's equation, does not provide a complete description or representation of a quantum system.

However, his brother, exerting his influence, got him transferred to the radiotelegraphy section situated at the bottom of the Eiffel Tower, on which a radio transmitter had been installed.

Revise physics or do homework for your physics course. Are you completely stuck on a problem for your physics homework? The forum is moderated by a qualified physicist who is happy to help and advise on questions in physics or mathematics. Flash simulations and interactive Flash experiments extensively demonstrate key concepts such as the charge and discharge of capacitors or the conservation of momentum during collisions.

Max Born knew that matrices have this non-commuting property. Heisenberg showed that in general the "quantum conditions" are that qp − pq = iħ, which was later refined into the uncertainty principle, Δp·Δq ≥ ħ/2.

Consider a wave moving: if the vibration of the particles of the medium is in the direction of wave propagation, the wave is longitudinal. A longitudinal wave proceeds in the form of compressions and rarefactions, as in a stretched rubber band.
For a longitudinal wave, at places of compression the pressure and density tend to be maximum, while at places where rarefaction takes place the pressure and density are minimum.

A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured. More explicitly, the superposition principle ψ = Σ a_n ψ_n of quantum physics dictates that for a wave function ψ, a measurement will result in one of the m possible eigenvalues f_n, n = 1, 2, ..., m, of the operator F̂, where the ψ_n are the eigenfunctions of F̂.

The main current proponent of scalar-wave pseudophysics is zero-point-energy advocate Thomas E. Bearden, who has concocted an entire pseudoscientific "scalar field theory" unrelated to anything in actual physics of that name. It starts from the claim that Maxwell's equations were originally written as quaternions; Bearden holds that the (mathematical) transformation to vectors lost important information. [1] Bearden says that scalar waves differ from conventional electromagnetic transverse waves by having two oscillations anti-parallel with each other, each originating from opposite charge sources, thereby lacking any net directionality.

Therefore, sunglasses with a suitably oriented piece of Polaroid will filter out the glare from such a surface, making it easier to drive or to see fish under water. Microwaves can also be polarised, using a grille of metal rods separated by approximately the same distance as the wavelength of the waves.
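The measurement rule sketched above (expand ψ in the eigenbasis of the measured operator; the squared expansion coefficients give the outcome probabilities) can be illustrated numerically. This is my own sketch, with a random real symmetric matrix standing in for the operator F̂:

```python
import numpy as np

# Expand a normalized state psi in the eigenbasis of a Hermitian operator F.
# The probability of measuring eigenvalue f_n is |a_n|^2, and the
# probabilities sum to 1 by completeness of the eigenbasis.
rng = np.random.default_rng(0)
F = rng.normal(size=(4, 4))
F = (F + F.T) / 2                       # make F Hermitian (real symmetric)
eigvals, eigvecs = np.linalg.eigh(F)    # columns of eigvecs are orthonormal

psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)              # normalize the state

a = eigvecs.T @ psi                     # expansion coefficients a_n
probs = np.abs(a) ** 2
# probs sums to 1, and sum(probs * eigvals) equals <psi|F|psi>.
```

The last identity, Σ |a_n|² f_n = ⟨ψ|F̂|ψ⟩, is exactly the statement that the expectation value is the probability-weighted average of the possible measurement outcomes.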
The principle of superposition states that when two waves are travelling in the same region, the total displacement at any point is equal to the vector sum of the individual displacements at that point.

This seemed so strange that Planck regarded quantization as nothing more than a mathematical trick. According to Helge Kragh in his 2000 article in Physics World magazine, "Max Planck, the Reluctant Revolutionary," "If a revolution occurred in physics in December 1900, nobody seemed to notice it."

So until we look inside, according to quantum theory, the cat is both dead and alive.

In 1935, Einstein published his "EPR" paper loudly proclaiming that quantum mechanics was incomplete due to the existence of "hidden" quantum variables. (Einstein, 1935) Einstein and others such as Bohm and Bell tried to describe the hidden variables, but such a task was difficult, if not impossible. (Bohm, 1952) How does one describe a quantum variable mathematically, when the very nature of the variable is unknown?

The blue and red displacements add up algebraically. Hence a red displacement above the line, on top of a blue displacement (of equal magnitude) which is below the line, will cancel out. This produces a single point on the horizontal axis. A red displacement above the line, on top of a blue displacement also above the line, will produce a displacement above the line equal to their sum.

Alternatively, if the vector is expressed in terms of length and direction, the magnitude of the vector is divided by the denominator and the direction is unchanged. Unit vectors can be used to define a Cartesian coordinate system.
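The vector-sum statement of superposition can be checked numerically; the wave parameters below are arbitrary illustrative values:

```python
import math

# Superposition: the total displacement at a point is the algebraic sum
# of the individual displacements there.

def wave(A, k, omega, x, t, phase=0.0):
    """Displacement of a sinusoidal travelling wave at position x, time t."""
    return A * math.sin(k * x - omega * t + phase)

x, t = 0.3, 0.0
k, omega = 2.0, 5.0

y1 = wave(1.0, k, omega, x, t)
y2 = wave(1.0, k, omega, x, t, phase=math.pi)  # exactly out of phase
total = y1 + y2   # destructive interference: the displacements cancel

y3 = wave(1.0, k, omega, x, t)                 # in phase with y1
constructive = y1 + y3                         # twice the single amplitude
```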
The solutions to Schrödinger's equation are known as wave functions. The complete knowledge of a system is described by its wave function, and from the wave function one can calculate the possible values of every observable quantity.

The alternator is the part common to all power plants.

In the early days of quantum physics, in an attempt to explain the wavelike behavior of quantum particles, the French physicist Louis de Broglie proposed what he called a "pilot wave" theory. According to de Broglie, moving particles — such as electrons, or the photons in a beam of light — are borne along on waves of some type, like driftwood on a tide.
Physicists' inability to detect de Broglie's posited waves led them, for the most part, to abandon pilot-wave theory. But physicists have since seen this wave-particle duality for protons, atoms, and increasingly large molecules such as buckyballs.

Of course, Deutsch would say that the mere similarity is somehow sufficient to connect them. Barbour's answer seems to be that time is just one of those illusions that appears here and there as a tiny subset of a much larger uninteresting chaos. Our existence, our illusion of rational existence, is only permitted if there is an apparently orderly timeline, so we see time (a restatement of the Weak Anthropic Principle).

The reference line for θ is the line straight down the middle, from the source to the screen, and θ identifies the angular position of each maximum or minimum from that center line. In one dimension, you typically want to relate the frequency of the wave to other wave properties. To do this, you simply need the wave equation: v = fλ. In two dimensions, you will generally relate the location of maxima and minima to wave properties and the geometry of the experimental apparatus.

Whenever you are asked to relate what you see (the image) to an object viewed in a mirror or through a lens, you have a geometric optics problem.

One particular position will be recorded by the measurement: the one corresponding to the eigenfunction chosen by the particle. If a further position measurement is made shortly afterwards, the wavefunction will still be the same as when the first measurement was made (because nothing has happened to change it), and so the same position will be recorded.

From this emerged the idea that light is an electromagnetic wave.
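The two relations mentioned above, v = fλ in one dimension and d·sinθ = mλ for interference maxima, can be sketched with illustrative numbers (the speed of sound and the slit geometry here are assumed, not taken from the text):

```python
import math

# One dimension: relate frequency to wavelength via v = f * lambda.
v = 343.0            # speed of sound in air, m/s (assumed value)
f = 686.0            # Hz
wavelength = v / f   # 0.5 m

# Two dimensions (double slit): maxima occur where d * sin(theta) = m * lambda,
# with theta measured from the center line running from source to screen.
d = 2.0e-6           # slit separation, m (illustrative)
lam = 500e-9         # light wavelength, m (illustrative)
m_order = 1
theta = math.asin(m_order * lam / d)   # angular position of the first maximum
```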
Electromagnetic waves can have different frequencies (and thus wavelengths), giving rise to various types of radiation such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays.

The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions, which can be used to describe the probability density of a particle.

This discretization brought in by energy quanta was a fundamental shift in thinking, inconsistent with the classical intuition of physicists at the time.
In 1979, Berry and Balazs first obtained a nonspreading Airy wave function as a solution of the quantum mechanical Schrödinger equation. Exploiting the mathematical correspondence between the paraxial equation of diffraction and the Schrödinger equation, Siviloglou et al. demonstrated accelerating optical Airy beams in 2007. Accelerating Airy beams possess the well-known ability of remaining diffraction-free while propagating along a curved parabolic trajectory in free space. Additionally, Airy beams exhibit a self-healing feature. Based on these unique features, Airy beams may find promising applications in numerous challenging fields, such as optical routing, particle manipulation, light bullets, microscopy, and plasma physics. Although most studies of accelerating Airy beams are associated with the optical domain, it is of great interest to demonstrate such Airy beams in the terahertz (THz) domain.

Prof. Jinsong Liu's group from Wuhan National Lab for Optoelectronics (WNLO) demonstrates the generation of accelerating THz Airy beams with a 0.3-THz continuous wave. Two diffractive elements are designed and 3D-printed to form the generation system, which can not only imprint the desired complex phase pattern but also perform the required Fourier transform (FT). They both numerically and experimentally demonstrate the propagation dynamics of the accelerating THz Airy beam and investigate its self-healing property during propagation in free space. Their observations are in good agreement with the numerical simulations. Such accelerating THz Airy beams could enable novel THz imaging systems and robust THz communication links.

This research is supported by the National Natural Science Foundation of China under grant No. 11574105, 61475054, 61405063, 61177095, the Fundamental Research Funds for the Central Universities under grant No.
2014ZZGH021, 2014QN023, and the Wuhan applied basic research project under No. 20140101010009.

Figure. Experimental (left) and simulation (right) results of the finite-energy accelerating THz Airy beam. The top row shows the xz normalized intensity profiles of the Airy beam. (b)-(e) The xy normalized intensity profiles of the experimentally generated Airy beam, measured at the dashed-line positions of the upper image. (g)-(j) The corresponding simulation results at the same detection planes.
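As a rough illustration of the beam profile discussed in the article, the finite-energy Airy form Ai(s)·exp(a·s), the standard truncated profile used for experimentally realizable Airy beams, can be evaluated with SciPy; the truncation parameter a and the grid here are illustrative choices, not the paper's values:

```python
import numpy as np
from scipy.special import airy

# Finite-energy Airy envelope: Ai(s) * exp(a * s), with a small positive
# truncation parameter a so the total energy is finite (a is assumed here).
a = 0.1
s = np.linspace(-15.0, 5.0, 2001)

Ai = airy(s)[0]                  # airy() returns (Ai, Ai', Bi, Bi')
envelope = Ai * np.exp(a * s)
intensity = np.abs(envelope) ** 2

# The main lobe sits just below s = 0, followed by decaying side lobes.
peak = float(s[np.argmax(intensity)])
```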
Atomic Physics

The study of the structure of the atom, its dynamical properties, including energy states, and its interactions with particles and fields. These are almost completely determined by the laws of quantum mechanics, with very refined corrections required by quantum electrodynamics. Despite the enormous complexity of most atomic systems, in which each electron interacts with both the nucleus and all the other orbiting electrons, the wavelike nature of particles, combined with the Pauli exclusion principle, results in an amazingly orderly array of atomic properties. These are systematized by the Mendeleev periodic table. In addition to their classification by chemical activity and atomic weight, the various elements of this table are characterized by a wide variety of observable properties. These include electron affinity, polarizability, angular momentum, multipole electric moments, and magnetism. See Quantum electrodynamics, Quantum mechanics

Atomic Physics

The branch of physics in which the structure and states of atoms are studied. Atomic physics arose at the turn of the 20th century. In the second decade of the 20th century it was established that the atom consisted of a nucleus and electrons bound together by electrical forces.
In the first phase of its development, atomic physics also included problems associated with the structure of the atomic nucleus. In the 1930's it was shown that the interactions which occur in the atomic nucleus were of a nature different from those which occur in the outer shell of the atom, and in the 1940's nuclear physics branched off into an independent scientific discipline. In the 1950's the physics of elementary particles—high-energy physics—also developed as an independent branch.

Early history: study of atoms in the 17th to 19th centuries. Hypotheses concerning the existence of atoms as indivisible particles arose even in antiquity; ideas of atomism were first stated by the ancient Greek thinkers Democritus and Epicurus. In the 17th century, the ideas were revived by the French philosopher P. Gassendi and the English chemist R. Boyle.

The concepts of atoms that prevailed in the 17th and 18th centuries were poorly defined. Atoms were considered absolutely indivisible and immutable solid particles whose different types are distinguished by size and form. Combinations of atoms in one or another order produce various substances, and the motions of atoms determine all phenomena that take place in matter. I. Newton, M. V. Lomonosov, and certain other scientists supposed that atoms could combine into more complex particles—"corpuscles." However, specific chemical and physical properties were not attributed to atoms. The study of atoms still had an abstract, natural-philosophical character.

In the late 18th and early 19th centuries, as a result of the rapid development of chemistry, a basis for the quantitative treatment of the study of atoms was created. The English scientist J. Dalton was the first (1803) to consider the atom as the smallest particle of a chemical element, distinguished from atoms of other elements by its mass. According to Dalton, the basic characteristic of the atom is its atomic mass.
Chemical compounds are a collection of "combined atoms" which contain a specific (characteristic for a given complex substance) number of atoms of each element. All chemical reactions are mere regroupings of atoms into new compound particles. Starting from these assumptions, Dalton formulated his law of multiple proportions. The investigations of the Italian scientists A. Avogadro (1811) and, in particular, S. Cannizzaro (1858) drew a sharp line between the atom and the molecule.

In the 19th century the optical, as well as the chemical, properties of atoms were studied. It was established that each element had a characteristic optical spectrum; spectral analysis was discovered by the German physicists G. Kirchhoff and R. Bunsen in 1860. In this manner the atom appeared as a qualitatively unique particle of matter, characterized by strictly defined physical and chemical properties. But the properties of the atom were considered eternal and inexplicable. It was assumed that the number of types of atoms (chemical elements) was random and that there was no connection between them.

However, it was gradually ascertained that there were groups of elements which had the same chemical properties, the same maximum valence, and comparable laws of variation (in the transition from one group to another) of physical properties—that is, melting point, compressibility, and so on. In 1869, D. I. Mendeleev discovered the periodic system of elements. He showed that the chemical and physical properties of the elements were periodically repeated with an increase in atomic mass (see Figures 1 and 2).

Figure 1. Periodic dependence of atomic volume on atomic number

The periodic system demonstrated the existence of relationships between the different types of atoms. This suggested the conclusion that the atom has a complex structure that varies with atomic mass. The problem of the discovery of atomic structure became the most important problem in chemistry and physics.
Origin of atomic physics. The most important events in science from which the beginning of atomic physics followed were the discoveries of the electron and of radioactivity. In the investigation of the flow of electric current through highly rarefied gases, rays were discovered which were emitted by the cathode of the discharge tube (cathode rays) and which had the property of being deflected in transverse electric and magnetic fields. It was ascertained that these rays consist of rapidly moving, negatively charged particles called electrons. In 1897 the English physicist J. J. Thomson measured the ratio of the charge e of these particles to their mass m. It was also discovered that metals, upon intense heating or illumination by light of short wavelength, emit electrons. From this it was concluded that electrons are part of all atoms. Hence, it followed that neutral atoms must also contain positively charged particles. Positively charged atoms (ions) were in fact discovered in the investigation of electrical discharges in rarefied gases.

The representation of the atom as a system of charged particles explained, according to the theory of the Dutch physicist H. Lorentz, the very possibility of radiation of light by the atom: electromagnetic radiation arises with the oscillation of intra-atomic charges. This was verified in the study of the influence of a magnetic field on atomic spectra. It was explained that the ratio of the charge of intra-atomic electrons to their mass, e/m, found by Lorentz in his theory of the Zeeman effect, is exactly equal to the value of e/m for free electrons that was obtained in Thomson's experiments. The theory of electrons and its experimental verification yielded indisputable proof of the complexity of the atom.

The representation of the indivisibility and immutability of the atom was finally disproved by the work of the French scientists M. Skłodowska Curie and P. Curie.
As a result of the investigation of radioactivity it was established by F. Soddy that atoms undergo transmutations of two types. Having emitted an alpha-particle (an ion of helium with positive charge 2e), the atom of a radioactive chemical element is transmuted into an atom of another element, located in the periodic system two positions to the left—that is, a polonium atom becomes a lead atom. Having emitted a beta-particle (electron) with negative charge −e, an atom of a radioactive chemical element is transmuted into an atom of the element located one position to the right—that is, a bismuth atom becomes polonium.

Figure 2. Periodic dependence on atomic number of (1) the quantity 1/T · 10⁴, where T is the melting point; (2) the coefficient of linear expansion α · 10⁵; (3) the compressibility factor K · 10⁶

The mass of an atom formed as a result of such transmutations is sometimes found to be different from the atomic weight of the element into whose position it transferred. This indicated the existence of varieties of atoms of the same chemical element with different masses; these varieties were given the name isotopes (that is, atoms that occupied the same place in Mendeleev's table). Thus, the concept of the absolute identity of all atoms of a given chemical element proved to be incorrect.

The results of the investigation of the properties of the electron and of radioactivity permitted the construction of detailed models of the atom. In the model proposed by Thomson in 1903, the atom was represented in the form of a positively charged sphere in which small (in comparison with the atom) negative electrons were distributed (see Figure 3).

Figure 3. Thomson's model of the atom. The points denote electrons embedded in a positively charged sphere.

The electrons were held in the atom because the forces of attraction exerted on them by the distributed positive charge were balanced by their forces of mutual repulsion.
The Thomson model gave a generally recognized explanation of the possibility of emission, scattering, and absorption of light by the atom. In the displacement of electrons from positions of equilibrium an "elastic" force arose, striving to restore equilibrium; this force is proportional to the electron's displacement from the equilibrium position and, consequently, to the dipole moment of the atom. Under the influence of the electric forces of an incident electromagnetic wave, the electrons in the atom oscillate at the same frequency as the electric field strength in the light wave; the oscillating electrons, in turn, emit light at the same frequency. The scattering of electromagnetic waves by the atoms of matter occurs in this manner. From the degree of weakening of the light beam in a mass of matter, it is possible to determine the total number of scattering electrons, and, knowing the number of atoms per unit volume, it is possible to find the number of electrons in each atom.

Formulation of the Rutherford planetary model of the atom. Thomson's model of the atom proved to be unsatisfactory. On the basis of the model it was not possible to explain the completely unexpected result of the experiments of the English physicist E. Rutherford and his co-workers H. Geiger and E. Marsden on the scattering of alpha-particles by atoms. In these experiments, fast alpha-particles were used for the direct probing of atoms. Passing through matter, the alpha-particles collide with atoms. In each collision the alpha-particle, traveling through the electrical field of the atom, changes its direction of motion—that is, it undergoes scattering. In the overwhelming majority of scattering events, the deflections of alpha-particles (scattering angles) were very small. Therefore, upon passage of a beam of alpha-particles through a thin layer of matter, only a small broadening of the beam took place.
However, a very small fraction of the alpha-particles was deflected through angles greater than 90°. This result could not be explained on the basis of the Thomson model, because the electrical field in a "solid" atom would not be sufficiently strong to deflect a fast and massive alpha-particle through a large angle. In order to explain the results of the experiments on the scattering of alpha-particles, Rutherford proposed a model of the atom that was new in principle and that resembled the structure of the solar system; it came to be called the planetary model.

It had the following form. In the center of the atom is a positively charged nucleus whose dimensions (~10⁻¹² cm) are very small in comparison with the dimensions of the atom (~10⁻⁸ cm) but whose mass is almost equal to the mass of the atom. Around the nucleus move electrons, similar to the movement of the planets around the sun; the number of electrons in an uncharged (neutral) atom is such that their total negative charge compensates (neutralizes) the positive charge of the nucleus. The electrons must move around the nucleus, or they would fall into it under the influence of the attractive forces. The difference between the atom and the planetary system consists in the fact that gravitational forces operate in the latter while electrical (Coulomb) forces operate in the atom. Near the nucleus, which can be considered a point positive charge, there exists a very strong electrical field. Therefore, in passing close to the nucleus, positively charged alpha-particles (helium nuclei) are subjected to a strong deflection. Subsequently it was explained by H. Moseley that the charge of the nucleus increases from one chemical element to the next by one elementary unit of charge, equal to the charge of the electron but with a positive sign. Numerically, the charge of the atomic nucleus, expressed in units of the elementary charge e, is equal to the ordinal number of the corresponding element in the periodic system.
In order to check the planetary model, Rutherford and his co-worker C. G. Darwin calculated the angular distribution of alpha-particles scattered by a point nucleus—the center of the Coulomb forces. The result obtained was checked by experimental means—the measurement of the number of alpha-particles scattered through various angles. The results of the experiment agreed exactly with the theoretical calculations, brilliantly confirming the Rutherford planetary model of the atom. However, the planetary model of the atom encountered fundamental difficulties. According to classical electrodynamics, a charged particle which is moving under acceleration continuously radiates electromagnetic energy. Therefore, electrons moving around the nucleus—that is, under acceleration—must be continuously losing energy by radiation. But in this case they would, in a negligibly small fraction of a second, lose all their kinetic energy and fall into the nucleus. Another difficulty, also connected with radiation, was that if it is assumed (in correspondence with classical electrodynamics) that the frequency of the light radiated by the electron is equal to the frequency of the electron’s oscillations in the atom (that is, the number of revolutions performed by it along its orbit in one second) or a multiple of it, then the radiated light, according to the degree of approach of the electron to the nucleus, must continuously change its frequency and the spectrum of the light radiated by it must be continuous. This, however, is contradicted by the experiments. The atom radiates light waves of completely fixed frequency, typical of a given chemical element, and is characterized by a spectrum which consists of individual spectral lines—a line spectrum. In the line spectra of the elements a series of regularities were experimentally established, the first of which was discovered by the Swiss scientist J. Balmer (1885) in the hydrogen spectrum. 
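The angular distribution calculated by Rutherford and Darwin is the famous 1/sin⁴(θ/2) law (the formula is standard, though not spelled out in the text); a quick numeric comparison shows why large-angle deflections are rare yet present:

```python
import math

# Rutherford scattering: the differential cross-section scales as
# 1/sin^4(theta/2), so small deflections dominate while large-angle
# events are rare but nonzero -- exactly what Geiger and Marsden observed.

def rutherford_relative(theta_deg):
    """Relative scattered intensity at angle theta (arbitrary units)."""
    half = math.radians(theta_deg) / 2.0
    return 1.0 / math.sin(half) ** 4

ratio = rutherford_relative(10.0) / rutherford_relative(90.0)
# Scattering at 10 degrees is a few thousand times likelier than at 90.
```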
The most general rule, the combination principle, was found by the Austrian scientist W. Ritz in 1908. This principle can be formulated as follows: for the atoms of each element it is possible to find a sequence of numbers T1, T2, T3, ... of so-called spectral terms such that the frequency ν of each spectral line of the given element is expressed in the form of the difference of two terms: ν = Tk − Ti. For the hydrogen atom, the term is Tn = R/n², where n is an integer that takes on the values n = 1, 2, 3, ... and R is the so-called Rydberg constant.

Thus, within the limits of the Rutherford model of the atom, the stability of the atom with respect to radiation and the line spectrum of its radiation could not be explained. On the basis of the model, neither the laws of thermal radiation nor the laws of photoelectric phenomena, which arise in the interaction of radiation with matter, could be explained. It became possible to explain these laws by proceeding from completely new—quantum—concepts, first introduced by the German physicist M. Planck in 1900. For the derivation of the law of energy distribution in the spectrum of thermal radiation (the radiation of heated bodies), Planck proposed that the atoms of matter emit electromagnetic energy (light) in the form of individual portions—quanta of light—whose energy is proportional to ν (the frequency of the radiation): E = hν, where h is a constant characteristic of quantum theory, called Planck's constant.

In 1905, A. Einstein gave a quantum explanation of photoelectric phenomena, according to which the energy of the quantum hν is used for the emission of the electron from the metal (the work function P) and for imparting to the electron a kinetic energy Tkin: hν = P + Tkin. Here Einstein introduced the concept of light quanta as a special kind of particle; these particles were subsequently called photons.
The inconsistencies of the Rutherford model could be resolved only by rejecting a number of customary concepts of classical physics. The most important step in the construction of atomic theory was made by the Danish physicist N. Bohr in 1913.

The Bohr postulates and the model of Bohr's atom. On the basis of the quantum theory of the atom, Bohr proposed two postulates characterizing those properties of the atom which were not contained in classical physics. These Bohr postulates can be formulated as follows:

(1) Existence of stationary states: The atom does not radiate and is stable only in certain stationary (unchanging in time) states, which correspond to a discrete (discontinuous) sequence of "permitted" values of the energy E1, E2, E3, E4, .... Any change in energy is associated with a quantum transition (jump) from one stationary state to another.

(2) Condition for radiation frequencies (quantum transitions with radiation): In the transition from one stationary state with energy Ei to another with energy Ek, the atom emits or absorbs light of a specific frequency ν in the form of a quantum of radiation (photon) hν, according to the relation hν = Ei − Ek. In emission the atom passes from a state with higher energy Ei to a state with lower energy Ek; in absorption, on the other hand, it passes from a state with lower energy Ek to a state with higher energy Ei.

The Bohr postulates permit immediate understanding of the physical meaning of the Ritz combination principle (see above); comparison of the relations hν = Ei − Ek and ν = Tk − Ti indicates that the spectral terms correspond to stationary states and that the energies of the latter must be equal (to within a constant term) to Ei = −hTi, Ek = −hTk.

In the emission or absorption of light, the atom's energy changes; this change is equal to the energy of the emitted or absorbed photon—that is, the law of conservation of energy holds.
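Postulate (2) can be checked numerically for hydrogen's n = 2 → n = 1 transition, using the standard level scheme En = −13.6/n² eV (the constants below are rounded):

```python
# Bohr's frequency condition h*nu = E_i - E_k applied to hydrogen.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def level_eV(n):
    """Hydrogen energy levels, E_n = -13.6/n^2 eV."""
    return -13.6 / n**2

dE = (level_eV(2) - level_eV(1)) * eV   # photon energy for n = 2 -> 1, J
nu = dE / h                             # frequency from h*nu = E_i - E_k
lam_nm = c / nu * 1e9                   # ~121.6 nm: the Lyman-alpha line
```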
The line spectrum of the atom is a result of the discreteness of its possible energy values. For the determination of the permitted energy values of the atom—the quantization of its energy—and for the calculation of the characteristics of the corresponding stationary states, Bohr applied classical (Newtonian) mechanics. "If we wish in general to compose a visual representation of stationary states, we have no other means, at least now, than ordinary mechanics," Bohr wrote in 1913 (Tri stat'i o spektrakh i stroenii atomov, p. 22, Moscow-Petrograd, 1923).

For the simplest atom—the hydrogen atom, which consists of a nucleus with charge +e (a proton) and an electron with charge −e—Bohr considered the motion of the electron around the nucleus along circular orbits. Comparing the energy of the atom E with the spectral terms Tn = R/n² for the hydrogen atom, found with high accuracy from the frequencies of its spectral lines, Bohr obtained the possible values of the atom's energy, En = −hTn = −hR/n² (where n = 1, 2, 3, ...). These values correspond to circular orbits of radius an = a0·n², where a0 = 0.53 × 10⁻⁸ cm—the Bohr radius—is the radius of the smallest circular orbit (for n = 1). Bohr calculated the frequencies of revolution νn of the electron around the nucleus along circular orbits as a function of the electron's energy. It turned out that the frequencies of the light radiated by the atom did not coincide with the frequencies of revolution νn, as required by classical electrodynamics, but rather were proportional—according to the relation hν = Ei − Ek—to the difference of the electron's energies in two of its possible orbits.
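The quoted value a0 = 0.53 × 10⁻⁸ cm can be recovered from a0 = ħ²/(me·k·e²) with rounded standard constants, together with the an = a0·n² scaling of the permitted orbits:

```python
# Bohr radius from fundamental constants (rounded values).
hbar = 1.0546e-34   # reduced Planck constant, J*s
m_e = 9.109e-31     # electron mass, kg
k = 8.988e9         # Coulomb constant, N*m^2/C^2
e = 1.602e-19       # elementary charge, C

a0 = hbar**2 / (m_e * k * e**2)        # ~5.29e-11 m = 0.53e-8 cm
a_n = [a0 * n**2 for n in (1, 2, 3)]   # permitted orbit radii a_n = a0*n^2
```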
For the calculation of the connection between the frequency of the electron's revolution along an orbit and the radiation frequency, Bohr made the assumption that the results of the quantum and classical theories must agree for small radiation frequencies (that is, for large wavelengths; such agreement occurs for thermal radiation, the laws of which were derived by Planck). For large n, Bohr equated the transition frequency ν = (En+1 − En)/h to the frequency of revolution νn along an orbit with given n and calculated the value of the Rydberg constant R, which agreed to high accuracy with the value of R found experimentally, thus confirming Bohr's hypothesis. Bohr succeeded not only in explaining the hydrogen spectrum but also in conclusively demonstrating that certain spectral lines which had been attributed to hydrogen belonged to helium.

Bohr's hypothesis that the results of the quantum and classical theories must agree in the limiting case of small frequencies of radiation represented the original form of the so-called correspondence principle. Subsequently, Bohr successfully applied it for the calculation of spectral line intensities. As the development of modern physics showed, the correspondence principle was found to be very general.

In the Bohr theory of the atom, energy quantization—that is, the calculation of its possible values—was found to be a particular case of the general method of calculating "permitted" orbits. According to the quantum theory, the permitted orbits are only those for which the angular momentum of the electron in the atom is an integral multiple of h/2π. To each permitted orbit there corresponds a specific possible value of the atom's energy.

The basic assumptions of the quantum theory of the atom—the two Bohr postulates—were fully confirmed experimentally. Particularly graphic support was given by the experiments of the German physicists J. Franck and G. Hertz (1913–16). The essence of these experiments is as follows.
A beam of electrons whose energy could be controlled enters a vessel containing mercury vapor. Gradually increasing energy is imparted to the electrons. As the energy of the electrons is increased, the current in a galvanometer connected to the electrical circuit increases. When the energy of the electrons becomes equal to specific values (4.9, 6.7, 10.4 eV), the current decreases sharply (see Figure 4). At these moments the mercury vapor is observed to emit ultraviolet rays of a specific frequency.

The stated facts permit only one interpretation. As long as the energy of the electrons is less than 4.9 eV, the electrons do not lose energy upon collision with mercury atoms—the collisions have an elastic character. When the energy becomes equal to a specific value, namely 4.9 eV, the electrons transmit their energy to the mercury atoms, which then emit it in the form of quanta of ultraviolet light. Calculation demonstrates that the energy of these photons is exactly equal to the energy lost by the electrons. These experiments proved that the internal energy of the atom can have only specific discrete values, that the atom absorbs energy from without and emits it immediately in whole quanta, and, finally, that the frequency of the light radiated by the atom corresponds to the energy lost by the atom.

Figure 4. Dependence of current on voltage obtained in the experiments of J. Franck and G. Hertz

The subsequent development of atomic physics showed the correctness of the Bohr postulates not only for atoms but also for other microscopic systems—for molecules and for atomic nuclei. These postulates must be considered firmly established empirical quantum laws. They compose that part of the Bohr theory which was not only preserved in the further development of quantum theory but which also received its justification.
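The 4.9 eV energy loss can be converted to a photon wavelength via hν = E; the result lands on mercury's well-known ultraviolet line near 254 nm (the constants below are rounded):

```python
# Wavelength of the photon carrying away the 4.9 eV lost by the electrons.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

E = 4.9 * eV                 # energy transferred to a mercury atom
lam_nm = h * c / E * 1e9     # ~253 nm, in the ultraviolet
```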
The situation is somewhat different for the Bohr model of the atom, which is based on consideration of the motion of electrons in the atom according to the laws of classical mechanics, with the imposition of the additional conditions of quantization. Such an approach permitted the attainment of an entire series of important results but was inconsistent: the quantum postulates were added artificially to the laws of classical mechanics. A systematic theory was created in the 1920’s; this was called quantum mechanics. Its formulation was prepared by the further development of the model representations of Bohr’s theory, in the course of which its strong and weak sides were investigated. Development of the model theory of Bohr’s atom. A very important result of the Bohr theory was the explanation of the hydrogen atom spectrum. The next step in the development of the theory of atomic spectra was made by the German physicist A. Sommerfeld. Having worked out in more detail the rules of quantization, starting from a more complex picture of the motion of electrons in the atom (along elliptical orbits) and taking into account the shielding of the outer (so-called valence) electron in the field of the nucleus and inner electrons, Sommerfeld was able to give an explanation of a number of regularities of the spectra of alkali metals. The theory of Bohr’s atom shed light on the structure of the so-called characteristic spectra of X-ray radiation. The X-ray spectra of atoms, in the same way as their optical spectra, have a discrete line structure characteristic of a given element (hence the designation). By investigating the characteristic X-ray spectra of various elements, the English physicist H. Moseley discovered this rule: the square roots of the frequencies of the radiated lines increase uniformly from element to element over the whole Mendeleev periodic system in proportion to the atomic number of the element. 
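Moseley's rule — √ν growing linearly with atomic number — can be illustrated with the standard textbook approximation for the Kα line, ν = (3/4)cR(Z − 1)², where the (Z − 1) accounts for screening of the nuclear charge by the one remaining inner electron. This sketch (Python; the formula is the conventional approximate form, not quoted from the article) shows the equal spacing of the square roots from element to element:

```python
# Moseley's law: sqrt(frequency) of the K-alpha X-ray line rises by
# the same amount from element to element across the periodic table.

from math import sqrt

c = 2.998e8      # speed of light, m/s
R = 1.0974e7     # Rydberg constant, 1/m

def k_alpha_freq(Z):
    """Approximate K-alpha frequency with one-electron screening."""
    return 0.75 * c * R * (Z - 1) ** 2

roots = [sqrt(k_alpha_freq(Z)) for Z in range(20, 31)]   # Z = 20..30
steps = [b - a for a, b in zip(roots, roots[1:])]
print(steps)     # all steps equal: sqrt(frequency) is linear in Z
```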
It is interesting that the Moseley law completely confirmed the correctness of Mendeleev, who in certain cases violated the principle of arrangement in the table according to increasing atomic weight and who placed certain heavier elements before lighter ones. On the basis of Bohr’s theory, it also became possible to give an explanation of the periodicity of the properties of atoms. In a complex atom electron shells are formed which are filled sequentially, beginning from the innermost, by specific numbers of electrons. (The physical principle of the formation of the shells became clear only on the basis of the Pauli principle; see below.) The structure of the outer electron shells is repeated periodically, which determines the periodic recurrence of the chemical and many physical properties of the elements which are located in the same group of the periodic system. On the basis of the Bohr theory, the German chemist W. Kossel in 1916 explained the chemical interactions in the so-called heteropolar molecules. However, far from all of the questions of atomic theory were successfully explained on the basis of the model representations of the Bohr theory. The theory was not able to deal with many problems of the theory of spectra; it made it possible to obtain correct values for the frequencies of the spectral lines of only the hydrogen and hydrogenlike atoms. The intensities of these lines remained unexplained; for the explanation of the intensities, Bohr was forced to use the correspondence principle. In going over to the explanation of the motions of electrons in atoms more complex than the hydrogen atom, the Bohr model theory found itself up a blind alley—the helium atom, in which two electrons move around the nucleus, did not yield a theoretical interpretation based on it. The difficulties in this case were not confined to quantitative experimental discrepancies. 
The theory was also useless in the solution of a problem such as the combining of atoms into a molecule. Why were two neutral hydrogen atoms combined into a hydrogen molecule? How can the nature of valence be explained in general? What links the atoms of a solid? These questions remained unanswered. Within the limits of the Bohr model it was impossible to find an approach to their solution. The quantum mechanical theory of the atom. The limitation of the Bohr model of the atom stemmed from the limitations of the classical representations of the motion of microparticles. It became clear that for the subsequent development of atomic theory it was necessary to critically reconsider the basic concepts of the motion and interaction of microparticles. The unsatisfactory nature of the model based on classical mechanics with the addition of quantization conditions was clearly understood by Bohr himself, whose views exerted a great influence on the further development of atomic physics. The beginning of the new stage in the development of atomic physics was the idea stated by the French physicist L. de Broglie in 1924 concerning the dual nature of the motion of microobjects, in particular of the electron. This idea became the point of departure of quantum mechanics, formulated in 1925–26 in the papers of W. Heisenberg and M. Born (Germany), E. Schrödinger (Austria), and P. Dirac (England), and of the modern quantum mechanical theory of the atom developed on the basis of it. The concepts of quantum mechanics concerning the motion of the electron (of a microparticle in general) differ radically from classical concepts. According to quantum mechanics, the electron does not move along a trajectory (orbit) as a solid ball does; the motion of the electron also exhibits certain properties which are characteristic of wave propagation. 
On the one hand, the electron always behaves (for example, in collisions) like a unified whole, like a particle which has indivisible charge and mass; at the same time electrons with a specific energy and momentum propagate like a plane wave that has a specific frequency (and wavelength). The energy E of the electron as a particle is associated with a frequency v of an electron wave by the relation E = hv, and its momentum p, with a wavelength λ by the relation p = h/λ. Stable motions of the electron in an atom, as shown by Schrödinger (1926), are in certain respects analogous to standing waves, whose amplitudes differ at different points. In addition, in the atom, as in an oscillatory system, only certain “allowed” motions with specific values of energy, angular momenta, and projections of moments of the electron in the atom are possible. Each stationary state of the atom is described by means of a certain wave function which is a solution of a wave equation of a particular type—the Schrödinger equation; an “electron cloud,” which (on the average) characterizes the distribution of electron charge density in the atom, corresponds to the wave function. In the 1920’s and 1930’s approximate methods of calculating the distribution of electron charge in complex atoms were developed—in particular, the Thomas-Fermi method (1926, 1928). This quantity and the value of the so-called atomic factor connected with it are important in the investigation of electron collisions with atoms and their scattering of X-rays. On the basis of quantum mechanics, the accurate calculation of the energies of electrons in complex atoms by means of the Schrödinger equation was successfully carried out. The approximate methods of such calculations were developed in 1928 by D. R. Hartree (England) and in 1930 by V. A. Fock (USSR). Investigations of atomic spectra completely confirmed the quantum mechanical theory of the atom. 
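The relation p = h/λ quoted above fixes the wavelength of any electron of known momentum. For an electron accelerated through a potential difference V, the momentum is p = √(2meV), which gives a wavelength of about 0.12 nm at 100 V — comparable to atomic dimensions, which is why electron waves diffract from crystals. A sketch (Python, with standard constant values; the 100 V figure is an illustration, not from the article):

```python
# de Broglie wavelength of an electron accelerated through V volts:
#   lambda = h / p = h / sqrt(2 * m * e * V)   (non-relativistic)

from math import sqrt

h = 6.626e-34        # Planck constant, J*s
m = 9.109e-31        # electron mass, kg
e = 1.602e-19        # elementary charge, C

def de_broglie_nm(V):
    p = sqrt(2.0 * m * e * V)    # momentum from kinetic energy e*V
    return h / p * 1e9           # wavelength in nanometers

print(de_broglie_nm(100.0))      # ~0.12 nm, about one atomic diameter
```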
In addition, it was explained that the state of an electron in an atom depends essentially on its spin—the intrinsic mechanical angular momentum. An explanation was given of the effect of external electric and magnetic fields on the atom. An important general principle connected with electron spin was discovered by the Swiss physicist W. Pauli (1925): according to his principle, in each electron state in the atom it is possible to find only one electron; if the given state is already occupied by an electron, then the next electron entering into the composition of the atom is forced to occupy some other state. On the basis of the Pauli principle, the capacity of electron shells in complex atoms, which determines the periodicity of the properties of elements, was finally established. Starting from quantum mechanics, the German physicists W. Heitler and F. London in 1927 put forth a theory of the so-called homeopolar chemical bonds of two identical atoms (for example, the atoms of hydrogen in the H2 molecule), which cannot be explained within the framework of the Bohr model of the atom. Important applications of quantum mechanics in the 1930’s and later were the investigations of bound atoms, which form molecules or crystals. The states of atoms which are part of a molecule are essentially different from the states of a free atom. Significant changes are also undergone by an atom in a crystal under the influence of intracrystalline forces, the theory of which was first worked out by H. Bethe in 1929. By studying these changes, it is possible to establish the character of the interactions of the atom with its environment. The greatest experimental achievement in this area of atomic physics was the discovery by E. K. Zavoiskii in 1944 of electron paramagnetic resonance, which afforded the possibility of studying the different types of bonding associations of atoms with their environment. Modern atomic physics. 
The basic branches of modern atomic physics are the theory of the atom, atomic (optical) spectroscopy, X-ray spectroscopy, radio spectroscopy (which also investigates the rotational levels of molecules), and the physics of atomic and ion collisions. The various branches of spectroscopy encompass different frequency ranges of radiation and, correspondingly, different energy ranges of quanta. Whereas X-ray spectroscopy investigates the radiation of atoms with quantum energies up to hundreds of thousands of eV, radio spectroscopy deals with very small quanta—down to quanta of less than 10⁻⁶ eV. The most important problem of atomic physics is the detailed determination of all the characteristics of atomic states. The question concerns the determination of the possible values of the atom’s energy (its energy levels), the values of the angular momenta, and other quantities that characterize the states of the atom. The fine and hyperfine structures of the energy levels and changes of the energy levels under the influence of electrical and magnetic fields—both external (macroscopic) and internal (microscopic)—are investigated. Such a characteristic of the states of the atom as the lifetime of an electron at an energy level has great significance. Finally, great attention is paid to the mechanism of excitation of atomic spectra. The fields of the phenomena which are studied by the different branches of atomic physics overlap. X-ray spectroscopy by measurement of the emission and absorption of X rays permits the determination for the most part of the binding energy of inner electrons with the atomic nucleus (ionization energy) and the distribution of the electric field within the atom. 
Optical spectroscopy studies the set of spectral lines which are emitted by atoms, and determines the characteristics of the atomic energy levels, the intensities of spectral lines and lifetimes of the atom in excited states associated with them, the fine structure of energy levels, and their displacement and splitting in electric and magnetic fields. Radio spectroscopy investigates in detail the width and shape of spectral lines, their hyperfine structure, shifting and splitting in a magnetic field, and intra-atomic processes in general which are caused by very weak interactions and influences of media. The analysis of the results of the collisions of fast electrons and ions with atoms affords the possibility of obtaining information about the electron charge density distribution (“electron cloud”) within the atom, the excitation energies of atoms, and ionization energies. The results of the detailed study of the structure of atoms find their broadest application not only in many branches of physics, but also in chemistry, astrophysics, and other fields of science. On the basis of the investigation of the broadening and displacement of spectral lines, it is possible to determine local fields in the medium (liquid, crystal) which cause these changes and the state of this medium (temperature, density, and others). Knowledge of the distribution of electron charge density in an atom and its variations during external interactions permits the prediction of the type of chemical bonds which the atom can form and the behavior of an ion in a crystalline lattice. Information concerning the structure and characteristics of atomic and ion energy levels is extremely important for quantum electronic devices. The behavior of atoms and ions during collisions—their ionizations, excitation, and charge exchange—is important for plasma physics. Knowledge of the detailed structure of atomic energy levels, particularly multiple-ionized atoms, is important for astrophysics. 
Thus, atomic physics is closely connected with other branches of physics and other natural sciences. The concepts of the atom which have been developed in atomic physics also have great significance for man’s Weltanschauung. The “stability” of the atom explains the stability of various types of matter and the immutability of the chemical elements under natural conditions—for example, under ordinary atmospheric temperature and pressure found on the earth. The “plasticity” of the atom—the variation of its properties and states during the variation of the external conditions under which it exists—explains the possibility of forming more complex systems which are qualitatively unique and their ability to take on various forms of internal organization. Thus a solution is found for the conflict between the idea of immutable atoms and the qualitative diversity of substances—a conflict which has existed both in ancient and in modern times and which has served as the basis for the criticism of atomism.
The 2000 Nora And Edward Ryerson Lecture Leo P. Kadanoff "Making a Splash, Breaking a Neck: The Development of Complexity in Physical Systems" April 17, 2000 Work done by Michael Brenner, Peter Constantin, Todd Dupont, Leo Kadanoff, Albert Libchaber, Sidney Nagel, Robert Rosner, and many others We study the motion of fluids. Our program is characterized by close cooperation among experimenters, theoreticians, and simulators. We aim to develop a fundamental understanding of fluid flow. Mostly our work involves solving particular problems, e.g., “how does heat flow in a pot of water heated over a flame?” But, in following these problems we soon get to broader issues: predictability and chaos and the natural formation of complex “machines.”1 A Question One of the great concepts of our physics profession is the simplicity of the laws of physics. Maxwell’s equations or the Schrödinger equation or Hamiltonian mechanics can each be expressed in a few lines. The ideas that form the foundation of our worldview are also very simple indeed: The world is lawful. Different laws apply in different situations. But always and everywhere the laws are consistent with one another and can be derived one from the other. Thus, everything is simple, consistent, and neat. The laws of physics are expressible in terms of everyday mathematics, usually partial differential equations. Everything is simple and neat—except, of course, the world. Visualize waves in a stormy sea. Each wave is different from the last, bigger or littler, moving east or north, perhaps breaking and throwing spray far up into the air. This jumble of varying objects defines one kind of complexity. But also think about any living thing. Any organism contains many different kinds of working parts, all functioning together as some amazingly intricate machine. Living things show functional intricacy and apparent purpose, and thus reflect another, richer, kind of complexity. 
Economic, social, and ecological systems also seem to show a rich complexity together with a great degree of organization. In our world, each event has a multitude of different causes. The present is determined by the past, but the chain of causation is often complicated. Furthermore, small changes in a hypothetical past can have effects which grow larger and larger as time progresses, making many long-term predictions beyond practical possibility. Things that behave like this are said to be chaotic. In general, complexity is the product of chaotic processes. But now come back to our question: Why, if the laws are so simple, is the world so complicated? Before understanding comes observation. We shall look at the world partly through the eyes of Harold Edgerton,2 inventor of strobe photography and founder of EG&G. He took great pleasure in recording natural processes in “stop-action.” Figure 1 shows the behavior of a tiny microcosm, constructed from milk. A drop falls into a glass. The drop makes a crater. The crater comes up and produces a crown with points. The points rise and separate from the main mass of milk. The crater subsides. A single drop of milk escapes from the mass of fluid and rises high in the air. So a very complicated event arises from the simplest of starting points. Figure 1. Strobe photos of the splash produced by a milk drop landing on the surface of a glass of milk. Notice the regularity of the initial shape and the irregularity of the last three pictures. Photo by Dr. Harold E. Edgerton, © Harold & Esther Edgerton Foundation, 2001, courtesy of Palm Press, Inc. Notice the irregularity of the crown. Edgerton liked perfect pictures. He wanted to produce a setup in which he could photograph a perfectly regular and reproducible structure. So he was frustrated by the result shown here. Over the years since this photo was taken, science has developed the concept of chaos to explain outcomes like this one. 
The idea is that small changes in the situation at the start of the splash can have an influence that will be magnified into a very large effect by the time the last picture is taken. More specifically, little gusts of air at the beginning of the process can change the number of points in the crown and the sequence in which the drops come off the points. The usual metaphor is that the presence or absence of a butterfly beating its wings in Brazil can substantially change the weather in Chicago two weeks later.3 Here at Chicago, a group of us4 became concerned with the details of the process in which a mass of fluid breaks in two. Instead of looking at the points atop the Edgerton crown, we focused upon fluid dripping from a pipette. (See figure 2 for Sidney Nagel’s sequence of pictures of a falling drop.) Just before a drop breaks loose, a thin neck connects it to the main mass of fluid. We were interested in this neck for several, related, reasons. We had good reason to believe that the shape of the thin neck was universal, that is, independent of the details of the experimental setup. Its shape would not depend upon whether the drop was moving up or down, whether the fluid was milk or honey, or whether the process was observed one microsecond or ten microseconds before the final separation. Experience and theoretical analysis have both led us to expect universal behavior whenever there is a qualitative change in the behavior of any physical system. Here the qualitative change is one in which a single mass of fluid separates into two. The mathematics and physics of qualitative change is, in itself, of considerable interest. Universality5 makes any result one gets broadly applicable. Figure 2. Stroboscopic pictures of a drop of water falling from a pipette. Photo by Sidney R. Nagel. See X. D. Shi, M. P. Brenner, S. R. Nagel, Science 265 (1994): 219. But at the same time as we scientists believe in universality, we also believe in chaos. 
In most situations, and certainly in the milk crown, we believe that the final outcome is sensitively affected by the details of the starting situation. Universality and chaos seem to be in opposition—incompatible. We physicists love physical situations in which a collision of ideas, an incompatibility, can occur. We believe that such situations lead to a deeper understanding of nature. Theorists predicted the characteristic shape of the neck just before drop-separation. Experimentalists measured this shape by looking in exquisite detail at pictures of the drop. (This shape can be seen, for example, in the fourth plate of figure 2 or in the just-separating drops in Edgerton’s figure 1.) The first measurements showed exactly the predicted shape. So has chaos disappeared? Not at all. Sometimes, the experiment showed a behavior considerably richer than the one shown here. Sometimes, a little piece of the neck elongated, producing a skinnier neck. That might stretch out too, producing a yet skinnier neck. In both theory and simulations, this stretching could occur several times, producing an unpredictable number of necks, each one however approaching the universal shape. Chaos and universality can coexist in a single system, often tied together in surprising ways. Lessons from Complexity At one time, many scientists believed that the study of complexity could give rise to a new science. In this science as in others, there would be general laws, with specific situations being studied as the inevitable working out of these laws of nature. The study of complexity has not gone in that direction. No universally applicable laws of complexity have emerged. Instead, the systems we study have taught us lessons, rather like the lessons for life our grandmothers told us. They are general ideas which apply broadly, but they must be applied with care and good judgment. Different portions of the drop behavior have different degrees of predictability. 
The order of separation of the tips of Edgerton’s crown is almost completely unpredictable. The shape at separation is almost completely predictable, except that sometimes a multiplicity of necks appear. Can’t you hear yourself being told, perhaps after a first unhappy love affair, “Sometimes you can know how things will come out, sometimes you have to live it to find out. . . .” No, complexity is not a science in the usual terms. Fluids Heated from Below Albert Libchaber, now at Rockefeller University, did a series of very carefully controlled experiments on turbulent motion in fluids heated from below. I played a role in analyzing and interpreting these experiments. Turbulence is ever-changing. The swirls of wind in Chicago never have the same pattern for two seconds in a row. The pattern changes and changes and changes again. Nonetheless, wind is in some sense always the same. We wanted to understand the flow patterns in a heated fluid. We expect to see some roughly fixed elements, appearing in a mosaic of different combinations, producing ever-changing patterns. One structure that is found in all heated fluids is called a plume. In most situations, heated fluid is less dense than its cooler counterpart. For this reason, the hotter parts of fluids feel forces that push them upwards. As heated fluid rises, it pushes aside fluid above it and is in turn deflected by this pushing. The rising fluid produces a little stalk, like the base of a mushroom, while the deflected fluid produces a structure very much like the cap of a mushroom. As the pushing and deflection continue, the top of the cap folds over. This very characteristic mushroom-like structure can be found in many different situations. Figure 3.  Plume produced by thermonuclear explosion “Joe 4,” August 12, 1953. From Richard Rhodes, Dark Sun: The Making of the Hydrogen Bomb (New York: Simon & Schuster, 1995), photo 51. Look first at figure 3, which is a picture of a nuclear explosion. 
It shows a very large plume produced by rising gases. Yet larger plumes are depicted in figure 4, which is reproduced from a computer simulation of the surface of the sun done by Malagoli, Dubey, and Cattaneo. This picture shows many cold plumes falling downward into the sun. Yet another, more mundane, plume can be seen in Edgerton’s photo (figure 5) of the fluid rising from a candle flame. (Each scientist is proud of his/her art. Edgerton did not have to include the bullet and its shock wave in his candle picture, but he put it in to show the power of the strobe technique which he invented. Nagel and coworkers did not have to construct museum quality pictures of fluid in motion, but they were proud of their strobe technique (see figures 2 and 11) and wanted to show off, too.) Figure 4. Computer simulation of the temperature pattern near the surface of the sun. The darker colors indicate lower temperatures. Notice the many falling plumes near the surface. Simulation by Andrea Malagoli, Anshu Dubey, and Fausto Cattaneo. Figure 5. Plumes from candle flame. This strobe photo also shows a bullet and its associated shock wave passing through the flame. Photo by Dr. Harold E. Edgerton, © Harold & Esther Edgerton Foundation, 2001, courtesy of Palm Press, Inc. But what did Libchaber and coworkers actually see? They worked with a small container heated carefully and uniformly from below and cooled from above. They maintained a fixed temperature difference between the hotter bottom and the cooler top of their container. Figure 6. Picture of a fluid heated from below, Jun Zhang, S. Childress, and Albert Libchaber, Physics of Fluids 9 (1997): 1034. The bright lines show regions of rapid temperature variation. The fluid is seen to contain many plumes, especially near the walls. A counterclockwise flow can be seen. Figure 6 is a picture of a flow they observed. As you can see, the container is filled with plumes. 
Hot plumes congregate in an upwelling jet of fluid near the right-hand wall of the container. A similar, downward jet formed from cold plumes occurs on the left-hand wall. Large numbers of hot plumes are also found in left-to-right motion in a layer called the mixing zone near the bottom of the container. A similar layer on the top contains cold plumes, moving in the opposite direction. The central region contains a few plumes, hot and cold, in mostly random motion. These plumes have gotten loose from the main flow but nonetheless participate in an overall counterclockwise motion. In addition, there are very thin layers, not really visible in the present picture, near the top and bottom walls. These boundary layers actually contain the majority of the temperature drop between bottom and top of the container. Figure 7. Cartoon view of Rayleigh-Bénard cell, by G. Zocchi, E. Moses, and A. Libchaber, Physica A 166 (1990): 397. This picture focuses on the bottom portion of the fluid and the mechanism for production of warm plumes. On the top, there is a similar mechanism for the production of cold plumes. So the container is a complex “machine” containing many different working parts: boundary layer, mixing zone, central region, jets. Figure 7 is a “cartoon” drawing which shows how the machine works. Start at the lower left-hand corner of the cell. A flow is progressing from left to right. This flow is like a wind. Like the wind on Lake Michigan, it can help waves move. In this case, the waves are changes in the height of the thin boundary layer at the bottom of the cell. Waves move from left to right throwing up spray as they go. Because the spray is hot fluid, it tends to rise forming the swirls and plumes shown in the picture. These structures grow larger. A few come loose and move into the central region. Most hit the right-hand wall and move upward as a jet toward the top of the cell. As the flow hits the top of the cell it makes a splash. The splash makes waves. 
The waves move along the top, producing cold spray and thence cold plumes. The plumes form into a downward jet at the left-hand side, splash on the bottom, and produce hot waves. . . . So a complex motion takes place, having the seeming purpose of moving heat from the bottom of the container to the top. So we have a story and a few lessons. The story: A cell is filled with a fluid. The fluid is strongly heated from below. Buoyancy raises the heated material and a flow starts. Lesson I: A nonequilibrium system can organize itself to produce the most amazing complexity, with many different working parts, each serving a different function. This self-organized machine reminds one of the processes occurring in biological systems. I believe that biological systems have arisen precisely because physical systems have a natural tendency to generate complexity. Lesson II: Memory can exist in a noisy environment. The system will move for a very long time in one sense of rotation. Here we showed a counterclockwise rotation. But if we had started the system in the other direction, it would have continued to flow in that sense for a very, very long time. Lesson III: Disasters happen. Once in a very long while this orderly flow in one direction stops because of some fluctuation in the system. And, in a very uncharacteristic and unusual pattern of movement, the flow reverses itself. Most complex things undergo large, rather unpredictable changes. Every once in a while we get a tornado in Illinois. Or an earthquake. Or an ice age. Complex systems do big, unexpected things. Example: Fluids in Motion: A Square Dance We have seen that fluids can exhibit extremely complex patterns of motion. Generally, matter in motion is described by equations of a type called partial differential equations (PDEs). These equations relate rates of change in space to rates of change in time. Spatial variations translate into motion. 
The particular equation used to describe fluid flow motion is called the Navier-Stokes equation— ∂u/∂t + (u · ∇)u = −(∇p)/ρ + ν∇²u, ∇ · u = 0 —and says how the velocity, u, and the pressure, p, depend upon space and time. PDEs are for the initiates. However, as we shall see in just a moment, the basic ideas that go into the derivation of these particular PDEs are very simple indeed. We shall then consider a square dance (or, equivalently, a computer program) which realizes these ideas. Finally, we shall see how the dance, with only two basic steps, can reproduce all the complexity of fluid motion. There are three basic ideas in the PDE: First: A fluid contains many particles in motion. These particles will be our dancers. Second: “Conservation laws”: Some things are never lost, only moved around. The number of particles and momentum never change—they only move from place to place. Third: All changes are local and isotropic. These are technical words saying that all communication in the system is over short distances and that the system has, to a sufficient degree, a symmetry under rotations of its compass directions. The big idea: Do the above right (plus a little more) and you will construct a model system with behavior just like that of real fluids. We shall first use the metaphor that our fluid is a game, then translate that game into a dance, and finally show that the game and the dance have movements just like a fluid. For simplicity, all this is to happen in two-dimensional space. The space is represented by a board in the shape of a triangular lattice (See figure 8). On the board there are a group of pieces. Each piece comes with an arrow, which we pick to point in any one of the six directions along the axes of the lattice. Each site on the lattice can contain several pieces with their arrows pointing in different directions. Figure 8. The Dance. The dancers are represented by arrows, as shown in the left-hand frame. 
In the first step, each dancer moves one unit in the direction of his/her arrow. The result is the second frame. Next, if the dancers on each point have arrows that add up to zero (by vector addition), they twirl around to reach the configuration of the third frame. This motion then repeats, endlessly. Of course, the pieces are the particles, and their arrows are the different possible directions of their velocity or momentum. (The two are proportional to one another.) Our game, or dance, must be set up so that the pieces or dancers move with rules that satisfy the three basic requirements of the fluids. So start the dance. Begin from the first panel of figure 8. The caller for this American country dance cries “Promenade,” and each dancer moves one step in the direction of his/her arrow. This brings us to the second panel of the figure. Neither the number of particles nor the total momentum has changed. So far, so good. But actual particles must be able to change their momentum. So now the caller cries out “Swing your partner” and sotto voce “if your total momentum is zero.” (This is, after all, a mathematical dance.) All pairs on a given lattice site with oppositely directed arrows and all triplets with arrows one hundred twenty degrees apart have vectors which add up to zero. As shown in figure 8, when these two cases occur, the dancers rotate together, maintaining their total momentum at zero, producing the last panel of the figure. These two steps satisfy all our rules. The dance continues with the caller crying first one step then the next, throughout the long night. To make a fluid, take a huge board with many dancers. Go through the two steps many, many times. My contention is that the resulting pattern of motion will, if smeared or averaged over a moderately large region of the board, exactly and precisely reproduce the solution of the Navier-Stokes equation and, in equal measure, the motion of the fluid.
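The two calls can be turned into a toy program (a minimal sketch of my own, not the full FHP lattice-gas model of the literature; real implementations randomize the sense of the pair rotations, while this version always twirls one way). Dancers live on a periodic triangular board written in axial coordinates; "Promenade" moves each dancer along its arrow, and "Swing your partner" rotates zero-momentum pairs and triples by sixty degrees. Particle number and total momentum are then conserved exactly.

```python
import numpy as np

# Six unit velocities of the triangular lattice, at angles of k*60 degrees.
VEL = np.stack([np.cos(np.arange(6) * np.pi / 3),
                np.sin(np.arange(6) * np.pi / 3)], axis=1)
# The same six directions written as (row, col) shifts in axial hex coordinates.
SHIFT = [(0, 1), (1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1)]

def step(occ):
    """One 'promenade' plus one 'swing your partner' on a periodic board.

    occ[d, i, j] is True when a dancer with arrow d stands on site (i, j).
    """
    # Promenade: every dancer moves one site along his/her arrow.
    occ = np.stack([np.roll(occ[d], SHIFT[d], axis=(0, 1)) for d in range(6)])
    new = occ.copy()
    counts = occ.sum(axis=0)
    # Swing: head-on pairs (total momentum zero) rotate by sixty degrees...
    for d in range(3):
        pair = (counts == 2) & occ[d] & occ[d + 3]
        for s in (d, d + 3):
            new[s][pair] = False
            new[(s + 1) % 6][pair] = True
    # ...and so do symmetric triples, whose three arrows also sum to zero.
    for b in (0, 1):
        tri = (counts == 3) & occ[b] & occ[b + 2] & occ[(b + 4) % 6]
        for s in (b, b + 2, (b + 4) % 6):
            new[s][tri] = False
            new[(s + 1) % 6][tri] = True
    return new

def momentum(occ):
    """Total vector momentum: one unit of velocity per dancer."""
    return (occ.sum(axis=(1, 2))[:, None] * VEL).sum(axis=0)

rng = np.random.default_rng(0)
occ = rng.random((6, 24, 24)) < 0.3      # random dancers on a 24x24 board
n0, p0 = occ.sum(), momentum(occ)
for _ in range(50):
    occ = step(occ)
print(occ.sum() - n0)                     # particle number: unchanged
```

Averaging the arrows over blocks of many sites, as described next, is what turns this bookkeeping into a smooth velocity field.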
To justify this contention, look at the non-trivial motion of a fluid past an obstacle. Such a flow, for a real fluid moving past a long cylinder, is shown in figure 9. The flow is made visible with lines of smoke placed in the fluid. Far away from the obstacle, the fluid velocity is uniform. Over much of the field of view, the smoke lines are almost straight. But as the fluid moves close to and past the cylinder, its flow changes quite considerably. The region behind the obstacle develops a characteristic pattern of swirls, in which neighboring swirls sit on opposite sides of the centerline and go around in opposite senses. This real flow pattern will now be compared with a computer-generated square dance.

Figure 9. Fluid flows past a cylindrical obstacle and produces a characteristic flow pattern called a von Kármán street. Photo courtesy of Peter Bradshaw, reproduced in Milton Van Dyke, An Album of Fluid Motion (Stanford, Calif.: Parabolic Press, 1982).

Working with a large computer at Los Alamos National Laboratory, d’Humières, Pomeau, and Lallemand implemented the square dance model on a lattice with about two million sites and with perhaps six million dancers. They set the dancers into an average motion from left to right and made them move past a computer version of an obstacle. They had each of the dancers do the little dance steps many thousands of times. To see the resulting pattern, they then set up regions in their system containing about one hundred lattice sites and found the average particle velocity in each of these regions. Their result is shown in figure 10. Just as in the real flow, one sees behind the obstacle a set of moving vortices with alternating directions of flow. Examples like this one showed that the dance model got the qualitative properties of the fluid just right.

Figure 10. Computer simulation of von Kármán street. See D. d’Humières, Y. Pomeau, and P.
Lallemand, “Simulation d’allées de von Kármán bidimensionelles à l’aide d’un gaz sur réseau,” Comptes rendus de l’Académie des sciences, Série II 301 (1985): 1391–1394. I thank D. Rothman of MIT for providing this picture. At Chicago, Guy McNamara, Gianluigi Zanetti, and I did a more quantitative check. In our computer, we set up a narrow channel, like a pipe, and looked at a fluid in steady motion along this pipe. For comparison, we could look at a simple exact solution of the Navier-Stokes equation. The comparison showed that the mathematical solution and the computer simulation gave precisely the same results. So we learn that the richly complex motion of fluids can be constructed from the quite simple steps of the dancers. There is a lesson from it all, viz., complexity can arise from simplicity. Simple events, linked together, and repeated sufficiently often, can produce complex outcomes. It is possible that the complexity seen in biological systems is nothing more than the natural tendency of oft-repeated physical events to produce richly structured outcomes. A Cautionary Note I have described the work above as the outcome of the interaction between three different kinds of scientific tools: laboratory investigation, mathematical/theoretical analysis, and computer simulation. The quality of scientific technique and ability in the first two areas has remained roughly constant in recent decades. Sid Nagel uses an old camera derived directly from an Edgerton design. My use of universality ideas to solve PDEs arises directly from techniques put together by Kolmogorov, Greenspan, Barenblatt, Zeldovitch, and others three, four, or five decades ago. Modern advances in electronics and computer technique help us out some in lab and theory, but they mostly do not change our art. On the other hand, our capability for doing simulations has immensely improved as computers have gotten faster. 
In some areas, simulational methods seem to be, in large measure, displacing old-fashioned theory and experiment. For example, the Department of Energy in its studies aimed at permitting it to safely preserve our stockpiles of nuclear weapons seems to have largely given up small-scale experiment. Instead, it uses computer simulation to assess the possible outcomes of aging and accidents upon the behavior of nuclear weapons. I argue that a sole reliance on simulations is quite risky in any class of situations that has not had a full exploration by theory and particularly by experiment. Recent advances in computer technique have enabled us to construct better and more reliable studies of things that were mostly understood beforehand. But physical systems can produce results that are quite surprising. The discovery of such unexpected outcomes has, in recent years, been largely a product of laboratory experiments and theoretical work.6 Small-scale experiments can survey a wide range of conditions and of situations much more rapidly and flexibly than can simulation. Theory can put together a whole class of understandings and thereby focus predictive thought. But simulations will mostly help us see in more detail what we already know. Let me focus on this issue by looking at a particular experiment. Figure 11 shows a fluid—water—and an electrode shown in black. Below the surface of the water there is a flat metallic plate set horizontally. An electrical potential difference of twenty thousand volts is maintained between plate and electrode. As a result there is a very strong electric field in the neighborhood of the electrode. Because of its electrical properties, the water is drawn toward regions of high electric field. It forms a bump which moves upward toward the electrode. The top of the bump rises and rises, and then almost comes to a point.

Figure 11. Strobe photos of behavior of fluid placed in a strong electric field.
At the top of each frame, one can see an electrode, charged to 20,000 volts. In the first frames, a mound of fluid moves toward the high electric field region near the electrode. The fluid comes to a point. An arc of charged material is produced. Finally, the arc falls apart into many tiny fluid droplets. Lene Oddershede and Sidney R. Nagel, “Singularity during the Onset of an Electrohydrodynamic Spout,” Physical Review Letters 85 (2000): 1234–1237. As a theorist I am intrigued by this point. I specialize in seeing how new structures, like the point, are produced. I have worked with several different students and postdocs in setting up computer simulations which can give the rising bump and the approach to the point. Doing an accurate simulation is hard, and we have not yet reached complete success, but we are getting there. But look at the last three frames. Just after the fluid comes to its point, there is some motion between fluid and electrode. In the next frame, the motion resolves itself, and we can see that it is essentially a bolt of lightning flashing between the fluid and the electrode. In the next frame, the lightning has ceased and been replaced by the production of fine droplets of the water (rain?!) over an extended region. There is a lesson from this, too: Complex systems sometimes show qualitative changes in their behavior. (Here a bump has turned into lightning and rain.) Unexpected behavior is possible, even likely. This particular unexpected behavior would not, by a long shot, have been predicted by the simulations available to us today. Experiment found it. Theory can perhaps shed light on exactly what is going on. After the qualitative facts are exposed, we can then design simulations to test the ideas developed. To uncover and understand such things, it is best to have a balanced and interdisciplinary program of research. In the End... So perhaps there is no science of complexity. 
Nonetheless an open-minded and balanced scientific program can help us learn things about specific complex systems and even provide some general lessons about complexity. “Watch out for surprises” is one lesson. The need for a balanced program of research is another. Still another is that you should listen to your grandmother.

1. This paper is based in part upon a previous publication by Nigel Goldenfeld and myself: N. Goldenfeld and L. P. Kadanoff, “Simple Lessons from Complexity,” Science 284 (1999). Semitechnical expositions of the main subjects treated here are given in: The square dance machine: L. P. Kadanoff, Physics Today 39 (September 1986): 7. Breaking Necks: Blowups and Singularities: L. P. Kadanoff, “Reference Frames,” Physics Today 50 (September 1997): 11–12. Fluids heated from below: L. P. Kadanoff, A. Libchaber, E. Moses, and G. Zocchi, “Turbulence dans une Boîte,” La Recherche 22 (1991): 628–638.

2. Edgerton’s work can be seen in Stopping Time: The Photographs of Harold Edgerton (New York: Harry N. Abrams, 1987).

3. Edward N. Lorenz, The Essence of Chaos (reprint, Seattle: University of Washington Press, 1996). See also James Gleick, Chaos: Making a New Science (New York: Viking, 1987): 9–32. The latter work presents quite a different view of the scientific study of complexity than the one given here. For another view quite different from mine, see Murray Gell-Mann, The Quark and the Jaguar (New York: Freeman, 1994).

4. Sidney Nagel, Itai Cohen, and X. D. Shi led the experimental effort. Theory and simulation came from a group that included Peter Constantin, Todd Dupont, Michael Brenner, Jens Eggers, Andrea Bertozzi, and myself.

5. Universality is a concept that physical scientists derived from many independent sources. The idea, therefore, has many parents. I was fortunate enough to pick up the word from A. Migdal and A. Polyakov in a dollar bar in the Soviet Union.
I then imported it into the United States, where both the word and the concept found broad scientific usage.

6. A discussion of some early very notable accomplishments of computational physics can be found in Leo P. Kadanoff, “Computational Physics: Pluses and Minuses,” Physics Today 39 (1986): 7. A recent major accomplishment of simulational methods is the prediction of neutrino flux from the sun, which (when compared with observation) exposed an unexpected weakness in the common assumptions about neutrino behavior.

About the Lecturer

Leo P. Kadanoff is the John D. MacArthur Distinguished Service Professor in the Departments of Physics and Mathematics, the James Franck and Enrico Fermi Institutes, and the College. He received all of his academic degrees from Harvard University, completing his Ph.D. in 1960. In the 1960s, while on the faculty of the University of Illinois, he made innovative and original contributions to the understanding of phase changes, such as the change of water from liquid to ice. This work was recognized with the Buckley Prize of the American Physical Society in 1977, the Wolf Foundation Prize in 1980, and the Boltzmann Medal of the International Union of Pure and Applied Physics in 1989. As a Brown University faculty member in the 1970s, Kadanoff and his colleagues extended and applied the phase-transition work and developed a research program in computer simulations of urban dynamics. Kadanoff joined the University of Chicago faculty in 1978. Working with students, junior scientists, and colleagues, he helped construct a new field of knowledge called soft condensed-matter physics, which deals with such phenomena as the flow of fluids and the behavior of granular materials. He was awarded the National Medal of Science in 1999 for research contributions that have led to applications in engineering, urban planning, computer science, hydrodynamics, biology, applied mathematics, and geophysics.
Kadanoff received the University’s Quantrell Award for Excellence in Undergraduate Teaching in 1990 and is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society. His other honors include the Centennial Medal of Honor of Harvard University, the Onsager Prize of the American Physical Society, and the Grande Medaille d’Or of the Académie des Sciences de l’Institut de France.
Saturday, March 18, 2017

Particles' wave functions always spread superluminally

In these comments, Jacques shows that he is ignorant about many things that I (and my instructors) considered basics of quantum field theory since I was an undergraduate, such as:

1. The special theory of relativity and quantum mechanics are consistent but their combination is constraining and has some unavoidable consequences – some basic general properties of quantum field theories.

2. Consistent relativistic quantum mechanical theories guarantee that objects capable of emitting a particle are necessarily able to absorb them as well, and vice versa.

3. For particles that are charged in any way, the existence of antiparticles becomes an unavoidable consequence of relativity and quantum mechanics.

4. Probabilities of processes (e.g. cross sections) that involve these antiparticles are guaranteed to be linked to probabilities involving the original particles via crossing symmetry or its generalizations.

5. The pair production of particles and antiparticles becomes certain when energy \(E\gg m\) is available or when fields are squeezed at distances \(\ell \ll 1/m\) (much) shorter than the Compton wavelength.

6. Only observables constructed from quantum fields may be attributed to regions of the Minkowski spacetime so that they're independent from each other at spacelike separations (because they commute or anticommute).

7. Wave functions that are functions of "positions of particles" unavoidably allow propagation that exceeds the speed of light and there can't be any equation that bans it. The causal propagation only applies to quantum fields (the observables), not to wave functions of particles' positions.

8. Equivalently, almost all trajectories of particles that contribute to the Feynman path integral are superluminal and non-differentiable almost everywhere and this fact can't be avoided by any relativistic version of the mathematical expressions.
Causality is only obtained by a combination of emission and absorption, contributions from particles and antiparticles, and at the level of quantum fields (observables). It's a lot of basic stuff that Jacques should know but doesn't, and these insights drive him up the wall. Let's look at those things. The most well-defined disagreement is about the "relativistically corrected" Schrödinger equation\[ i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta} \psi + V(x) \psi \] You see that it's like the usual one-particle equation except that the non-relativistic formula for the kinetic energy, \(E=|\vec p|^2/2m\), is replaced by the relativistic one, \(E=\sqrt{|\vec p|^2+m^2}\), with the same Laplacian (times \(-\hbar^2\)) substituted for \(|\vec p|^2\). Jacques believes that when you substitute a localized wave packet for \(\psi(x,y,z)\) at \(t=0\) and you wait for time \(t'\), it will only spread to the ball of radius \(t'\) away from the original region: it will never propagate superluminally. Search for "superluminally" in his blog post and comments. Oops, it's wrong and embarrassingly wrong. I think that the simplest way to see why he's wrong is to realize that the equation above still has the usual non-relativistic limit. As long as you guarantee that \(|\vec p| \ll m\) in the \(c=\hbar=1\) units, the evolution of the wave packets must be well approximated by non-relativistic physics and the non-relativistic Schrödinger equation. Consider an actual electron moving around a nucleus. In the hydrogen atom, the motion is basically non-relativistic. Consider an initial localized wave packet for the electron that has a uniform phase, is much larger than the Compton wavelength \(\hbar/mc\approx 2.4\times 10^{-12}\,{\rm m}\) (it's simply \(1/m\) in the \(c=\hbar=1\) units) but still smaller than the radius of the atom. For example, the radius of the packet is \(10^{-11}\) meters.
Outside a sphere of this radius, the wave function is zero. Will this wave packet spread superluminally? You bet. By construction, the average speed is about an order of magnitude lower than the speed of light which is reasonably non-relativistic. So with a 1% accuracy (squared speed), and aside from the irrelevant phase linked to the additional additive shift \(E_0=mc^2\) to the energy, the wave packet will spread as if it followed the non-relativistic Schrödinger equation\[ i\hbar\frac{\partial}{\partial t} \psi = -\hbar^2\frac{\Delta}{2m} \psi + V(x) \psi \] Let's set \(V(x)=0\). OK, how do the wave packets spread according to the ordinary Schrödinger equation? Let's ask Ron Maimon – every good autodidact can answer such questions. Well, it's simple: the Schrödinger equation is just a diffusion (or heat) equation where the main parameter is imaginary. If \(m\) above were imaginary, \(m=i\mu\), then the solution to the diffusion equation would be\[ \rho(x,t)\equiv \psi(x,t) = \frac{\sqrt{\mu}}{\sqrt{2\pi t}} \exp(-\mu x^2/t) \] The width of the Gaussian packet goes like \(\Delta x\sim \sqrt{t/\mu}\). It's very simple. If you know the graph of the square root, you must know that the speed is initially very high. The speed \(dx/dt\) scales like the derivative of the square root of time, i.e. as \(1/\sqrt{t\mu}\). For times shorter than \(1/\mu\), the speed with which the wave packet spreads unavoidably exceeds the speed of light. It's kosher that we're looking at timescales shorter than the "Compton time scale" of the electron. We only assumed that the spatial size of the wave packet is longer than the Compton wavelength. Whether an analogous scaling is obeyed by the dependence on time depends on the equation itself and the answer is clearly No. The asymmetric treatment of space and time in the equation (the square root is only used for the spatial derivatives) may be partly blamed for that asymmetry.
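For a Gaussian start rather than a delta function, the spreading law can be verified directly (my numerical sketch, free non-relativistic evolution in \(\hbar=m=1\) units; the grid parameters are arbitrary): the exact spectral propagator reproduces the textbook width \(\sigma(t)^2 = \sigma_0^2 + (t/2m\sigma_0)^2\), whose late-time growth rate \(1/(2m\sigma_0)\) exceeds the speed of light once \(\sigma_0\) is squeezed below half a Compton wavelength.

```python
import numpy as np

# Free Schrodinger evolution of a Gaussian packet, hbar = m = 1.
N, L = 2048, 80.0
x = (np.arange(N) - N // 2) * (L / N)
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma0, t = 1.0, 2.0
psi = np.exp(-x**2 / (4 * sigma0**2)).astype(complex)
psi /= np.sqrt((np.abs(psi)**2).sum())

# Exact evolution: multiply each momentum mode by exp(-i p^2 t / 2).
psi_t = np.fft.ifft(np.exp(-0.5j * p**2 * t) * np.fft.fft(psi))

var = (x**2 * np.abs(psi_t)**2).sum() / (np.abs(psi_t)**2).sum()
expected = sigma0**2 + (t / (2 * sigma0))**2   # analytic spreading law
print(var, expected)                            # both close to 2.0
```

With \(\sigma_0=1\) and \(t=2\) the measured variance doubles, exactly as the formula demands; shrinking \(\sigma_0\) makes the asymptotic spreading speed arbitrarily large.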
Just to be sure, all the scalings are the same for the value of \(\mu=-im\) that is imaginary. If you don't feel sure that our non-relativistic approximation was adequate for the question, I can give you a stronger weapon: the exact solution of the equation (Schrödinger's equation with the square root). What is it? Well, it's nothing else than the retarded Green's function – as taught in the context of the quantum Klein-Gordon field. Look e.g. at Page 7 of these lectures by Gonsalves in Buffalo. The retarded function is the matrix element of the evolution operator for the one-particle Hilbert space\[ G_{\rm ret}(x-x') = \bra{x,y,z} \exp(H(t-t')/i) \ket{x',y',z'}. \] When the particle is initially (a delta function) at the position \((x',y',z')\) at time \(t'\) and you wait for time \(t-t'\) i.e. you evolve it by the square-root-based Hamiltonian up to the moment \(t\), and you ask what will be the amplitude at the position \((x,y,z)\), the answer is nothing else than the retarded Green's function of the difference between the two four-vectors. Can the retarded Green's functions be analytically calculated? As long as you include Bessel functions among your "analytically allowed tools", the answer is Yes. If we set the four-vector \(x'=0\) to zero, the retarded Green's function is simply\[ G_{\rm ret}(x) = \theta(t) \zzav{ \frac{ \delta( x^\mu x_\mu ) }{2\pi} - \frac{m}{4\pi}\, \frac{J_1 \zav{ m\sqrt{x^\mu x_\mu} }}{\sqrt{x^\mu x_\mu}} } \] For small and large timelike or spacelike separation, the Bessel function of the first kind used in the expression asymptotically is an odd function of the argument and behaves as (the sign is OK for positive arguments)\[ J_n(z) \sim \left\{ \begin{array}{cc} \frac{1}{n!} \zav{ \frac{z}{2} }^n&{\rm for}\,\, |z|\ll 1 \\ \sqrt{\frac{2}{\pi z}} \cos\zav{ z- \frac{(2n+1)\pi}{4} } & {\rm for}\,\,|z|\gg 1 \end{array} \right. \] But another lesson of the calculation is that the Green's function is nonzero even for \(x^\mu x_\mu\) negative, i.e.
spacelike separation – although it decreases roughly as \(\exp(-m|x|)\) over there if you redefine the normalization by the factor of \(2E\) in the momentum space (which is a non-local transformation in the position space). See the last displayed equation on page 2 of Gonsalves: Relativistic Causality: Quantum mechanics of a single relativistic free point particle is inconsistent with the principle of relativity that signals cannot travel faster than the speed of light. The probability amplitude for a particle of mass \(m\) to travel from position \({\bf r}_0\) to \({\bf r}\) in a time interval \(t\) is\[ U(t) = \bra{{\bf r}} e^{-iHt} \ket{{\bf r}_0} = \bra{{\bf r}} e^{-i\sqrt{{\bf p}^2+m^2}t} \ket{{\bf r}_0}\sim\\ \sim \exp(-m\sqrt{{\rm r}^2-t^2}),\quad {\rm for}\,\,{\rm spacelike}\,\, {\rm r}^2\gt t^2 \] Gonsalves also quotes "particle creation and annihilation" and "spin-statistics connection" as the other two unavoidable consequences of a consistent union of quantum mechanics and special relativity. He refers you to Chapter 2 of Peskin-Schroeder to learn these things from a well-known source. OK, you might ask, what's the right modification of the wave equation for one particle that guarantees that the wave packet never spreads superluminally? There is none. The condition that the packet never spreads superluminally would violate the uncertainty principle, a fundamental postulate of quantum mechanics. Why is it so? I can give you a simple idea. If you compress the particle to a small region, \(\Delta x \ll 1/m\), much smaller than the Compton wavelength, the uncertainty principle unavoidably says \(\Delta p \gg m\), so the motion is ultrarelativistic.
You could think that \(\Delta p\gg m\) or \(p\gg m\) is still consistent with \(v\leq 1\) but the evolved wave packets are unavoidably far from those that minimize the product of uncertainties and as the Bessel mathematics above shows, the piece in the spacelike region just can't exactly vanish, basically due to the non-local character of the operators. Similar derivations could be made with the help of the Feynman path integral. The typical trajectories contributing to the Feynman propagator are superluminal and non-differentiable almost everywhere and this fact does hold even in the calculation of the propagators in quantum field theory, a relativistic theory. As I discussed in a blog post in 2012, the superluminal or non-differentiable nature of generic paths in the path integral is needed for Feynman's formalism to be compatible with the uncertainty principle. Recall that we have solved a paradox: the calculation of \(xp-px\) in the path integral should amount to the insertion of the classical integrand \(xp-px\) to the path integral but this classical insertion is zero. The paradox was resolved thanks to the generic paths' being non-differentiable: the time ordering of \(x(t)\) and \(p(t\pm \epsilon)\) mattered. So does quantum field theory prevent you from sending signals to spacelike-separated regions? And how is it achieved? Yes, quantum field theory perfectly prohibits any propagation of signals superluminally or over spacelike separations. It does so by using the quantum fields. Quantum fields such as \(\Phi(x,y,z,t)\) and functions of them and their derivatives are associated with spacetime points and they commute or anticommute with each other when the separation is spacelike. The zero commutator means that you may measure them simultaneously – that the decision to measure one doesn't influence the other or that the order of the two measurements is inconsequential. 
Just to be sure, the previous sentence doesn't say that these spacelike-separated measurements are never correlated. They may be correlated but correlation doesn't mean causation. They're only correlated if the correlation (mathematically described as entanglement within quantum mechanics) follows from the previous contact of the two subsystems that have evolved or moved to the spacelike-separated points. The point is that the outcomes themselves may be correlated but the human decisions – e.g. which polarization is measured on one photon – do not influence the statistics for the other photon itself at all. The existence of the "collapse" associated with the first measurement doesn't change the odds for the second measurement – although if you know the result into which the first measurement "collapsed", you must refine your predictions for the outcome of the second measurements because a correlation/entanglement could have been present. OK, how does this vanishing of the spacelike-separated commutators agree with the fact that the packets spread superluminally? On page 27 of Peskin-Schroeder, you may see that the "commutator Green's function" is a difference between two ordinary Green's functions and because those two are equal in the spacelike region, the value just cancels in the spacelike region. But again, the Fourier transform of the ordinary propagator such as \(1/(p^2-m^2+i\epsilon)\) does not vanish in the spacelike regions of the 4-vector \(x^\mu\). It cannot vanish because this position space propagator knows about the correlation of fields at two points of space. And the fields in nearby, spacelike-separated points are correlated, of course (very likely to be almost equal), especially if they are closer than the Compton wavelength. You may view this correlation as a result of the escaping of high-momentum or high-energy quanta to infinity. 
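The superluminal leakage of the one-particle theory, including the \(\exp(-m\sqrt{r^2-t^2})\) tail quoted above from Gonsalves, can also be seen in a direct numerical experiment (my sketch in one dimension with \(\hbar=c=m=1\); the grid size, bump shape, and times are arbitrary choices): evolve a packet that is strictly zero outside \(|x|\lt R\) with the square-root Hamiltonian, exactly in momentum space, and measure the probability that ends up outside the light cone \(|x|\gt R+t\). It is small, falling off exponentially on the Compton scale, but it is not zero.

```python
import numpy as np

# 1D free evolution under H = sqrt(p^2 + m^2), with hbar = c = m = 1.
N, L, t, R = 4096, 200.0, 2.0, 5.0
x = (np.arange(N) - N // 2) * (L / N)
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# A smooth bump, strictly zero outside |x| < R.
psi = np.where(np.abs(x) < R,
               np.cos(np.pi * x / (2 * R))**2, 0.0).astype(complex)
psi /= np.sqrt((np.abs(psi)**2).sum())

# Exact spectral evolution with the relativistic dispersion E = sqrt(p^2 + 1).
psi_t = np.fft.ifft(np.exp(-1j * np.sqrt(p**2 + 1.0) * t) * np.fft.fft(psi))

# Probability that has leaked outside the light cone |x| > R + t.
P_out = (np.abs(psi_t[np.abs(x) > R + t])**2).sum()
print(P_out)   # tiny but nonzero
```

At this resolution the measured number approximates the continuum evanescent tail; refining the grid changes its value slightly but never drives it to zero.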
Only low-momentum or low-energy quanta are left in the vacuum and its low-energy excitations – and because of the Fourier relationship of \(x\) and \(p\), this absence of high-energy quanta means that the quantum fields can't depend on the spatial coordinates too much. You know, the message is that the ban on superluminal signals is compatible with quantum mechanics but the creation and annihilation of particles must be unavoidably allowed when you reconcile these two principles, special relativity and quantum mechanics. Jacques Distler believes that relativistic causality works even in "QFT truncated to the one-particle Hilbert space" which simply isn't right. He's really misunderstanding the key reason why quantum field theory was needed at all. Try to calculate the expectation value of the commutator of two fields \(F(x)\) and \(G(y)\) at two spacelike-separated points \(x,y\). The fields \(F,G\) may be the Klein-Gordon \(\Phi\) itself or some bilinear constructed out of it, e.g. the component of a current \(J^0\) that Distler talks about at some point. Imagine that you're calculating this commutator. You first expand \(F,G\) in terms of \(\Phi\) and its derivatives. Then you insert the expansions of \(\Phi\) in terms of the creation and annihilation operators. And you know the expectation values of the type \(\bra 0 \Phi(x)\Phi(y) \ket 0\). When you time-order \(x,y\), it's just the usual propagator in the position space. The precise calculation will depend on the operators you choose but a general point is true: There will be lots of individual terms that are nonzero for spacelike \(x-y\). Only if you sum all these terms – which will pick creation operators from \(F\) and annihilation operators from \(G\) and vice versa etc., you can achieve the cancellation. In particular, if you consider the operators \(F,G \sim J^0\), those will contain terms of the type \(a^\dagger a\) as well as \(b^\dagger b\) for a field whose particles and antiparticles differ. 
Only if you include the correlators from both particles and antiparticles matching between the points \(x,y\) may you get a cancellation of the commutator (its expectation value). In other words, the fact that a quantum field is capable of both creating a particle and annihilating an antiparticle (which is the same for "real" fields) is absolutely vital for its ability to commute with spacelike-separated colleagues! This insight may be formulated in yet another equivalent way. You just can't construct a localized – relativistically causally well-behaved – field operator at a given point that would only contain terms of a given creation-annihilation schematic type, e.g. only \(a^\dagger a\) but no \(b^\dagger b\), only \(a^\dagger\) but no \(b\), and so on. Any operator that has a well-defined "number of particles of each type that it creates or annihilates" is unavoidably "non-local" and can't exactly commute with its spacelike-separated counterparts! If you wanted to study the truncation of the quantum field theory to a one-particle Hilbert space where the number of particles is \(N=1\), and the number of antiparticles (and all other particle species) is zero, then all "first-quantized" operators on your Hilbert space correspond to some combination of operators of the \(a_k^\dagger a_m\) form. You annihilate one particle and create one particle. But no such combination of operators may be strictly confined to a region so that it would commute with itself at spacelike-separation. Students who have carefully done some basic calculations in quantum field theory know this fact from many "happy cancellations" that weren't obvious for some time. For example, consider the quantized electromagnetic field. Write the total energy as\[ H = \int d^3 x\,\frac{1}{2}\zav{B^2+ E^2}, \] i.e. the integral of the electric and magnetic energy density.
Substitute \(\vec A\) and its derivatives for \(\vec B,\vec E\), and write \(A\) and its derivatives in terms of creation and annihilation operators for photons. So you will get terms of the form \(a^\dagger a\), \(aa\), and \(a^\dagger a^\dagger\). At the end, the total Hamiltonian only contains the terms of the \(a^\dagger a\) "mixed" type but this simplified form is only obtained once you integrate over \(\int d^3 x\) which makes the terms \(a a\) and \(a^\dagger a^\dagger\) vanish because of their oscillating dependence on \(x\). If you only write the energy density itself, it will unavoidably contain the operators of the type \(aa\) and \(a^\dagger a^\dagger\) – annihilating or creating two photons – too. And the terms of all these forms are equally important for the quantum field to be well-behaved, especially for the vanishing of its commutators at spacelike separations. The broader lesson is that important principles of physics are ultimately reconcilable but the reconciliation is often non-trivial and implies insights, principles, and processes that didn't seem to unavoidably follow from the principles separately. So the combination of relativity and quantum mechanics implies the basic phenomena of quantum field theory – antiparticles, pair production, the inseparability of creation and annihilation, spin-statistics relations, and a few other things. In the same way, perhaps a more extreme one, the unification of quantum mechanics and general relativity is possible but any consistent theory obeying both principles has to respect some qualitative features we know from quantum gravity – as exemplified by string theory, probably the only possible precise definition of a consistent theory of quantum gravity. In particular, black holes must carry a finite entropy, be practically indistinguishable from heavy particle species, and such heavy particle species must exist. 
The processes around black holes and those involving elementary particles are unavoidably linked by some UV-IR relationships and string theory's modular invariance is the most explicit known example (or toy model?) of such relationships. In combination, the known important principles of physics are far more constraining than the principles are separately and they imply that the "kind of a theory we need" or even "the precise theory" is basically unique. This strictness is ultimately good news. If it didn't exist, we would be drowning in the infinite field of possibilities. Because of the "bonus" strictness resulting from the combination of important principles of physics, we know that a theory combining quantum mechanics and special relativity must work like quantum field theory and a theory that also respects gravity as in general relativity has to be string/M-theory.
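The spacelike cancellation discussed in this post can be checked numerically in a toy setting. The sketch below (my own illustration, not from the post) computes the imaginary part of the free massive scalar Wightman function in 1+1 dimensions; the field commutator is proportional to this imaginary part, so it vanishes at spacelike separation – where the particle and antiparticle contributions cancel – but not at timelike separation.

```python
import numpy as np

def im_wightman(t, x, m=1.0, cutoff=200.0, n=400001):
    """Im D(t,x) for a free massive scalar in 1+1D.

    D(t,x) = integral dk/(4*pi*w) exp(i(kx - wt)), w = sqrt(k^2 + m^2).
    The commutator expectation value is <[phi(t,x), phi(0,0)]> = 2i*Im D(t,x):
    it vanishes at spacelike separation because the particle (e^{-iwt}) and
    antiparticle (e^{+iwt}) pieces cancel there.
    """
    k = np.linspace(-cutoff, cutoff, n)
    w = np.sqrt(k * k + m * m)
    integrand = np.sin(k * x - w * t) / (4.0 * np.pi * w)
    return np.sum(integrand) * (k[1] - k[0])

print(im_wightman(0.3, 2.0))  # spacelike (|x| > |t|): ~0, commutator vanishes
print(im_wightman(1.5, 0.2))  # timelike (|t| > |x|): clearly nonzero
```

The residual value in the spacelike case comes only from the momentum cutoff and grid spacing; tightening them drives it further toward zero.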
Thursday, March 31, 2016

Why are Mersenne primes so special? How to achieve stability against state function reductions? So: why would Mersenne primes be so special?

Monday, March 28, 2016

Evidence for rho or omega meson of M89 hadron physics

Sunday, March 27, 2016

Tetrahedral equation of Zamolodchikov

I encountered in Facebook a link to a very interesting article by John Baez telling about the tetrahedral equation of Zamolodchikov - Zamolodchikov is one of the founders of conformal field theories. I should have been well aware of this equation, already because I worked some time ago with the question how non-associativity and language might emerge from fundamental physics [A(BC) is different from (AB)C: language is an excellent example]. The illustrations in the article of Baez are necessary to obtain some idea about what is involved and I strongly recommend them. From the illustrations one learns that a 2-D surface in 4-D space-time represents a deformation known as the third Reidemeister move. The move does nothing to the topology; physicists talk about the Yang-Baxter equation (YBE), which tells that it also does nothing to the quantum state. One can however assume that "doing nothing" is replaced with what is called a 2-morphism. "A kind of gauge transformation takes place" would be the physicist's first attempt to assign something familiar to this. The outcome is unitarily equivalent with the original but not the same anymore. This actually requires a generalization of the notion of group to quantum group. Braid statistics emerges: the exchange of braid strands brings in a phase or even a non-commutative operation on two braid states. The tetrahedral equation generalizes the "Yang-Baxterator" so that it is not an identity anymore but becomes what is called a 2-morphism. One however obtains an identity for two different combinations of 4 Reidemeister moves performed for 4 strands instead of 3.
To make things really complicated one could give up also this identity and consider the next level in the hierarchy. What makes this so interesting is that in the 4-D context of TGD also 2-knots formed by 2-D objects (such as string world sheets and partonic 2-surfaces) in 4-D space-time become possible. Quite generally: D-2 dimensional things get knotted in D dimensions. I have proposed that 2-knots could be crucial for information processing in living matter. Knots and braids would represent information such as topological quantum computer programs, and 2-knots would represent information processing such as the development of these programs. In TGD one would have something much more non-trivial than Reidemeister moves. The ordinary knots could really change in the operations represented by 2-knots, unlike in Reidemeister moves. 2-knots/2-braids could represent genuine modifications of 1-knots since the reconnections at which knot strands can go through each other could open the knot partially or make it more complex (remember what Alexander the Great did to open the Gordian knot). The process of forming a knot invariant means gradual opening of the knot in a systematic stepwise manner. This kind of process could take place in 4-D and be represented by a string world sheet, and the corresponding evolution of the quantum state in Zero Energy Ontology (ZEO) would represent the opening of the knot. One of the basic questions in consciousness theory is whether problem solving could have a universal physical or topological counterpart. A crazy question: could the opening of a 1-knot - a process defining a 2-knot - serve as the topological counterpart of problem solving and give rise to its quantal counterpart in ZEO? Or could Reidemeister moves transforming a trivial knot to manifestly trivial form correspond to problem solving? It would seem that the Alexandrian manner to solve problems is what happens in the real world;-). What about higher-D knots? 4-D space-time surfaces can get knotted in 6-D space-times.
If the twistorialization of TGD by lifting space-time surfaces to 6-D surfaces in the product of the twistor spaces of Minkowski space and CP2 makes sense, then space-time surfaces have representations as 4-surfaces in their 6-D twistor space. Could space-time surfaces get 4-knotted in twistor space? If so, the poor space-time surface - the classical world - would be in a really difficult situation!;-). By the way, also light-like 3-surfaces representing parton orbits could get knotted at the 5-D boundaries of the 6-D twistor space regions assignable to space-time regions with Euclidian or Minkowskian signature!

Wednesday, March 23, 2016

Direct evidence for Z' a la TGD and M89 J/Psi

The bumps indicating the presence of new physics predicted by TGD have begun to accumulate rapidly and personally I dare to regard the situation as settled: individual bumps do not matter but when an entire zoo of bumps with predicted masses emerges, the situation changes (see this, this and this). Colleagues (especially the Finnish ones) will encounter quite a demanding challenge in explaining how it is possible that I am still forced to visit the bread queue in order to cope with the basic metabolic needs;-). Lubos told that there is direct evidence for a Z' boson now: earlier the evidence was only indirect: breaking of universality and an anomaly in the angle distribution in B meson decays. The Z' bump has mass around 3 TeV. TGD predicts a 2.94 TeV mass for the second generation Z breaking universality. The decay width by direct scaling would be .08 TeV and is larger than the deviation .06 TeV from 3 TeV. Lubos reported half a year ago about an excess at 2.9 TeV which is also consistent with the TGD prediction. Lubos tells also about a 3 sigma bump at 1.650 TeV assigned to a Kaluza-Klein graviton in the search for Higgs pairs hh decaying to bbbar+bbbar. Kaluza-Klein gravitons are rather exotic creatures and in the absence of any other support for superstring models they are not the first candidate coming into my mind.
I do not know how strong the evidence for spin 2 is but I dare to consider the possibility of spin 1 and ask whether M89 hadron physics could allow an identification for this bump.

1. A very naively scaled up J/Psi of M107 hadron physics, having spin J=1 and mass equal to 3.1 GeV, would have mass 1.585 TeV: the error is about 4 per cent. The effective action would be based on a gradient coupling similar in form to the Zhh coupling. The decays via hh → bbbar+bbbar could take place also now.

2. This scaling might be too naive: the quarks of M89 might be the same as those of ordinary hadron physics and only the color magnetic energy would be scaled up by a factor 512. The c quark mass is equal to 1.29 GeV so that the magnetic energy of ordinary J/Psi would be equal to .52 GeV. If so, the M89 version of J/Psi would have a mass of only 269 GeV. Lubos tells also about evidence for a 2 sigma bump at 280 GeV identified as a CP odd Higgs - this identification of course reflects the dream of Lubos about standard SUSY at LHC energies. However, the scaling of the eta meson mass 547.8 MeV by 512 gives 280.4 GeV so that the interpretation as an eta meson, proposed already earlier, is convincing. The naive scaling might be the correct thing to do also for mesons containing heavier quarks.

In any case, even if one forgets J/Psi, there is now direct evidence for as many as 3 new branches of physics predicted by TGD! Two scaled variants of hadron physics (M89 and MG,79) and second generation weak physics (MG,79)! Colleagues have realized that history is in the making. I read from a popular article that theoreticians left their ongoing projects and have started to study the 750 GeV bump and certainly also other bumps. Ellis talked already about entire new physics. The TGD message has gone through! But no one mentions TGD although all is published in Huping Hu's journals and in Research Gate! No need for this in the recent science community based on the ethics of brutal opportunism: steal, lie, betray, as hippies expressed it.
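The naive 512-fold scalings quoted above are easy to check numerically. A quick sketch of my own (PDG-style mass values as quoted in these posts; the bump positions are the ones reported here):

```python
# M89 meson masses are obtained from ordinary (M107) meson masses
# by the p-adic scaling factor 2^((107-89)/2) = 2^9 = 512.
M89_SCALE = 2**9

ordinary = {"J/psi": 3.097, "eta": 0.5478}   # GeV, masses quoted in the post
bumps = {"J/psi": 1650.0, "eta": 285.0}      # GeV, reported bump positions

for name in ordinary:
    scaled = M89_SCALE * ordinary[name]
    err = 100.0 * abs(scaled - bumps[name]) / bumps[name]
    print(f"{name}: 512 x {ordinary[name]} GeV = {scaled:.1f} GeV "
          f"vs {bumps[name]:.0f} GeV bump ({err:.1f}% off)")
```

This reproduces the errors stated in the text: about 4 per cent for the scaled J/Psi against the 1.650 TeV bump, and under 2 per cent for the scaled eta against the 285 GeV bump.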
Tuesday, March 22, 2016

Causal loophole, zero energy ontology, subjective time, geometric time

The Finnish experimental physicists K. S. Kumar, A. Vepsäläinen, S. Danilin and G. S. Paraoanu (the leader of the group) working at Aalto University have published a very interesting article in Nature Communications. One can find also a popular article about the discovery. One studies transitions between states 1, 2 and 3. Usual causality implies that you must first induce the transition from state 1 to 2 - by a suitably chosen pulse in the experiments: the energy of the photons in the pulse must correspond to the energy difference between 2 and 1. After that you can induce the transition from 2 to 3 by a second suitably chosen pulse. In the quantum world you can do this in a different order. First comes the pulse inducing the transition 2 to 3 (the state is 1 so that nothing happens). Then you generate the pulse inducing the transition 1 to 2, and the transition 2 to 3 takes place! Weird! This might have profound implications for quantum information processing. A layman description for this loop in causality is the following. Suppose you must get out from a parking hall. In the classical world you first reverse the car and then drive away. In the quantum world you can first drive away and then reverse the car! A good choice if you are in a really big hurry! Maybe you should however not try this without the guidance of a quantum physicist. This is a really crazy looking idea, which can be understood only in a 4-D context. Zero Energy Ontology plus the fact that geometric time and subjective time (and therefore the corresponding causalities) are not one and the same thing explains this nicely. The "subjectively first" pulse represents a 4-D wave as a 4-D geometric entity (here time is geometric), which can induce the transition from state 2 to 3 if 2 is present in the 4-D domain - the causal diamond (CD) in TGD. Otherwise nothing happens: this is the case now! One kicks by the second pulse the state 1 to 2 "subjectively after" the first pulse.
The state is 2 in the entire CD and now the "subjectively first" pulse in the CD can induce the transition from 2 to 3 in the 4-D geometric space-time domain (CD)!

Saturday, March 19, 2016

Tensor nets and S-matrices

TGD suggests two approaches to the construction of the S-matrix.

The overly optimistic vision

For the details see the new chapter Holography and Quantum Error Correcting Codes: TGD View of "Hyper-Finite Factors, p-Adic Length Scale Hypothesis, And Dark Matter Hierarchy" or the article with the same title.

Thursday, March 17, 2016

Evidence for the eta meson of M89 hadron physics

Lubos has had two postings about evidence for bumps at LHC. See the recent post about the Moriond meeting and an earlier post about the ATLAS gluino workshop. The post about the Moriond meeting mentions a rumor spread by Jester telling about 5 sigma evidence for a 750 GeV resonance from ATLAS. ATLAS refuses to comment. Remember that the 750 GeV bump would correspond to one of the mesons of M89 hadron physics, whose masses are obtained by scaling those of ordinary hadron physics by a factor 512 (see the earlier posting). There are many of them in the range 600-900 GeV with precisely predicted masses and Lubos indeed mentions that this is the region still allowing the possibility for stop. I can only regret if the decays of M89 mesons could be responsible for wrong hopes about standard SUSY;-). I can estimate their masses and do some other simple things, and I even confess that TGD predicted them, but I am not responsible for their existence!;-) In the Moriond meeting the existence of the 750 GeV resonance - now christened as Chernette - was questioned. One might expect that it decays also via the Zγ channel. It doesn't. Could the meson property explain this? Ordinary neutral mesons decay to gamma pairs and these decays are exceptional, resulting from the axial anomaly term (instanton term for the electric field coupled to the pseudoscalar meson). This should be the case also for their scaled up M89 variants.
The decay rate should be exceptionally high since the instanton term is proportional to the mass scale squared and the decay rate to the mass scale to the fourth power. This could make these decays of Chernette much faster than other decays and at the same time serve as a demonstration that the new hadron physics predicted by TGD (not me) is to be blamed for the anomaly. Lubos mentions also indications for a 285 GeV bump decaying to a gamma pair. The mass of the eta meson of ordinary hadron physics is .547 GeV and the scaling of the eta mass by factor 512 gives 280.5 GeV: the error is less than 2 per cent. I have already earlier demonstrated (see the earlier posting) that the mesons of ordinary hadron physics have bumps at the scaled up masses. After having worked with the idea for about two decades, I dare to make the bet that M89 is there. The production of M89 protons with mass about 4.8 TeV would be a really dramatic verification of M89 hadron physics. If the M89 quarks are ordinary current quarks and the mass of the M89 proton is due to its magnetic body characterized by M89 instead of M107, the M89 proton could be created as the magnetic body of the ordinary proton makes a p-adic phase transition and contracts by a scale factor 1/512. A more plausible option is that the Planck constant increases by 512 so that the size does not change but the resulting proton (like also other M89 hadrons) would be dark. The M89 proton should decay to an ordinary proton by transforming the energy of its magnetic body to particles: the same mechanism would produce ordinary matter in the TGD variant of inflaton decay. Does the dark proton transform to an ordinary M89 proton first and then decay to an ordinary proton plus meson, or does it decay first to M89 hadrons, which eventually decay to ordinary hadrons? What is the life-time of the dark proton: is it so long that it leaves the reactor volume so that the M89 dark proton would make itself visible as missing energy? I cannot answer these questions.
In any case, this kind of phase transition is possible when the cm energy of the proton in the beam exceeds 4.8 TeV. The energy of 6.5 TeV per beam was reached last May so that the effect might have been observed if it is there. There is evidence also for other pieces of new physics predicted by TGD. First evidence for MG,79 hadron physics, which should be also there with mass scale 2^14 times that of ordinary hadron physics, and for the Higgs of the second generation weak bosons at the same mass scale and having mass 4 TeV. There is evidence also for the Z boson of the second generation weak physics inducing the breaking of lepton universality (see this). I know that my colleagues are not so stupid as they pretend to be, and the breakthrough of TGD is unavoidable and bound to occur within a few years.

Tuesday, March 15, 2016

Cyclic cosmology from TGD perspective

The motivation for this piece of text came from a very inspiring interview of Neil Turok by Paul Kennedy on CBC radio. The themes were the extreme complexity of theories in contrast to the extreme simplicity of physics, the mysterious homogeneity and isotropy of cosmology, and the cyclic model of cosmology developed also by Turok himself. In the following I will consider these issues from the TGD viewpoint.

1. Extreme complexity of theories vs. extreme simplicity of physics

The theme was the incredible simplicity of physics in short and long scales vs. the equally incredible complexity of the fashionable theories, not even able to predict anything testable. More precisely, superstring theory makes predictions: the prediction is that every imaginable option is possible. Very safe but not very interesting. The outcome is the multiverse paradigm having its roots in the inflationary scenario and stating that our local Universe is just one particular randomly selected Universe in a collection of an infinite number of Universes. If so then physics has reached its end.
This unavoidably brings to my mind the saying of Einstein: "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.". Turok is not so pessimistic and thinks that some deep principle has remained undiscovered. Turok's basic objection against the multiverse is that there is not the slightest shred of experimental evidence for it. In fact, I think that we can sigh with relief now: the multiverse is disappearing into the sands of time, and can be seen as the last desperate attempt to establish superstring theory as a respectable physical theory. The emphasis is now on the applications of the AdS/CFT correspondence to other branches of physics such as condensed matter physics and quantum computation. The attempt is to reduce the complex strongly interacting dynamics of conformally invariant systems to the gravitational interaction in a higher-dimensional space-time called the bulk. Unfortunately this approach involves the effective field theory thinking, which led to the landscape catastrophe in superstring theory. Einstein's theory is assumed to describe low energy gravitation in AdS so that higher-dimensional black holes emerge and their interiors can be populated with all kinds of weird entities. For the TGD view of the situation see this. One can of course criticize Turok's view about the simplicity of the Universe. What we know is that visible matter becomes simple both at short and long scales: we actually know very little about dark matter. Turok also mentions that in our scales - roughly the geometric mean of the shortest and longest scales of the known Universe - resides biology, which is extremely complex. In the TGD Universe this would be due to the fact that dark matter is the boss for living systems and the complexity of visible matter reflects that of dark matter.
It could be that the dark matter levels corresponding to increasing values of heff/h get increasingly complex in long scales. We just do not see it!

2. Why is the cosmology so homogeneous and isotropic?

Turok sees as one of the deepest problems of cosmology the extreme homogeneity and isotropy of the cosmic microwave background, implying that two regions with no information exchange have been at the same temperature in the remote past. Classically this is extremely implausible and in the GRT framework there is no obvious reason for this. The inflationary scenario is one possible mechanism explaining this: the observed Universe would have been a very small region, which expanded during the inflationary period so that all temperature gradients were smoothed out. This paradigm has several shortcomings and there exists no generally accepted variant of this scenario. In the TGD framework one can also consider several explanations.

1. One of my original arguments for H=M4× CP2 was that the imbeddability of the cosmology to H forces long range correlations (see this, this and this). The theory is Lorentz invariant and standard cosmologies can be imbedded inside the future light-cone with its boundary representing the Big Bang. Only Robertson-Walker cosmologies with sub-critical or critical mass are allowed by TGD. Sub-critical ones are Lorentz invariant and therefore a very natural option. One would have automatically constant temperature. Could the enormous reduction of degrees of freedom due to the 4-surface property force the long range correlations? Probably not. The 4-surface property is a necessary condition but very probably far from enough.

2. The primordial TGD inspired cosmology is cosmic string dominated: one has a gas of string like objects, which in the ideal case are of the form X2× Y2⊂ M4× CP2, where X2 is a minimal surface and Y2 a complex surface of CP2. The strings can be arbitrarily long unlike in GUTs.
The conventional space-time as a surface representing the graph of some map M4→ CP2 does not exist during this period. The density goes like 1/a^2, where a is the light-cone proper time, and the mass of a co-moving volume vanishes at the limit of the Big Bang, which actually is reduced to a "Silent Whisper" amplified later to the Big Bang. The cosmic string dominated period is followed by a quantum critical period analogous to the inflationary period as cosmic strings start to topologically condense at space-time sheets, becoming magnetic flux tubes with gradually thickening M4 projections. The ordinary space-time is formed: the critical cosmology is universal and uniquely fixed apart from a single parameter determining the duration of this period. After that a phase transition to the radiation dominated phase takes place and ordinary matter emerges in the decay of the magnetic energy of cosmic strings to particles - Kähler magnetic energy corresponds to the vacuum energy of the inflaton field. This period would be analogous to the inflationary period. Negative pressure would be due to the magnetic tension of the flux tubes. Also the asymptotic cosmology is string dominated since the corresponding density of energy goes like 1/a^2 as for the primordial phase, whereas for a matter dominated cosmology it goes like 1/a^3. This brings to mind the ekpyrotic phase of the cyclic cosmology.

3. This picture is perhaps over-simplified. Quite recently I proposed a lift of Kähler action to its 6-D twistorial counterpart (see this). The prediction is that a volume term with a positive coefficient representing cosmological constant emerges from the 6-D twistorial variant of Kähler action via dimensional reduction. It is associated with the S2 fiber of the M4 twistor space and Planck length characterizes the radius of S2. Volume density and magnetic energy density together could give rise to the cosmological constant behind the negative pressure term.
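The density scalings invoked here (1/a^2 for the string gas vs. 1/a^3 for matter) can be sanity-checked symbolically; a small sketch of my own, not from the original text:

```python
import sympy as sp

a = sp.symbols("a", positive=True)
rho_string = 1 / a**2   # string-dominated phase (primordial and asymptotic)
rho_matter = 1 / a**3   # matter-dominated cosmology

# "Silent Whisper": the mass of a comoving volume (~a^3) vanishes
# at the Big Bang limit a -> 0 in the string-dominated phase.
assert sp.limit(a**3 * rho_string, a, 0) == 0

# Asymptotically the string contribution dominates over matter:
assert sp.limit(rho_string / rho_matter, a, sp.oo) == sp.oo
print("density scalings consistent with the text")
```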
Note that the cosmological term for cosmic strings reduces to a similar form as that from Kähler action and, depending on the value of the cosmological constant, only either of them or both are important. TGD suggests strongly that the cosmological constant Λ has a spectrum determined by quantum criticality and is proportional to the inverse of the p-adic length scale squared, so that both terms could be important. If the cosmological constant term is always small, the original explanation for the negative pressure applies. The vision about the quantum criticality of the TGD Universe would suggest that the two terms have similar sizes. For cosmic strings the cosmological term does not give a pressure term since it comes from the string world sheet alone. Thus for cosmic strings Kähler action would define the negative pressure and for space-time sheets both would. If the contributions could have opposite signs, the acceleration of cosmic expansion would be determined by competing control variables. To my best understanding the signs of the two contributions are the same (my best understanding does not however guarantee much since I am a numerical idiot and blunders with numerical factors and signs are my specialities). If the signs are opposite, one cannot avoid the question whether the quantum critical Universe could be able to control its expansion by cosmic homeostasis by varying the two cosmological constants. Otherwise the control of the difference of accelerations for the expansion rates of cosmic strings and space-time sheets would be possible.

4. A third argument explaining the mysterious temperature correlations relies on the hierarchy of Planck constants heff/h=n labelling the levels of the dark matter hierarchy with quantum scales proportional to n. Arbitrarily large scales would be present and their presence would imply a hierarchy of arbitrarily large space-time sheets with size characterized by n. The dynamics in a given scale would be homogeneous and isotropic below the scale of this space-time sheet.
One could see the correlations of the cosmic temperature as a signature of quantum coherence in cosmological scales involving also entanglement in cosmic scales (see this). Kähler magnetic flux tubes carrying monopole flux, requiring no currents to generate the magnetic fields inside them, would serve as correlates for the entanglement just as wormholes serve as a correlate of entanglement in the ER-EPR correspondence. This would conform with the fact that the analog of the inflationary phase preserves the flux tube network formed from cosmic strings. It would also explain the mysterious existence of magnetic fields in all scales.

3. The TGD analog of cyclic cosmology

Turok is a proponent of cyclic cosmology combining so-called ekpyrotic cosmology and inflationary cosmology. This cosmology offers a further solution candidate for the homogeneity/isotropy mystery. The contracting phase would differ from the expanding phase in that contraction would be much slower than expansion, and only during the last stage would there be a symmetry between the two half-periods. In concrete realizations an inflaton-type field is introduced. Also scenarios in which branes near each other collide cyclically and generate in this manner a big crunch followed by a big bang are considered. I find it difficult to see this picture as a solution of the homogeneity/isotropy problem. I however realized that it is possible to imagine a TGD analog of cyclic cosmology in Zero Energy Ontology (ZEO). There is no need to assume that this picture solves the homogeneity/isotropy problem: the cyclicity corresponds to a kind of biological cyclicity or rather a sequence of re-incarnations.

3.1 A small dose of TGD inspired theory of consciousness

1. In ZEO the basic geometric object is the causal diamond (CD), whose M4 projection represents an expanding spherical light-front, which at some moment begins to contract - this defines an intersection of future and past directed light-cones.
Zero energy states are pairs of positive and negative energy states at the opposite light-like boundaries of CD such that all conserved quantum numbers are opposite. This makes it possible to satisfy conservation laws.

2. CD is identified as the 4-D perceptive field of a conscious entity in the sense that the contents of conscious experiences come from CD. Does CD represent only the perceptive field of an observer getting a sensory representation of a much larger space-time surface continuing beyond the boundaries of CD, or does the geometry of CD imply a cosmology, which is a Big Bang followed by a Big Crunch? Or do the two boundaries of CD define also space-time boundaries so that space-time would end there? The conscious entity defined by CD cannot tell whether this is the case. Could a larger CD containing it perhaps answer the question? No! For the larger CD, the CD could represent the analog of a quantum fluctuation so that the space-time of CD would not extend beyond CD.

3. The geometry of CD brings to mind a Big Bang - Big Crunch cosmology. Could this be forced by boundary conditions at the future and past boundaries of CD meeting along the large 3-sphere, forcing a Big Bang at both ends of CD but in opposite directions? If CD is an independent geometric entity, one could see it as a Big Bang followed by a Big Crunch in some sense but not as a return back to the primordial state: this would be boring and in conflict with the TGD view about cosmic evolution.

4. To proceed, some TGD inspired theory of consciousness is needed. In ZEO quantum measurement theory extends to a theory of consciousness. State function reductions can occur to either boundary of CD and Negentropy Maximization Principle (NMP) dictates the dynamics of consciousness (see this).
The Zeno effect generalizes to a sequence of state function reductions leaving the second boundary of CD and the members of zero energy states at it unchanged but changing the states at the opposite boundary and also the location of CD, so that the distance between the tips of CD is increasing reduction by reduction. This gives rise to the experienced flow of subjective time and its correlation with the flow of geometric time, identified as the increase of this distance. The first reduction to the opposite boundary is eventually forced to occur by NMP and corresponds to a state function reduction in the usual sense. It means the death of the conscious entity and its re-incarnation at the opposite boundary, which begins to shift towards the opposite time direction reduction by reduction. Therefore the distance between the tips of CD continues to increase. The two lives of self are lived in opposite time directions.

5. Could one test this picture? By fractality CDs appear in all scales and are relevant also for living matter and consciousness. For instance, mental images should have CDs as correlates in some scale. Can one identify some analog of the Big Bang-Big Crunch cosmology for them? I have indeed considered what time reversal for mental images could mean and some individuals (including me) have experienced it concretely in some altered states of consciousness.

3.2 Does cyclic cosmology correspond to a sequence of re-incarnations for a cosmic organism?

The question that I am ready to pose is easy for a smart reader to guess. Could this sequence of life cycles of self with opposite directions of time serve as the TGD analog of cyclic cosmology?

1. If so, the Universe could be seen as a gigantic organism dying and re-incarnating, and quantum coherence even in the largest scales would explain the long range correlations of temperature in terms of entanglement - in fact negentropic entanglement, which is a basic new element of the TGD based generalization of quantum theory.

2.
A Big Crunch back to the primordial cosmology, destroying all achievements of evolution, should not occur at any level of the dark matter hierarchy. Rather, the process leading to biological death would involve the deaths of various subsystems with increasing scale and eventually the death in the largest scale involved.

3. The system would continue its expansion and evolution from the state that it reached during the previous cycle but in the opposite time direction. What would remain from the previous life would be the negentropic entanglement at the evolving boundary fixed by the first reduction to the opposite boundary, and this conscious information would correspond to the static permanent part of self for the new conscious entity, whose sensory input would come from the opposite boundary of CD after the re-incarnation. The birth of an organism should be analogous to the Big Bang - certainly the growth of an organism is something like this in a metaphorical sense. Is the decay of an organism analogous to the Big Crunch?

4. What is remarkable is that both the primordial and the asymptotic cosmology are dominated by string like objects, only their scales are different. Therefore the primordial cosmology would be dominated by thickened cosmic strings also for the reversed cycle. Even more, the accelerated expansion could rip the space-time into pieces - this is one of the crazy looking predictions of accelerated expansion - and one would have free albeit thickened cosmic strings, and in a rough enough resolution they would look like ideal cosmic strings. The cycling would not be a trivial and boring (dare I say stupid) repeated return to the same primordial state, which would be in conflict with NMP implying endless evolution. It would involve a scaling up at each step. The evolution would be like a repeated zooming up of the Mandelbrot fractal! Breathing is a good metaphor for this endless process of re-creation: God is breathing! Or Gods, since there is a fractal hierarchy of CDs within CDs.

5.
There is however a trivial problem that I did not notice at first. The light-cone proper times a± assignable to the two light-cones M4± defining CD are not the same. If the future directed light-cone M4+ corresponds to a+^2 = t^2 - rM^2 with the lower tip of CD at (t,rM) = (0,0), the light-cone proper time associated with M4- corresponds to a-^2 = (t-T)^2 - rM^2 = a+^2 - 2tT + T^2 = a+^2 - 2(a+^2 + rM^2)^{1/2}T + T^2. The energy density would behave near the upper tip like ρ ∝ 1/a+^2 rather than ρ ∝ 1/a-^2. Does this require that a Big Crunch occurs and leads to the phase where one has a gas of cosmic strings in M4-? This does not seem plausible. Rather, the gas of presumably thickened cosmic strings in M4- is generated in the state function reduction to the opposite boundary. This state function reduction would be very much like the end of the world and the creation of a new Universe.

To sum up, a single observation - the constancy of the cosmic temperature - gives strong support for the extremely non-trivial and apparently completely crazy conclusion that quantum coherence is present in cosmological scales and also that the Universe is a living organism. This should prove how incredibly important the interaction between experiment and theory is. For details see the chapter TGD and Cosmology of "Physics in Many-Sheeted Space-time" or the article Cyclic Cosmology from TGD Perspective.

Monday, March 14, 2016

Holography and Quantum Error Correcting Codes: TGD View

Thursday, March 10, 2016

New evidence for second generation weak bosons predicted by TGD

Already earlier, evidence for the breaking of lepton universality has been found in the decays of the beauty meson B consisting of b quark and d quark. The breaking of lepton universality means that lepton generations (electron, muon, tau and the corresponding neutrinos) are not identical with respect to weak interactions.
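The little identity relating the two light-cone proper times can be machine-checked. A sympy sketch of my own, assuming t > 0 so that (a+^2 + rM^2)^{1/2} = t:

```python
import sympy as sp

t, rM, T = sp.symbols("t r_M T", positive=True)

a_plus_sq = t**2 - rM**2         # proper time squared for M4+, tip at the origin
a_minus_sq = (t - T)**2 - rM**2  # proper time squared for M4-, tip shifted by T

# Claimed: a-^2 = a+^2 - 2 (a+^2 + rM^2)^{1/2} T + T^2
claimed = a_plus_sq - 2 * sp.sqrt(a_plus_sq + rM**2) * T + T**2
assert sp.simplify(a_minus_sq - claimed) == 0
print("identity verified")
```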
Indeed, there were indications that the decays do not occur with the same rate to electron, muon, and tau pairs (there are small corrections breaking the universality due to different lepton masses). A possible reason is that there exist new weak bosons whose couplings are not universal. What is known as the Z' boson would make itself visible in the decays of B. Now additional evidence for the existence of this kind of weak boson has emerged. If I understood correctly, the average angle between the decay products of the B meson is not quite what it is predicted to be. This is interpreted as an indication that a Z' type boson appears as an intermediate state in the decay. What says TGD? TGD predicts three gauge boson families, and the new boson families have couplings to fermions which are not universal (see the earlier posting). There is indeed evidence for the Higgs of the second family as a bump with a mass predicted to be 32 times higher than that of the ordinary Higgs, which makes rather precisely 4 TeV. This could explain the breaking of universality in the decays of the B meson. In TGD, Z' would correspond to the second generation Z boson. p-Adic length scale hypothesis plus the assumption that the new Z boson corresponds to the Gaussian Mersenne M_G,79 = (1+i)^79 - 1 predicts that its mass is by a factor 32 higher than the mass of the ordinary Z boson, making 2.9 TeV for the 91 GeV mass of Z. If I remember correctly, there are indications for a bump at this mass value. A leptoquark made of a right-handed neutrino and a quark is a less plausible explanation but is predicted by TGD as a squark. The breaking of the universality is characterized by the charge matrices of weak bosons for the dynamical SU(3) assignable with family replication. The first generation corresponds to the unit matrix whereas higher generation charge matrices can be expressed as orthogonal combinations of the isospin and hypercharge matrices I3 and Y. I3 distinguishes between tau and the lower generations (third experiment) but not between the lowest two generations.
There is however evidence for this (the first two experiments above). Therefore a mixing of I3 and Y should occur. Does the breaking of universality occur also for color interactions? If so, the predicted M89 and M_G,79 hadron physics would break universality in the sense that the couplings of their gluons to quark generations would not be universal. This also forces one to consider the possibility that there are no new quark families associated with these hadron physics but only new gluons with couplings breaking universality. This looks somewhat boring at first. On the other hand, there exists evidence for bumps at the masses of M89 hadron physics predicted by scaling to be 512 times heavier than the mesons of the ordinary M107 hadron physics (see the earlier posting). According to the prevailing wisdom coming from QCD, the meson and hadron masses are however known to be mostly due to gluonic energy, and current quarks give only a minor contribution. In TGD one would say that the color magnetic body gives most of the meson mass. Thus the hypothesis would make sense. One can also talk about constituent quark masses if one includes the mass of the corresponding portion of the color magnetic body in the quark mass. These masses are much higher than current quark masses and it would make sense to speak about constituent quarks for M89 hadron physics. For background see the chapter New particle physics predicted by TGD: part I. Thursday, March 03, 2016 Twistor googly problem transforms from a curse to blessing in TGD framework There was a nice story with the title "Michael Atiyah's Imaginative State of Mind" about the mathematician Michael Atiyah in Quanta Magazine. The works of Atiyah have contributed a lot to the development of theoretical physics. What was pleasant to hear was that Atiyah belongs to those scientists who do not care what others think. As he tells, he can afford this since he has got all possible prizes.
This is consoling and encouraging even for those who have not cared what others think and for this reason have not earned any prizes. Nor even a single coin from what they have been busily doing their whole lifetime! In the beginning of the story the "twistor googly problem" was mentioned. I had to refresh my understanding of the googly problem. In the twistorial description the modes of massless fields (rather than entire massless fields) in space-time are lifted to modes in its 6-D twistor space and dynamics reduces to holomorphy. The analog of this takes place also in string models by conformal invariance and in TGD by its extension. One however encounters the googly problem: one can have a twistorial description for circular polarizations with well-defined helicity +1/-1 but not for general polarization states - say linear polarizations, which are superpositions of circular polarizations. This reflects itself in the construction of twistorial amplitudes in the twistor Grassmann program for gauge fields, but rather implicitly: the amplitudes are constructed only for fixed helicity states of the scattered particles. For gravitons the situation gets really bad because of non-linearity. Mathematically the most elegant solution would be to have only +1 or -1 helicity but not their superpositions, implying very strong parity breaking and chirality selection. Parity breaking occurs in physics but is very small, and linear polarizations are certainly possible! The discussion of Penrose with Atiyah has inspired a possible solution to the problem known as "palatial twistor theory". Unfortunately, the article is behind a paywall too high for me so that I cannot say anything about it. What happens to the googly problem in the TGD framework? There is twistorialization at space-time level and imbedding space level. 1. One replaces space-time with a 4-surface in H=M4×CP2 and lifts this 4-surface to its 6-D twistor space represented as a 6-surface in the 12-D twistor space T(H)=T(M4)×T(CP2).
The twistor space has Kähler structure only for M4 and CP2 so that TGD is unique. This Kähler structure is needed to lift the dynamics of Kähler action to the twistor context, and the lift leads to a dramatic increase in the understanding of TGD: in particular, Planck length and cosmological constant with correct sign emerge automatically as dimensional constants besides CP2 size. 2. Twistorialization at imbedding space level means that spinor modes in H representing ground states of super-symplectic representations are lifted to spinor modes in T(H). M4 chirality is in TGD framework replaced with H-chirality, and the two chiralities correspond to quarks and leptons. But one cannot superpose quarks and leptons! The "googly problem" is just what the superselection rule preventing superposition of quarks and leptons requires in TGD! One can look at this in more detail. 1. Chiral invariance makes it possible for the modes of massless fields to have definite chirality: these modes correspond to holomorphic or antiholomorphic amplitudes in twistor space, and holomorphy (antiholomorphy is holomorphy with respect to the conjugates of complex coordinates) does not allow their superposition, so that massless bosons should have well-defined helicities - in conflict with experimental facts. The second basic problem of conformally invariant field theories and of the twistor approach relates to the fact that physical particles are massive in the 4-D sense. Masslessness in the 4-D sense also implies infrared divergences for the scattering amplitudes. A physically natural cutoff is required but would break conformal symmetry. 2. The solution of these problems is masslessness in the 8-D sense allowing particles to be massive in the 4-D sense. Fermions have a well-defined 8-D chirality - they are either quarks or leptons depending on the sign of chirality.
8-D spinors are constructible as superpositions of tensor products of M4 spinors and of CP2 spinors, both having well-defined chirality, so that the tensor product has chiralities (ε1, ε2), εi = +/-1, i = 1,2. H-chirality equals ε = ε1ε2. For quarks one has ε = 1 (a convention) and for leptons ε = -1. For quark states massless in the M4 sense one has either (ε1, ε2) = (1,1) or (ε1, ε2) = (-1,-1), and for massive states a superposition of these. For leptons one has either (ε1, ε2) = (1,-1) or (ε1, ε2) = (-1,1) in the massless case and a superposition of these in the massive case. 3. The twistorial lift to T(M4)×T(CP2) of the ground states of super-symplectic representations represented in terms of tensor products formed from H-spinor modes involves only quark and lepton type spinor modes with well-defined H-chirality. Superpositions of amplitudes in which different M4 helicities appear are possible, but M4 chirality is always paired with a completely correlating CP2 chirality to give either ε = 1 or ε = -1. One never has a superposition of different chiralities in either the M4 or the CP2 tensor factor. I see no reason forbidding this kind of mixing of holomorphicities, and this is enough to avoid the googly problem. Linear polarizations and massive states represent states with entanglement between M4 and CP2 degrees of freedom. For massless and circularly polarized states the entanglement is absent. 4. This has interesting implications for massivation. The Higgs field cannot be a scalar in the 8-D sense since this would make particles massive in the 8-D sense and the separate conservation of B and L would be lost. The theory would also contain a dimensional coupling. The TGD counterpart of the Higgs boson is actually a CP2 vector, and one can say that gauge bosons and Higgs combine to form an 8-D vector. This correctly predicts the quantum numbers of the Higgs.
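As an illustration of the bookkeeping above (my own toy sketch, not anything from the TGD source texts), one can enumerate the chirality pairs (ε1, ε2) and check that H-chirality ε = ε1ε2 separates the quark pairs from the lepton pairs, so that superposing the two M4 chiralities in 4-D massivation never mixes quarks with leptons:

```python
# Toy bookkeeping for 8-D chirality: epsilon = eps1 * eps2 (a sketch, not TGD code).
# Sign convention from the text: quarks have eps = +1, leptons eps = -1.
from itertools import product

def h_chirality(eps1, eps2):
    return eps1 * eps2

pairs = list(product([+1, -1], repeat=2))
quark_pairs  = [p for p in pairs if h_chirality(*p) == +1]  # (1, 1) and (-1, -1)
lepton_pairs = [p for p in pairs if h_chirality(*p) == -1]  # (1, -1) and (-1, 1)

# A massive 4-D state superposes both M4 chiralities within one family,
# but the H-chirality of every term in the superposition stays fixed:
assert {h_chirality(*p) for p in quark_pairs}  == {+1}
assert {h_chirality(*p) for p in lepton_pairs} == {-1}
print(quark_pairs, lepton_pairs)
```

The superselection rule is then just the statement that amplitudes never mix elements of `quark_pairs` with elements of `lepton_pairs`.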
Ordinary massivation by a constant vacuum expectation value of the vector Higgs is not an attractive idea since no covariantly constant CP2 vector field exists, so that Higgsy massivation is not promising except at the QFT limit of TGD formulated in M4. p-Adic thermodynamics gives rise to 4-D massivation but keeps particles massless in the 8-D sense. It also leads to powerful and correct predictions in terms of the p-adic length scale hypothesis. Addition: An anonymous reader gave me a link to the paper of Penrose, and this inspired further, more detailed considerations of the googly problem. 1. After the first reading I must say that I could not understand how the proposed elimination of the conjugate twistor by quantization of twistors solves the googly problem, which means that both helicities are present (twistor Z and its conjugate) in linearly polarized classical modes so that holomorphy is broken classically. 2. I am also very skeptical about quantizing either space-time coordinates or twistor space coordinates. To me quantization is natural only for linear objects like spinors. For bosonic objects one must go to a higher abstraction level and replace superpositions in space-time with superpositions in field space. The construction of the "World of Classical Worlds" (WCW) in TGD means just this. 3. One could however think that circular polarizations are fundamental and that quantal linear combinations of the states carrying circularly polarized modes give rise to linear and elliptic polarizations. Linear combination would be possible only at the level of field space (WCW in TGD), not for classical fields in space-time. If so, then the elimination of conjugate Z by quantization suggested by Penrose would work. 4. Unfortunately, Maxwell's equations allow linear polarisations classically! In order to achieve classical-quantum consistency, one should modify classical Maxwell's equations somehow so that linear polarizations are not possible. The googly problem is still there! What about TGD? 1.
Massless extremals representing massless modes are very "quantal": they cannot be superposed classically unless both the momentum and polarisation directions for them (they can depend on the space-time point) are exactly parallel. An optimist would guess that the local classical polarisations are circular. No, they are linear! Superposition of classical linear polarizations at the level of WCW can give rise to local linear but not local circular polarization! Something more is needed. 2. The only sensible conclusion is that only gauge boson quanta (not classical modes), represented as pairs of fundamental fermion and antifermion in the TGD framework, can have circular polarization! And indeed, massless bosons - in fact, all elementary particles - are constructed from fundamental fermions, and they allow only two M4, CP2 and M4×CP2 helicities/chiralities analogous to circular polarisations. B and L conservation would transform the googly problem into a superselection rule as already described. To sum up, the extreme non-linearity of Kähler action, the representability of all elementary particles in terms of fundamental fermions and antifermions, and the generalization of conserved M4 chirality to conservation of H-chirality would all be essential for solving the googly problem in the TGD framework. For background see the chapter From Principles to Diagrams or the article From Principles to Diagrams. Wednesday, March 02, 2016 Chi Energy - master gets animals to sleep In Thinking Allowed there was an interesting link from Jeff Hall to a video with the title Chi Energy - master gets animals to sleep. The video was very impressive and I recommend seeing it. Below I propose an explanation for the feats of the master. I have constructed a theory of remote mental interactions but have always said that I do not believe in them - I just take their possibility very seriously. To be honest, the only reason for this attitude is that they emerge naturally from TGD inspired theory of consciousness.
This video made me a believer. I know that the skeptic "knows" that the video is a hoax and demands a 10 sigma statistical proof that every chi master in every corner of the Universe can put animals to sleep under controlled laboratory conditions by weaving his hands. It does not matter: we can laugh together at my gullibility if this helps the skeptic to avoid despair in his intellectual isolation. We had a long discussion about the video and Ulla noticed the similarity with hypnosis: even the word "hypnosis" originally means some kind of sleep-like state. In the TGD framework hypnosis could be seen as a particular example of remote mental interactions. Simplifying: the hypnotizer would in some sense hijack some part of the brain of the subject by quantum entangling with it so that it becomes part of the hypnotizer and obeys his commands. Note that the social explanation of hypnosis as the desire of the subject to please the hypnotizer does not explain what happens to the animals. In the discussion consciousness was of course mentioned and consciousness was compared to a field. As a philosophically oriented physicist I get worried when one says "consciousness is a field" or something like that. I would prefer to speak about field patterns as correlates for contents of consciousness. To me consciousness itself is an independent form of existence not reducing to a property of a physical system as the materialist believes. This looks like pedantry but becomes absolutely crucial if one really wants to understand consciousness. Real progress in science is mostly getting rid of sloppy language implying sloppy thinking. I have explained the basic ideas of TGD inspired theory of consciousness (call it TTC for short) so many times, and I am afraid that most readers have not got the message. I think that independently rediscovering TTC is the only manner to realize what I am trying to say. Therefore only a few paragraphs. One needs a new ontology - a vision about what exists.
This ontology is neither materialistic nor dualistic, and in it consciousness is not a property of a physical state, as "-ness" would suggest, but resides in the nowhere-nowhen-land between two quantum states, themselves replaced with analogs of quantum evolutions of the Schrödinger equation. I call the new ontology Zero Energy Ontology (ZEO), and it leads to a new view about quantum measurement theory and state function reduction, giving a theory of consciousness as a by-product by transforming the observer from an outsider to the Universe into a part of quantum physics. A conscious entity is the outcome of the Zeno effect - a sequence of state function reductions which would not change the state at all in standard ontology but gives rise to the experienced flow of time in ZEO. A lot of unexpected predictions follow. I mention only the possibility of exotic unexpected phenomena such as time reversed consciousness, the re-incarnation of a conscious entity at a different time after biological death, and the predicted hierarchy of conscious entities with mental images identifiable as sub-selves - conscious entities. Also a detailed view about quantum biology and about remote mental interactions emerges. Quantum biology involves a generalization of both classical physics and quantum physics. 1. Classical physics is generalized by replacing space-time with space-time surfaces, bringing in notions like many-sheeted space-time, magnetic flux quanta/tubes, field body and topological light rays essential for understanding living matter. The magnetic body (MB) becomes what might be called an intentional agent. Our MB is the "real us" using our biological body (BB) as a motor instrument and sensory receptor.
EEG and its scaled variants mediate sensory information from neuronal/cell membranes to the parts of the magnetic body having an onion-like structure, and control commands from MB to genome initiating gene expressions and possibly other hitherto unknown genome related functions such as topological quantum computation and communications with dark photons which can decay to bio-photons. Magnetic flux tubes accompany and are space-time correlates of entanglement: note that also superstringers have ended up with this idea but talk about wormholes instead of flux tubes. Concerning remote mental interactions, the crucial difference from Maxwell's linear and relatively simple theory is that flux tubes make possible precisely targeted communications such that the signal does not weaken with distance. This is like replacing a radio station with something sending laser signals along a cable: replacing mass communication like radio broadcast with email. The signals - I call them topological light rays - are analogous to laser light beams travelling along flux tubes: also their existence distinguishes TGD from Maxwell's theory, where light signals travel in all directions and weaken like 1/r^2. 2. The generalization of quantum physics involves the hierarchy of Planck constants coming as multiples of the ordinary Planck constant and identified in terms of dark matter, which becomes a key player in living systems. Scaling of Planck constant scales up quantum lengths and gives rise to macroscopic quantum coherence, which is the key property of living matter. p-Adic physics and the fusion of real physics (correlates of sensory experience) and various p-adic physics (correlates of cognition and imagination) is an essential element of the theory too. Consider now what remote mental interactions might be. 1. Attention is obviously an essential element. This master attends intensively. Magnetic flux tubes are correlates for attention.
When I attend to something, the flux tubes connecting some part of me to this something are formed. This something could be a mental image perhaps localizable to my brain or an object of the external world - say my cat. Or the animals in the amazing video, which motivated the writing of this posting. Magnetic flux tubes are like tentacles studying the environment, and when they find a tentacle of another BB, reconnection to a bridge connecting the biological bodies can happen if the magnetic field strengths are nearly the same. This implies that the cyclotron frequencies are the same, so that the reconnection involves resonance. This is a good reason to identify the prerequisites/correlates for remote mental interactions as magnetic flux tubes, which are TGD counterparts of Maxwellian magnetic fields but differ from them since they are topologically quantized. 2. Remote mental interactions are not anything exotic in this world view: the communications from BB and the control of my BB by my MB rely on remote mental interactions. What we are used to calling remote mental interactions is the same phenomenon except that the target is not my BB but something else: say a patient in remote healing or a computer in experiments testing whether intention can affect a random number generator. What might happen in the video? 1. What could happen as the master in the video weaves his hands? The same as in hypnosis, which is also a remote mental interaction. The magnetic flux tubes of a part of the hypnotizer's MB reconnect with those of a part of the subject's MB, fusing the two conscious entities into a single one with the chi master serving as the boss for the unit formed in this manner. Both supra currents and analogs of laser light signals can proceed along the bridges thus formed. This is the same effect as the fusion of mental images - subselves - producing stereo vision. Fusion can occur also for mental images in different brains: our consciousness is not so private as we think - be cautious with your thoughts;-).
Your brain children are not always only your brain children! 2. What makes a fellow who just weaves his hands "superhuman" - as the video says? How can the movement of his hands have such a magic effect? It cannot. MB acting as an intentional agent is needed. The skills of the master in using his MB give him his magic looking powers - he is a master in magnetic gymnastics:-). Yoga trains your BB, meditation trains your MB. Using the tentacles emanating from his hands the master can get a contact even to the MBs of members of different species, make them part of his own MB, and give commands to them. As the master weaves his hands he helps the flux tubes to form reconnections with the MBs of the subject animals. I wonder whether the master can "see" the flux tubes of foreign magnetic bodies (not necessarily consciously at his level of self hierarchy). This would make his task much easier.
Linear and nonlinear waves - Scholarpedia. Graham W Griffiths and William E. Schiesser (2009), Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308 revision #154041. Curator: Graham W Griffiths. The study of waves can be traced back to antiquity where philosophers, such as Pythagoras (c. 560-480 BC), studied the relation of pitch and length of string in musical instruments. However, it was not until the work of Giovanni Benedetti (1530-90), Isaac Beeckman (1588-1637) and Galileo (1564-1642) that the relationship between pitch and frequency was discovered. This started the science of acoustics, a term coined by Joseph Sauveur (1653-1716) who showed that strings can vibrate simultaneously at a fundamental frequency and at integral multiples that he called harmonics. Isaac Newton (1642-1727) was the first to calculate the speed of sound in his Principia. However, he assumed isothermal conditions so his value was too low compared with measured values. This discrepancy was resolved by Laplace (1749-1827) when he included adiabatic heating and cooling effects. The first analytical solution for a vibrating string was given by Brook Taylor (1685-1731). After this, advances were made by Daniel Bernoulli (1700-82), Leonhard Euler (1707-83) and Jean d'Alembert (1717-83) who found the first solution to the linear wave equation, see section (The linear wave equation). Whilst others had shown that a wave can be represented as a sum of simple harmonic oscillations, it was Joseph Fourier (1768-1830) who conjectured that arbitrary functions can be represented by the superposition of an infinite sum of sines and cosines - now known as the Fourier series.
However, whilst his conjecture was controversial and not widely accepted at the time, Dirichlet subsequently provided a proof, in 1828, that all functions satisfying Dirichlet's conditions (i.e. non-pathological piecewise continuous) could be represented by a convergent Fourier series. Finally, the subject of classical acoustics was laid down and presented as a coherent whole by John William Strutt (Lord Rayleigh, 1842-1919) in his treatise Theory of Sound. The science of modern acoustics has now moved into such diverse areas as sonar, auditoria, electronic amplifiers, etc. The study of hydrostatics and hydrodynamics was being pursued in parallel with the study of acoustics. Everyone is familiar with Archimedes' (c. 287-212 BC) eureka moment; however he also discovered many principles of hydrostatics and can be considered to be the father of this subject. The theory of fluids in motion began in the 17th century with the help of practical experiments of flow from reservoirs and aqueducts, most notably by Galileo's student Benedetto Castelli. Newton also made contributions in the Principia with regard to resistance to motion, and showed that the minimum cross-section of a stream issuing from a hole in a reservoir is reached just outside the wall (the vena contracta). Rapid developments using advanced calculus methods by Siméon-Denis Poisson (1781-1840), Claude Louis Marie Henri Navier (1785-1836), Augustin Louis Cauchy (1789-1857), Sir George Gabriel Stokes (1819-1903), Sir George Biddell Airy (1801-92), and others established a rigorous basis for hydrodynamics, including vortices and water waves, see section (Physical wave types). This subject now goes under the name of fluid dynamics and has many branches such as multi-phase flow, turbulent flow, inviscid flow, aerodynamics, meteorology, etc.
The study of electromagnetism was again started in antiquity, but very few advances were made until a proper scientific basis was finally initiated by William Gilbert (1544-1603) in his De Magnete. However, it was only late in the 18th century that real progress was achieved when Franz Ulrich Theodor Aepinus (1724-1802), Henry Cavendish (1731-1810), Charles-Augustin de Coulomb (1736-1806) and Alessandro Volta (1745-1827) introduced the concepts of charge, capacity and potential. Additional discoveries by Hans Christian Ørsted (1777-1851), André-Marie Ampère (1775-1836) and Michael Faraday (1791-1867) found the connection between electricity and magnetism, and a full unified theory in rigorous mathematical terms was finally set out by James Clerk Maxwell (1831-79) in his Treatise on Electricity and Magnetism. It was in this work that all electromagnetic phenomena and all optical phenomena were first accounted for, including waves, see section (Electromagnetic wave). It also included the first theoretical prediction for the speed of light. At the end of the 19th century, when some erroneously considered physics to be very nearly complete, new physical phenomena began to be observed that could not be explained. These demanded a whole new set of theories that ultimately led to the discovery of general relativity and quantum mechanics; which, even now in the 21st century, are still yielding exciting new discoveries. However, as this article is primarily concerned with classical wave phenomena, we will not pursue these topics further. Historic data source: Dictionary of The History of Science [Byn-84].
A wave is a time evolution phenomenon that we generally model mathematically using partial differential equations (PDEs) which have a dependent variable \(u(x,t)\) (representing the wave value), an independent variable time \(t\) and one or more independent spatial variables \(x\in\mathbb{R}^{n}\ ,\) where \(n\) is generally equal to \(1,2 \;\textrm{or}\; 3\ .\) The actual form that the wave takes is strongly dependent upon the system initial conditions, the boundary conditions on the solution domain and any system disturbances. Waves occur in most scientific and engineering disciplines, for example: fluid mechanics, optics, electromagnetism, solid mechanics, structural mechanics, quantum mechanics, etc. The waves for all these applications are described by solutions to either linear or nonlinear PDEs. We do not focus here on methods of solution for each type of wave equation, but rather we concentrate on a small selection of relevant topics. However, first, it is legitimate to ask: what actually is a wave? This is not a straightforward question to answer. Now, whilst most people have a general notion of what a wave is, based on their everyday experience, it is not easy to formulate a definition that will satisfy everyone engaged in or interested in this wide-ranging subject. In fact, many technical works related to waves eschew a formal definition altogether and introduce the concept by a series of examples; for example, Physics of waves [Elm-69] and Hydrodynamics [Lam-93]. Nevertheless, it is useful to at least make an attempt and a selection of various definitions from normally authoritative sources is given below:
• "Speaking generally, we may say that it denotes a process in which a particular state is continually handed on without change, or with only gradual change, from one part of a medium to another" - 1911 Encyclopædia Britannica. • "a periodic motion or disturbance consisting of a series of many oscillations that propagate through a medium or space, as in the propagation of sound or light: the medium does not travel outward from the source with the wave but only vibrates as it passes" - Webster's New World College Dictionary, 4th Ed. • "... an oscillation that travels through a medium by transferring energy from one particle or point to another without causing any permanent displacement of the medium" - Encarta® World English Dictionary [Mic-07]. The variety of definitions given above, and their clearly differing degrees of clarity, confirm that 'wave' is indeed not an easy concept to define! Because this is an introductory article and the subject of linear and non-linear waves is so wide ranging, we can only include sufficient material here to provide an overview of the phenomena and related issues. Relativistic issues will not be addressed. To this end we will discuss, as proxies for the wide range of known wave phenomena, the linear wave equation and the nonlinear Korteweg-de Vries equation in some detail by way of examples. To supplement this discussion we provide brief details of other types of wave equation and their application; and, finally, we introduce a number of PDE wave solution methods and discuss some general properties of waves. Where appropriate, references are included to works that provide further detailed discussion. Physical wave types A non-exhaustive list is given below of physical wave types with examples of occurrence and references where more details may be found. • Acoustic waves - audible sound, medical applications of ultrasound, underwater sonar applications [Elm-69]. 
• Chemical waves - concentration variations of chemical species propagating in a system [Ros-88]. • Electromagnetic waves - electricity in various forms, radio waves, light waves in optic fibers, etc [Sha-75]. • Gravitational waves - The transmission of variations in a gravitational field in the form of waves, as predicted by Einstein's theory of general relativity. Undisputed verification of their existence is still awaited [Oha-94, chapter 5]. • Seismic Waves - resulting from earthquakes in the form of P-waves and S-waves, large explosions, high velocity impacts [Elm-69]. • Traffic flow waves - small local changes in velocity occurring in high density situations can result in the propagation of waves and even shocks [Lev-07]. • Water waves - some examples • Capillary waves (Ripples) - When ripples occur in water they are manifested as waves of short length, \(\lambda=2\pi/k<0.1m\ ,\) (\(k=\)wavenumber) and in which surface tension has a significant effect. We will not consider them further, but a full explanation can be found in Lightfoot [Lig-78, p221]. See also Whitham [Whi-99, p404]. • Rossby (or planetary) waves - Long period waves formed as polar air moves toward the equator whilst tropical air moves to the poles - due to variation in the Coriolis effect. As a result of differences in solar radiation received at the equator and poles, heat tends to flow from low to high latitudes, and this is assisted by these air movements [Gil-82]. • Shallow water waves - For waves where the wavelength \(\lambda\ \) (distance between two corresponding points on the wave, e.g. 
peaks) is very much greater than the water depth \(h\ ,\) the motion can be modelled by the following simplified set of coupled fluid dynamics equations, known as the shallow water equations \[\tag{1} \frac{\partial h}{\partial t}+\frac{\partial\left(hu\right)}{\partial x}=0,\qquad\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}\left(\frac{1}{2}u^{2}+gh\right)=-g\frac{\partial b}{\partial x},\]   \(b\left(x\right)\) = fluid bed topography   \(h\left(x,t\right)\) = fluid surface height above bed   \(u\left(x,t\right)\) = horizontal fluid velocity   \(g\) = acceleration due to gravity. For this situation, the celerity or speed of wave propagation can be approximated by \(c=\sqrt{gh}\ .\) For detailed discussion refer to [Joh-97]. • Ship waves - These are surface waves formed by a ship travelling in water that is deep relative to the wavelength, and for which surface tension can be ignored. The dispersion relation is given by \(\omega=\sqrt{gk}\ ,\) so the phase velocity and group velocity (see section (Group and phase velocity)) are, respectively: \[\tag{2} c_{p} = \frac{\omega}{k}=\sqrt{\frac{g}{k}}, \] \[\tag{3} c_{g} = \frac{d\omega\left(k\right)}{dk}=\frac{1}{2}c_{p}.\ \] The result is that the ship's wake is a wedge-shaped envelope of waves having a semi-angle of \(\simeq19.5\) degrees and a feathered pattern with the ship at the vertex. This shape is characteristic of such waves regardless of the size of the disturbance - from a small duckling paddling on a pond to a large ocean liner cruising across an ocean. These patterns are referred to as Kelvin Ship Waves after Lord Kelvin (William Thomson) [Joh-97]. • Tsunami waves - See section (Tsunami).
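The propagation speeds quoted above are easy to evaluate numerically. The following sketch (Python; the depth and wavelength values are illustrative assumptions, not taken from the text) computes the shallow-water celerity \(c=\sqrt{gh}\) and the deep-water phase and group velocities from \(\omega=\sqrt{gk}\ :\)

```python
import math

g = 9.81  # acceleration due to gravity (m/s^2)

def shallow_water_celerity(h):
    """Approximate wave speed c = sqrt(g*h) for wavelengths much greater than depth h."""
    return math.sqrt(g * h)

def deep_water_velocities(k):
    """Phase and group velocity for the deep-water dispersion relation omega = sqrt(g*k)."""
    c_p = math.sqrt(g / k)   # phase velocity, eq. (2)
    c_g = 0.5 * c_p          # group velocity is half the phase velocity, eq. (3)
    return c_p, c_g

# Illustrative values: a long wave in a 4000 m deep ocean travels at ~198 m/s;
# for a 100 m wavelength, k = 2*pi/100 and the group velocity is half the phase velocity.
print(shallow_water_celerity(4000.0))
print(deep_water_velocities(2 * math.pi / 100.0))
```

This illustrates why tsunamis, which satisfy the shallow-water approximation even in the open ocean, travel at aircraft-like speeds in deep water.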
Linear waves Linear waves are described by linear equations, i.e. those in which each term is at most first degree in the dependent variable and its derivatives. This means that the superposition principle applies, and linear combinations of simple solutions can be used to form more complex solutions. Thus, all the linear system analysis tools are available to the analyst, with Fourier analysis (expressing general solutions in terms of sums or integrals of well known basic solutions) being one of the most useful. The classic linear wave is discussed in section (The linear wave equation) with some further examples given in section (Linear wave equation examples). Linear waves are modelled by PDEs that are linear in the dependent variable, \(u\ ,\) and its first and higher derivatives, if they exist. The linear wave equation The following represents the classical wave equation in one dimension and describes undamped linear waves in an isotropic medium \[\tag{4} {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}}={\displaystyle \frac{\partial^{2}u}{\partial x^{2}}.}\] It is second order in \(t\) and \(x\ ,\) and therefore requires two initial condition functions (ICs) and two boundary condition functions (BCs). For example, we could specify \[\tag{5} \begin{array}{lcl} \textrm{ICs:}\quad u\left(x,t=0\right)=f\left(x\right),\quad u_{t}\left(x,t=0\right)=g\left(x\right) , \end{array}\] \[\tag{6} \begin{array}{lcl}\textrm{BCs:}\quad u\left(x=a,t\right)=u_{a},\quad u\left(x=b,t\right)=u_{b}. \end{array}\] Consequently, equations (4), (5) and (6) constitute a complete description of the PDE problem. We assume \(f\) to have a continuous second derivative (written \(f\in C^{2}\)) and \(g\) to have a continuous first derivative (\(g\in C^{1}\)). If this is the case, then \(u\) will have continuous second derivatives in \(x\) and \(t\ ,\) i.e.
(\(u\in C^{2}\)), and will be a correct solution to equation (4) with any consistent set of appropriate ICs and BCs [Stra-92]. Extending equation (4) to three dimensions, the classical wave equation becomes \[\tag{7} \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}=\nabla^{2}u,\] where \(\nabla^{2}=\nabla\cdot\nabla\) represents the Laplacian operator. Because the Laplacian is co-ordinate free, it can be applied within any co-ordinate system and for any number of dimensions. Given below are examples of wave equations in 3 dimensions for Cartesian, cylindrical and spherical co-ordinate systems \[ \begin{array}{lccl} \textrm{Cartesian co-ordinates:} & {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}} & = & {\displaystyle \frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}},}\\ \textrm{Cylindrical co-ordinates}: & {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}} & = & {\displaystyle \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)+\frac{1}{r^{2}}\frac{\partial^{2}u}{\partial\theta^{2}}+\frac{\partial^{2}u}{\partial z^{2}},}\\ \textrm{Spherical co-ordinates}: & {\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}} & = & {\displaystyle \frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial u}{\partial r}\right)+\frac{1}{r^{2}\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial u}{\partial\theta}\right)+\frac{1}{r^{2}\sin^{2}\theta}\frac{\partial^{2}u}{\partial\phi^{2}}.}\end{array}\] These equations occur, in one form or another, in numerous applications in all areas of the physical sciences; see for example section (Linear wave equation examples). The d'Alembert solution The solution to equations (4), (5) and (6) was first reported by the French mathematician Jean le Rond d'Alembert (1717-1783) in 1747 in a treatise on Vibrating Strings [Caj-61] [Far-93].
D'Alembert's remarkable solution, which used a method specific to the wave equation (based on the chain rule for differentiation), is given below \[\tag{8} u(x,t)=\frac{1}{2}\left[f(x-ct)+f(x+ct)\right]+\frac{1}{2c}\int_{x-ct}^{x+ct}g(\xi)d\xi.\] It can also be obtained by the Fourier Transform method or by the separation of variables (SOV) method, which are more general than the method used by d'Alembert [Krey-93]. The d'Alembertian \(\square=\nabla^{2}-{\displaystyle \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}}\ ,\) also known as the d'Alembert operator or wave operator, allows a succinct notation for the wave equation, i.e. \(\square u=0\ .\) It first arose in d'Alembert's work on vibrating strings and plays a useful role in modern theoretical physics. Linear wave equation examples Acoustic (sound) wave We will consider the acoustic or sound wave as a small amplitude disturbance of ambient conditions where second order effects can be ignored. We start with the Euler continuity and momentum equations \[\tag{9} \frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho v\right) = 0,\] \[\tag{10} \frac{\partial\left(\rho v\right)}{\partial t}+\nabla\cdot\left(\rho vv\right)-\rho g+\nabla p+\nabla\cdot T = 0,\]   \(T\) = stress tensor (Pa)   \(g\) = gravitational acceleration (m/s\(^{2}\))   \(p\) = pressure (Pa)   \(t\) = time (s)   \(v\) = fluid velocity (m/s)   \(\rho\) = fluid density (kg/m\(^{3}\)) We assume an inviscid dry gas situation where gravitational effects are negligible. This means that the third and fifth terms of equation (10) can be ignored. If we also assume that we can represent velocity by \(v=u_{0}+u\ ,\) where \(u_{0}\) is the ambient velocity which we set to zero and \(u\) represents a small velocity disturbance, the second term in equation (10) can be ignored (because it becomes a second order effect).
Thus, equations (9) and (10) reduce to \[\tag{11} \frac{\partial\rho}{\partial t}+\nabla\cdot\left(\rho u\right) = 0,\] \[\tag{12} \frac{\partial\left(\rho u\right)}{\partial t}+\nabla p = 0.\] Now, taking the divergence of equation (12) and the time derivative of equation (11), we obtain\[ \frac{\partial^{2}\rho}{\partial t^{2}}-\nabla^{2}p=0.\] To complete the analysis we need to apply an equation of state relating \(p\) and \(\rho\ ,\) whereupon we obtain the linear acoustic wave equation \[\tag{13} \frac{1}{c^{2}}\frac{\partial^{2}p}{\partial t^{2}}=\nabla^{2}p,\] where \[\tag{14} c^{2}=\frac{\partial p}{\partial\rho}\ .\] We now consider three cases: • The isothermal gas case\[p=\rho RT_{0}/MW\] (ideal gas law) \(\Rightarrow\left(\frac{\partial p}{\partial\rho}\right)_{T}=RT_{0}/MW\) and \(c=\sqrt{RT_{0}/MW}\ ,\) where \(T_{0}\) is the ambient temperature of the fluid, \(R\) is the ideal gas constant, \(MW\) is molecular weight and subscript \(T\) denotes constant temperature conditions. • The isentropic gas case\[p/\rho^{\gamma}=K\Rightarrow\left(\frac{\partial p}{\partial\rho}\right)_{s}=\gamma K\rho^{\gamma-1}=\gamma RT_{0}/MW\] and \(c=\sqrt{\gamma RT_{0}/MW}\ ,\) where \(\gamma\) is the isentropic or adiabatic exponent for the fluid (equal to the ratio of specific heats) and subscript \(s\) denotes constant entropy conditions. • The isothermal liquid case\[\left(\frac{\partial p}{\partial\rho}\right)_{T}=\beta/\rho\] and \(c=\sqrt{\beta/\rho},\) where \(\beta\) is bulk modulus.
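Evaluating the three expressions for \(c\) is straightforward. The following sketch (Python) uses the standard-condition property values quoted in the text for air and for distilled water:

```python
import math

# Sound speed c = sqrt(dp/drho) for the three cases discussed above.
# Property values: air at standard conditions and distilled water at 20 C.
R = 8.3145        # ideal gas constant (J/mol/K)
T0 = 293.15       # ambient temperature (K)
MW = 0.028965     # molecular weight of air (kg/mol)
gamma = 1.4       # adiabatic exponent for air
beta = 2.18e9     # bulk modulus of water (Pa)
rho = 1000.0      # density of water (kg/m^3)

c_isothermal = math.sqrt(R * T0 / MW)          # isothermal gas
c_isentropic = math.sqrt(gamma * R * T0 / MW)  # isentropic gas
c_liquid = math.sqrt(beta / rho)               # isothermal liquid

print(round(c_isothermal), round(c_isentropic), round(c_liquid))  # 290 343 1476
```

The isentropic value (343 m/s) is the one that agrees with the measured speed of sound in air, since acoustic compressions are too rapid for isothermal heat exchange.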
For atmospheric air at standard conditions we have \(p=101325\)Pa, \(T_{0}=293.15\)K, \(R=8.3145\)J/mol/K, \(\gamma=1.4\) and \(MW=0.028965\)kg/mol, which gives \[\tag{15} \textrm{isothermal:}\quad c = 290\textrm{m/s,}\] \[\tag{16} \textrm{isentropic:}\quad \; c = 343\textrm{m/s.}\] For liquid distilled water at \(20\)C we have \(\beta=2.18\times10^{9}\)Pa and \(\rho=1,000\)kg/m\(^{3},\) which gives \[\tag{17} \textrm{liquid}:\quad c=1476\textrm{m/s.}\] Waves in solids Waves in solids are more complex than acoustic waves in fluids. Here we are dealing with displacement \(\varrho\ ,\) and the resulting waves can be either longitudinal, P-waves, or shear (transverse), S-waves. Starting with Newton's second Law we arrive at the vector wave equation [Elm-69, chapter 7] \[\tag{18} \left(\lambda+\mu\right)\nabla\left(\nabla\cdot\varrho\right)+\mu\nabla^{2}\varrho=\rho\frac{\partial^{2}\varrho}{\partial t^{2}},\] from which, using the fundamental identity from vector calculus, \(\nabla\times\left(\nabla\times\varrho\right)=\nabla\left(\nabla\cdot\varrho\right)-\nabla^{2}\varrho\ ,\) we obtain \[\tag{19} \left(\lambda+2\mu\right)\nabla\left(\nabla\cdot\varrho\right)-\mu\nabla\times\left(\nabla\times\varrho\right)=\rho\frac{\partial^{2}\varrho}{\partial t^{2}}.\] Now, for irrotational waves, which vibrate only in the direction of propagation \(x\ ,\) \(\nabla\times\varrho=0\Rightarrow\nabla\left(\nabla\cdot\varrho\right)=\nabla^{2}\varrho\) and equation (19) reduces to the familiar linear wave equation \[\tag{20} \frac{1}{c^{2}}\frac{\partial^{2}\varrho}{\partial t^{2}}=\nabla^{2}\varrho,\] where \(c=\sqrt{\left(\lambda+2\mu\right)/\rho}=\sqrt{\left(K+\frac{4}{3}\mu\right)/\rho}\) is the wave speed, \(\lambda=E\upsilon/\left[\left(1+\upsilon\right)\left(1-2\upsilon\right)\right]\) is the Lamé modulus, \(\mu={\displaystyle \frac{E}{2\left(1+\upsilon\right)}}\) is the shear modulus and \(K=E/\left[3\left(1-2\upsilon\right)\right]\) is the bulk modulus of the solid material.
Here, \(E\) and \(\upsilon\) are Young's modulus and Poisson's ratio for the solid respectively. Irrotational waves are of the longitudinal type, or P-waves. For solenoidal waves, which can vibrate independently in the \(y\) and \(z\) directions but not in the direction of propagation \(x\ ,\) we have \(\nabla\cdot\varrho=0\) and equation (18) reduces to the linear wave equation where the wave speed is given by \(c=\sqrt{\mu/\rho}\) . Solenoidal waves are of the transverse type, or S-waves. For a typical mild-steel at \(20\)C with \(\rho=7,860\)kg/m\(^{3}\ ,\) \(E=210\times10^{9}\)N/m\(^{2}\) and \(\upsilon=0.29\) we find that the P-wave speed is \(5,917\)m/s and the S-wave speed is \(3,218\)m/s. For further discussion refer to [Cia-88]. Electromagnetic waves The fundamental equations of electromagnetism are the Maxwell Equations, which, in differential form and SI units, are usually written as: \[\tag{22} \nabla\cdot E = \frac{1}{\epsilon_{0}}\rho,\] \[\tag{23} \nabla\cdot B = 0,\] \[\tag{24} \nabla\times E = -\frac{\partial B}{\partial t},\] \[\tag{25} \nabla\times B = \mu_{0}J+\mu_{0}\epsilon_{0}\frac{\partial E}{\partial t},\]   \(B =\) magnetic field (T)   \(E =\) electric field (V/m)   \(J =\) current density (A/m\(^{2}\))   \(\; t =\) time (s)   \(\epsilon_{0} =\) permittivity of free space (\(8.8541878\times10^{-12}\simeq10^{-9}/36\pi\) F/m)   \(\mu_{0} =\) permeability of free space (\(4\pi\times10^{-7}\) H/m)   \(\; \rho =\) charge density (C/m\(^{3}\)) If we assume that \(J=0\) and \(\rho=0\ ,\) then on taking the curl of equation (24) and again using the fundamental identity from vector calculus, \(\nabla\times\left(\nabla\times E\right)=\nabla\left(\nabla\cdot E\right)-\nabla^{2}E\ ,\) we obtain \[\tag{26} \frac{1}{c_{0}^{2}}\frac{\partial^{2}E}{\partial t^{2}}=\nabla^{2}E.\] Similarly, taking the curl of equation (25) we obtain \[\tag{27} \frac{1}{c_{0}^{2}}\frac{\partial^{2}B}{\partial t^{2}}=\nabla^{2}B.\] Equations (26) and (27) are the linear
electric and magnetic wave equations respectively, where \(c_{0}=1/\sqrt{\mu_{0}\epsilon_{0}}\simeq3\times10^{8}\) m/s, the speed of light in a vacuum. They take the familiar form of linear wave equation (4). For further discussion refer to [Sha-75]. Nonlinear waves Nonlinear waves are described by nonlinear equations, and therefore the superposition principle does not generally apply. This means that nonlinear wave equations are more difficult to analyze mathematically and that no general analytical method for their solution exists. Thus, unfortunately, each particular wave equation has to be treated individually. An example of solving the Korteweg-de Vries equation by direct integration is given below. Some advanced methods that have been used successfully to obtain closed-form solutions are listed in section (Closed form PDE solution methods), and example solutions to well known evolution equations are given in section (Nonlinear wave equation solutions). Closed form PDE solution methods There are no general methods guaranteed to find closed form solutions to non-linear PDEs. Nevertheless, some problems can yield to a trial-and-error approach. This hit-and-miss method seeks to deduce candidate solutions by looking for clues from the equation form, and then systematically investigating whether or not they satisfy the particular PDE. If the form is close to one with an already known solution, this approach may yield useful results. However, success is problematic and relies on the analyst having a keen insight into the problem. We list below, in alphabetical order, a non-exhaustive selection of advanced solution methods that can assist in determining closed form solutions to nonlinear wave equations. We will not discuss these methods further and refer the reader to the references given for details. All these methods are greatly enhanced by use of a symbolic computer program such as: Maple V, Mathematica, Macsyma, etc.
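Whatever method yields a candidate closed-form solution, it is prudent to verify that it satisfies the original PDE, either symbolically or numerically. The sketch below (Python) performs such a residual check; it assumes the travelling-wave solution \(u=2ak\left[1-\tanh k\left(x-Vt\right)\right]\) of the Burgers equation \(u_{t}+uu_{x}-au_{xx}=0\ ,\) with the wave speed taken as \(V=2ak\) (this value of \(V\) is an assumption required for the candidate to satisfy the equation):

```python
import math

# Numerical residual check of a candidate travelling-wave solution of the
# Burgers equation u_t + u u_x - a u_xx = 0. The candidate is
# u = 2ak[1 - tanh(k(x - Vt))], with assumed wave speed V = 2ak.
a, k = 1.0, 1.0
V = 2.0 * a * k

def u(x, t):
    return 2.0 * a * k * (1.0 - math.tanh(k * (x - V * t)))

def residual(x, t, h=1e-4):
    """Central finite-difference estimate of u_t + u u_x - a u_xx at (x, t)."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2.0 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2.0 * h)
    u_xx = (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / h**2
    return u_t + u(x, t) * u_x - a * u_xx

# The residual vanishes to within truncation/round-off error.
print(max(abs(residual(x, 0.5)) for x in (-1.0, 0.0, 1.5)))
```

Perturbing the assumed speed (e.g. setting \(V=1.9ak\)) makes the residual clearly non-zero, a useful sanity check that the test is not passing trivially.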
• Bäcklund transformation - A method used to find solutions to a non-linear partial differential equation from either a known solution to the same equation or from a solution to another equation. This can facilitate finding more complex solutions from a simple solution, e.g. multi-soliton solutions from a single-soliton solution [Abl-91],[Inf-00],[Dra-89]. • Generalized separation of variables method - For simple cases this method involves searching for exact solutions of the multiplicative separable form \( u\left(x,t\right)=\varphi\left(x\right)\psi\left(t\right)\) or of the additive separable form \(u\left(x,t\right)=\varphi\left(x\right)+\psi\left(t\right)\ ,\) where \(\varphi\left(x\right)\) and \(\psi\left(t\right)\) are functions to be found. The chosen form is substituted into the original equation and, after performing some algebraic operations, two expressions are obtained that are each deemed equal to a constant \(K\ ,\) the separation constant. Each expression is then solved independently and then combined additively or multiplicatively as appropriate. Initial conditions and boundary conditions are then applied to give a particular solution to the original equation. For more complex cases, special solution forms such as \(u\left(x,t\right)=\varphi\left(x\right)\psi\left(t\right)+\chi\left(x\right)\) can be sought - refer to [Pol-04, pp. 698-712], [Gal-06], and [Pol-07, pp. 681-696] for a detailed discussion.
• Differential constraints method - This method seeks particular solutions of equations of the form \(F\left(x,y,u,{\displaystyle \frac{\partial u}{\partial x},}{\displaystyle \frac{\partial u}{\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial x^{2}}},{\displaystyle \frac{\partial^{2}u}{\partial x\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial y^{2}},\cdots}\right)=0\) by supplementing them with an additional differential constraint(s) of the form \(G\left(x,y,u,{\displaystyle \frac{\partial u}{\partial x},}{\displaystyle \frac{\partial u}{\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial x^{2}}},{\displaystyle \frac{\partial^{2}u}{\partial x\partial y}},{\displaystyle \frac{\partial^{2}u}{\partial y^{2}},\cdots}\right)=0\ .\) The exact form of the differential constraint is determined from auxiliary problem conditions, usually based on physical insight. Compatibility analysis is then performed, for example by differentiating \(F\) and \(G\) (possibly several times), which enables an ordinary differential equation(s) to be constructed that can be solved. The resulting ODE is the compatibility condition for \(F\) and \(G\) and its solution can be used to obtain a solution to the original equation - refer to [Pol-04, pp. 747-758] for a detailed discussion. • Group analysis methods (Lie group methods) - These methods seek to identify symmetries of an equation, which permit us to discover: (i) transformations under which the equation is invariant, (ii) new variables in which the structure of the equation is simplified.
For an \((n+1)\)-dimensional Euclidean space, the set of transformations \(\mathrm{T}_{\epsilon}=\left\{ \begin{array}{rc} \bar{x_{i}}=\varphi_i\left(x,u,\epsilon\right), & \left.\bar{x_{i}}\right|_{\epsilon=0}=x_{i}\\ \bar{u}=\psi\left(x,u,\epsilon\right), & \left.\bar{u}\right|_{\epsilon=0}=u\end{array}\right.\ ,\) where \(\varphi_{i}\) and \(\psi\) are smooth functions of their arguments and \(\epsilon\) is a real parameter, is called a one-parameter continuous point Lie group of transformations, \(G\ ,\) if for all \(\epsilon_{1}\) and \(\epsilon_{2}\) we have \(T_{\epsilon_{1}}\circ T_{\epsilon_{2}}=T_{\epsilon_{1}+\epsilon_{2}}\) - refer to [Ibr-94] and [Pol-04, pp. 735-743] for a detailed discussion. • Hirota's bilinear method - This method can be used to construct periodic and soliton wave solutions to nonlinear PDEs. It seeks a solution of the form \(u=-2\left(\log f\right)_{xx}\) by introducing the bilinear operator \(D_{t}^{m}D_{x}^{n}\left(a\cdot b\right)=\left.\left({\displaystyle \frac{\partial}{\partial t}-\frac{\partial}{\partial t^{\prime}}}\right)^{m}\left({\displaystyle \frac{\partial}{\partial x}-\frac{\partial}{\partial x{}^{\prime}}}\right)^{n}a\left(x,t\right)b\left(x^{\prime},t^{\prime}\right)\right|_{\begin{array}{c} x^{\prime}=x\\ t^{\prime}=t\end{array}}\) for non-negative integers \(m\) and \(n\) [Joh-97],[Dai-06]. • Hodograph transformation method - This method belongs to the class of point transformations and involves the interchange of dependent and independent variables, i.e. \(\tau=t\ ,\) \(\xi=u\left(x,t\right)\ ,\) \(\eta\left(\xi,\tau\right)=x\ .\) This transformation can, for certain applications, result in a simpler (possibly an exact linearization) problem for which solutions can be found [Cla-89], [Pol-04, pp. 686-687]. • Inverse scattering transform (IST) method - The phenomenon of scattering refers to the evolution of a wave subject to certain conditions, such as boundary and/or initial conditions. 
If data relating to the scattered wave are known, then it may be possible to determine from these data the underlying scattering potential. The problem of reconstructing the potential from the scattering data is referred to as the inverse scattering problem. The IST is a nonlinear analog of the Fourier transform used for solving linear problems. This useful property allows certain nonlinear problems to be treated by what are essentially linear methods. The IST method has been used for solving many types of evolution equation [Abl-91], [Inf-00], [Kar-98], [Whi-99]. • Lax pairs - A Lax pair consists of the Lax operator \(L\) (which is self-adjoint and may depend upon \(x,\, u_{x},\, u_{xx},\cdots\ ,\) but not explicitly upon \(t\)) and the operator \(A\) that together represent a given partial differential equation such that \(L_{t}=[A,L]=\left(AL-LA\right)\ .\) Note: \(\left(AL-LA\right)\) represents the commutator of the operators \(A\) and \(L\ .\) Operator \(A\) is required to have enough freedom in any unknown parameters or functions to enable the operator \(L_{t}=[A,L]\) to be chosen so that it is of degree zero, i.e. a multiplicative operator. \(L\) and \(A\) can be either scalar or matrix operators. If a suitable Lax pair can be found, the analysis of the nonlinear equation can be reduced to that of two simpler equations. However, the process of finding \(L\) and \(A\) corresponding to a given equation can be quite difficult. Therefore, if clues are available, inverting the process by first postulating a given \(L\) and \(A\) and then determining which partial differential equation they correspond to can sometimes lead to good results. However, this may require the determination of many trial pairs and, ultimately, may not lead to the required solution [Abl-91],[Inf-00],[Joh-97],[Pol-07]. • Painlevé test - The Painlevé test is used as a means of predicting whether or not an equation is likely to be integrable.
The test involves checking self-similar reduced equations against the set of six Painlevé equations (or Painlevé transcendents) and, if there is a match, the system is integrable. A nonlinear evolution equation which is solvable by the IST is of Painlevé type, which means that it has no movable singularities other than poles [Abl-91],[Joh-97]. • Self-similar and similarity solutions - An example of a self-similar solution to a nonlinear PDE is a solution where knowledge of \(u(x,t=t_{0})\) is sufficient to obtain \(u(x,t)\) for all \(t>0\ ,\) by suitable rescaling [Bar-03]. In addition, by choosing a suitable similarity transformation(s) it is sometimes possible to find a similarity solution whereby a combination of variables is invariant under the similarity transformation [Fow-05]. Some techniques for obtaining traveling wave solutions The following are examples of techniques that transform PDEs into ODEs which are subsequently solved to obtain traveling wave solutions to the original equations. • Exp-function method - This is a straightforward method that assumes a traveling wave solution of the form \(u\left(x,t\right)=u\left(\eta\right)\) where \(\eta=kx+\omega t\ ,\) \(\omega=\) frequency and \(k=\) wavenumber. This transforms the PDE into an ODE. The method then attempts to find solutions of the form \(u(\eta)=\frac{\sum_{n=-c}^{d}a_{n}\exp\left(n\eta\right)}{\sum_{m=-p}^{q}b_{m}\exp\left(m\eta\right)}\ ,\) where \(c\ ,\) \(d\ ,\) \(p\) and \(q\) are positive integers to be determined, and \(a_{n}\) and \(b_{m}\) are unknown constants [He-06]. • Factorization - This method seeks solutions of PDEs with a polynomial non-linearity by rescaling to eliminate coefficients and assuming a travelling wave solution of the form \(u\left(x,t\right)=U\left(\xi\right)\ ,\) where \(\xi=k\left(x-vt\right)\ ,\) \(v=\) velocity and \(k=\) wavenumber. The resulting ODE is then factorized and each factor solved independently [Cor-05].
• Tanh method - This is a very useful method that is conceptually easy to use and has produced some very good results. Basically, it assumes a travelling wave solution of the form \(u\left(x,t\right)=U\left(\xi\right)\) where \(\xi=k\left(x-vt\right)\ ,\) \(v=\) velocity and \(k=\) wavenumber. This has the effect of transforming the PDE into an ODE which is subsequently solved using the transformation \(Y=\tanh\left(\xi\right)\) [Mal-92],[Mal-96a],[Mal-96b]. Some example applications of these and other methods can be found in [Gri-11]. Nonlinear wave equation solutions A non-exhaustive selection of well known 1D nonlinear wave equations and their closed-form solutions is given below. The closed form solutions are given by way of example only, as nonlinear wave equations often have many possible solutions. • Hopf equation (inviscid Burgers equation): \(u_{t}+uu_{x}=0\) [Pol-02] - Applications: gas dynamics and traffic flow. - Solution\[u=\varphi\left(\xi\right),\;\xi=x-\varphi\left(\xi\right)t.\]   \(u\left(x,t=0\right)=\varphi\left(x\right)\), arbitrary initial condition. • Burgers equation: \(u_{t}+uu_{x}-au_{xx}=0\) [Her-05] - Applications: acoustic and hydrodynamic waves. - Solution\[u(x,t)=2ak\left[1-\tanh k\left(x-Vt\right)\right] .\]   \(k=\) wavenumber,   \(V=2ak\) (velocity),   \(a=\) arbitrary constant. • Fisher: \(u_{t}-u_{xx}-u\left(1-u\right)=0\) [Her-05] - Applications: heat and mass transfer, population dynamics, ecology. - Solution\[u(x,t)=\frac{1}{4}\left\{ 1-\tanh k\left[x-Vt\right]\right\} ^{2}.\]   \(k={\displaystyle \frac{1}{2\sqrt{6}}}\) (wavenumber),   \(V={\displaystyle \frac{5}{\sqrt{6}}}\) (velocity). Note: wavenumber and velocity are fixed values.
• Sine Gordon equation: \(u_{tt}=au_{xx}+b\sin\left(\lambda u\right)\) [Pol-07] - Applications: various areas of physics - Solution\[u\left(x,t\right)=\left\{ \begin{array}{l} {\displaystyle \frac{4}{\lambda}}\arctan\left[\exp\left(\pm{\displaystyle \frac{b\lambda\left(kx+\mu t+\theta_{0}\right)}{\sqrt{b\lambda\left(\mu^{2}-ak^{2}\right)}}}\right)\right],\quad b\lambda\left(\mu^{2}-ak^{2}\right)>0,\\ \\{\displaystyle \frac{4}{\lambda}}\arctan\left[\exp\left(\pm{\displaystyle \frac{b\lambda\left(kx+\mu t+\theta_{0}\right)}{\sqrt{b\lambda\left(ak^{2}-\mu^{2}\right)}}}\right)\right]-{\displaystyle \frac{\pi}{\lambda}},\quad b\lambda \left(\mu^{2}-ak^{2}\right)<0.\end{array}\right. \]   \(k=\) wavenumber,   \(\mu, \theta_{0}= \) arbitrary constants. • Cubic Schrödinger equation: \(iu_{t}+u_{xx}+q\left|u\right|^{2}u=0\) [Whi-99] - Applications: various areas of physics, non-linear optics, superconductivity, plasma models. - Solution\[u(x,t)=\sqrt{\frac{2\alpha}{q}}\,\textrm{sech}\left(\sqrt{\alpha}\left(x-Vt\right)\right)\exp\left(i\left[\frac{V}{2}x+\left(\alpha-\frac{V^{2}}{4}\right)t\right]\right),\quad \alpha>0,\, q>0 .\]   \(\alpha,q,V=\) arbitrary constants. • Korteweg-de Vries (a variant)\[u_{t}+uu_{x}+bu_{xxx}=0\] [Her-05] - Applications: various areas of physics, nonlinear mechanics, water waves. - Solution\[u(x,t)=12bk^{2}\textrm{sech}^{2}k\left(x-Vt\right),\quad V=4bk^{2}.\]   \(k=\) wavenumber,   \(b=\) arbitrary constant. • Boussinesq equation: \(u_{tt}-u_{xx}+3uu_{xx}+\alpha u_{xxxx}=0\) [Abl-91] - Applications: surface water waves - Solution\[u(x,t)={\displaystyle \frac{1}{6}\left\{ 1+8k^{2}-V^{2}\right\} -2k^{2}\tanh^{2}k\left(x+Vt\right)}\] • Nonlinear wave equation of general form: \(u_{tt}=\left[f\left(u\right)u_{x}\right]_{x}\) This equation can be linearized in the general case.
Some exact solutions are given in [Pol-04, pp252-255] and, by way of an example, consider the following special case where \(f\left(u\right)=\alpha e^{\lambda u}\ :\) Wave equation with exponential non-linearity: \(u_{tt}=\left(\alpha e^{\lambda u}u_{x}\right)_{x},\quad\alpha>0.\) [Pol-04, p223] - Applications: traveling waves - Solution\[u(x,t)={\displaystyle \frac{1}{\lambda}}\ln\left(\alpha ax^{2}+bx+c\right)-{\displaystyle \frac{2}{\lambda}}\ln\left(\alpha\sqrt{a}\, t+d\right).\]   \(\alpha,\lambda,a,b,c,d=\) arbitrary constants \(\left(a>0\right)\). Additional wide-ranging examples of traveling wave equations, with solutions, from the fields of mathematics, physics and engineering are given in Polyanin & Manzhirov [Pol-07] and Polyanin & Zaitsev [Pol-04]. Examples from the biological and medical fields can be found in Murray [Mur-02] and Murray [Mur-03]. A useful on-line resource is the DispersiveWiki [Dis-08]. The Korteweg-de Vries equation The canonical form of the Korteweg-de Vries (KdV) equation is \[\tag{28} \frac{\partial u}{\partial t}-6u\frac{\partial u}{\partial x}+\frac{\partial^{3}u}{\partial x^{3}}=0,\] and is a non-dimensional version of the following equation originally derived by Korteweg and de Vries for a moving (Lagrangian) frame of reference [Jag-06], [Kor-95], \[\tag{29} \frac{\partial\eta}{\partial\tau}=\frac{3}{2}\sqrt{\frac{g}{h_{o}}}\frac{\partial}{\partial\chi}\left[\frac{1}{2}\eta^{2}+\frac{2}{3}\alpha\eta+\frac{1}{3}\sigma\frac{\partial^{2}\eta}{\partial\chi^{2}}\right].\] It is, historically, the most famous solitary wave equation and describes small amplitude, shallow water waves in a channel, where symbols have the following meaning:   \(g =\) gravitational acceleration (m/s\(^{2}\))   \(h_{o} =\) nominal water depth (m)   \(T =\) capillary surface tension of fluid (N/m)   \(\alpha =\) small arbitrary constant related to the uniform motion of the liquid (dimensionless)   \(\eta =\) wave height (m)   \(\rho =\) fluid density (kg/m\(^{3}\))   \(\tau =\) time
(s)   \(\chi =\) distance (m) After re-scaling and translating the dependent and independent variables to eliminate the physical constants using the transformations [Abl-91], \[\tag{30} u=-\frac{1}{2}\eta-\frac{1}{3}\alpha;\quad x=-\frac{\chi}{\sqrt{\sigma}};\quad t=\frac{1}{2}\sqrt{\frac{g}{h_{o}\sigma}}\tau\] where \(\sigma=h_{o}^{3}/3-Th_{o}/\left(\rho g\right)\ ,\) and the dimensionless ratio \(T/\left(\rho gh_{o}^{2}\right)\) is the Bond number (a measure of the relative strengths of surface tension and gravitational force), we arrive at the Korteweg-de Vries equation, i.e. equation (28). The basic assumptions for the derivation of KdV waves in liquid, having wavelength \(\lambda\ ,\) are [Abl-91]: • the waves are long waves in comparison with total depth, \({\displaystyle \frac{h_{o}}{\lambda}}\ll1\ ;\) • the amplitude of the waves is small, \(\varepsilon={\displaystyle \frac{\eta}{h_{o}}}\ll1\ ;\) • the first two effects approximately balance, i.e. \(\left({\displaystyle \frac{h_{o}}{\lambda}}\right)^{2}=\mathcal{O}\left(\varepsilon\right)\ ;\) • viscous effects can be neglected. The KdV equation was found to have solitary wave solutions [Lam-93], which confirmed John Scott-Russell's account of the solitary wave phenomenon [Sco-44] discovered during his experimental investigations into water flow in channels to determine the most efficient design for canal boats [Jag-06]. Subsequently, the KdV equation has been shown to model various other nonlinear wave phenomena found in the physical sciences. John Scott-Russell, a Scottish engineer and naval architect, famously described his first encounter with the solitary wave phenomenon in poetic terms [Sco-44]. An experimental apparatus for re-creating the phenomenon observed by Scott-Russell has been built at Heriot-Watt University. Scott-Russell also coined the term solitary wave and conducted some of the first experiments to investigate another wave phenomenon, the Doppler effect, publishing an independent explanation of the theory in 1848 [Sco-48].
It is interesting to note that a KdV solitary wave in water that experiences a change in depth will retain its general shape. However, on encountering shallower water its velocity and height will increase and its width decrease; whereas, on encountering deeper water its velocity and height will decrease and its width increase [Joh-97, pp 268-277]. A closed form single soliton solution to the KdV equation (28) can be found using direct integration as follows. Assume a travelling wave solution of the form \[\tag{31} u(x,t)=f(x-vt)=f(\xi).\] Then on substituting into the canonical equation, the PDE is transformed into the following ODE \[\tag{32} -v\frac{df(\xi)}{d\xi}-6f\frac{df(\xi)}{d\xi}+\frac{d^{3}f(\xi)}{d\xi^{3}}=0.\] Now integrate with respect to \(\xi\) and multiply by \({\displaystyle \frac{df(\xi)}{d\xi}}\) to obtain \[\tag{33} -vf(\xi)\frac{df(\xi)}{d\xi}-3f(\xi)^{2}\frac{df(\xi)}{d\xi}+\frac{df(\xi)}{d\xi}\left(\frac{d^{2}f(\xi)}{d\xi^{2}}\right)=A\frac{df(\xi)}{d\xi}.\] Now integrate with respect to \(\xi\) once more, to obtain \[\tag{34} -\frac{1}{2}vf(\xi)^{2}-f(\xi)^{3}+\frac{1}{2}\left(\frac{df(\xi)}{d\xi}\right)^{2}=Af(\xi)+B,\] where \(A\) and \(B\) are arbitrary constants of integration which we set to zero. We justify this by assuming that we are modeling a physical system with properties such that \(f,f^{\prime}\) and \(f^{\prime\prime}\rightarrow0\) as \(\xi\rightarrow\pm\infty\ .\) After rearranging and evaluating the resulting integral, we find \[\tag{35} f\left(\xi\right)=-\frac{v}{2}\textrm{sech}^{2}\left(\frac{\sqrt{v}}{2}\xi\right).\] The solution is therefore \[\tag{36} u(x,t) = f(x-vt),\] \[\tag{37} \quad= -2k^{2}\textrm{sech}^{2}\left(k\left[x-vt-x_{0}\right]\right),\] where \(k={\displaystyle \frac{\sqrt{v}}{2}}\) represents wavenumber and the constant \(x_{0}\) has been included to locate the wave at \(t=0\ .\) Thus, we observe that the wave travels to the right with a speed equal to twice the magnitude of its amplitude; note that \(u\) is negative here, which, by the transformation (30), corresponds to a positive elevation \(\eta\) of the water surface.
Hence, the taller a wave the faster it travels. The KdV equation also admits many other solutions including multiple soliton solutions, see figure (15), and cnoidal (periodic) solutions. Solutions of the KdV equation can be systematically obtained from solutions \(\psi_{i}\) of the free particle Schrödinger equation \[\tag{38} -\left(\frac{\partial^{2}}{\partial x^{2}}\psi_{i}\right)=E_{i}\psi_{i},\quad i=1,\cdots,n\] using the relationship \[\tag{39} u\left(x,t\right)=2\left(\frac{\partial^{2}}{\partial x^{2}}\ln\left(W_{n}\right)\right),\] where we use the Wronskian function \[\tag{40} W_{n}=W_{n}\left[\psi_{1},\psi_{2},\cdots,\psi_{n}\right].\] The Wronskian is the determinant of an \(n\times n\) matrix [Dra-89] composed from the functions \(\psi_{i}(\xi_{i})\ ,\) where \(\xi_{i}\) for our purposes is given by \[\tag{41} \xi_{i} = k_{i}\left(x-v_{i}t\right),\quad E_{i}<0,\] \[\tag{42} \xi_{i} = k_{i}\left(x+v_{i}t\right),\quad E_{i}>0.\] For example, a two-soliton solution is given by \[\tag{43} u(x,t)=\frac{\left(k_{1}^{2}-k_{2}^{2}\right)\left\{ 2k_{2}^{2}\textrm{csch}^{2}\, k_{2}\left(x-v_{2}t\right)+2k_{1}^{2}\textrm{sech}^{2}\, k_{1}\left(x-v_{1}t\right)\right\} }{\left[k_{1}\tanh k_{1}\left(x-v_{1}t\right)+k_{2}\coth k_{2}\left(x-v_{2}t\right)\right]^{2}}\] and a cnoidal wave solution is given by \[\tag{44} u(x,t)=\frac{1}{6k}\left(4k^{2}(2m-1)-vk\right)-2k^{2}\textrm{cn}^{2}\left(kx-vkt+x_{0};m\right),\] where 'cn' represents the Jacobi elliptic cosine function with modulus \( m, \left( 0<m<1 \right)\). Note: as \(m\rightarrow1\ ,\) the periodic solution tends to a single soliton solution. Interestingly, the KdV equation is invariant under a Galilean transformation, i.e. its properties remain unchanged, see section (Galilean invariance).
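For the simplest case \(n=1\ ,\) the Wronskian is just \(W_{1}=\psi_{1}\ ,\) and a standard choice consistent with (41) is \(\psi_{1}=\cosh\left(k\left(x-4k^{2}t\right)\right)\) (an assumption made here for illustration), for which equation (39) reproduces the single soliton (37). A numerical sketch of this check:

```python
import math

def u_from_wronskian(x, t, k=1.0, h=1e-4):
    """u = 2 d^2/dx^2 ln(W_1) with W_1 = psi_1 = cosh(k(x - 4k^2 t)),
    the second derivative approximated by central differences."""
    def lnW(xx):
        return math.log(math.cosh(k * (xx - 4.0 * k * k * t)))
    return 2.0 * (lnW(x + h) - 2.0 * lnW(x) + lnW(x - h)) / (h * h)

def soliton(x, t, k=1.0):
    """Closed-form soliton (37): u = 2k^2 sech^2(k(x - 4k^2 t))."""
    return 2.0 * k * k / math.cosh(k * (x - 4.0 * k * k * t)) ** 2
```

The two functions agree to within the accuracy of the finite-difference approximation; multi-soliton solutions follow analogously from larger Wronskians.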
Numerical solution methods Linear and nonlinear evolutionary wave problems can very often be solved by application of general numerical techniques such as: finite difference, finite volume, finite element, spectral, least squares, weighted residual (e.g. collocation and Galerkin) methods, etc. These methods, which can all handle various boundary conditions, stiff problems and may involve explicit or implicit calculations, are well documented in the literature and will not be discussed further here. For general texts refer to [Bur-93],[Sch-94],[Sch-09], and for more detailed discussion refer to [Lev-02],[Mor-94],[Zie-77]. Some wave problems do, however, present significant problems when attempting to find a numerical solution. In particular we highlight problems that include shocks, sharp fronts or large gradients in their solutions. Because these problems often involve inviscid conditions (zero or vanishingly small viscosity), it is often only practical to obtain weak solutions. Some PDE problems do not have a classical, mathematically rigorous solution, for example where discontinuities or jump conditions are present in the solution and/or characteristics intersect. Such problems are likely to occur when there is a hyperbolic (strongly convective) component present. In these situations weak solutions provide useful information. Detailed discussion of this approach is beyond the scope of this article and readers are referred to [Wes-01, chapters 9 and 10] for further discussion. General methods are often not adequate for accurate resolution of steep gradient phenomena; they usually introduce non-physical effects such as smearing of the solution or spurious oscillations. Since publication of Godunov's order barrier theorem, which proved that linear methods cannot provide monotone, non-oscillatory solutions of accuracy higher than first order [God-54],[God-59], these difficulties have attracted a lot of attention and a number of techniques have been developed that largely overcome these problems.
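Godunov's barrier can be seen in the simplest monotone linear scheme, first-order upwind for the advection equation \(u_{t}+au_{x}=0\) (a minimal illustrative sketch, not taken from the cited references): it produces no spurious oscillations, but a step profile is progressively smeared:

```python
def upwind_advect(u0, co, steps):
    """First-order upwind update u_j <- u_j - Co*(u_j - u_{j-1}),
    Co = a*dt/dx (Courant number), on a periodic grid (u[-1] wraps)."""
    u = list(u0)
    for _ in range(steps):
        u = [u[j] - co * (u[j] - u[j - 1]) for j in range(len(u))]
    return u

def total_variation(u):
    """Total variation sum_j |u_j - u_{j-1}| (periodic)."""
    return sum(abs(u[j] - u[j - 1]) for j in range(len(u)))

n = 100
step_ic = [1.0 if 10 <= j < 30 else 0.0 for j in range(n)]
advected = upwind_advect(step_ic, co=0.5, steps=40)
```

For \(0\leq\textrm{Co}\leq1\) each update is a convex combination of neighbouring values, so the scheme is total variation diminishing and the solution stays within the initial bounds [0, 1]; the price is that many cells acquire intermediate values, i.e. the step is smeared.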
To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a total variation diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL (Monotone Upstream-Centred Schemes for Conservation Laws), a flux/slope limiter method [van-79],[Hir-90],[Tan-97],[Lan-98],[Tor-99], and the WENO (Weighted Essentially Non-Oscillatory) method [Shu-98],[Shu-09]. MUSCL methods are usually referred to as high resolution schemes and are generally second-order accurate in smooth regions (although they can be formulated for higher orders) and provide good resolution, monotonic solutions around discontinuities. They are straightforward to implement and are computationally efficient. For problems comprising both shocks and complex smooth solution structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities. Most applications tend to use a fifth order accurate WENO scheme, whilst higher order schemes can be used where the problem demands improved accuracy in smooth regions. Initial conditions and boundary conditions Consider the classic 1D linear wave equation \[\tag{45} \dfrac{1}{c^{2}}\dfrac{\partial^{2}u}{\partial t^{2}}=\dfrac{\partial^{2}u}{\partial x^{2}}.\] In order to obtain a solution we must first specify some auxiliary conditions to complete the statement of the PDE problem. The number of required auxiliary conditions is determined by the highest order derivative in each independent variable. Since equation (45) is second order in \(t\) and second order in \(x\ ,\) it requires two auxiliary conditions in \(t\) and two auxiliary conditions in \(x\ .\) To have a complete well posed problem, some additional conditions may have to be included - refer to section (Wellposedness). The variable \(t\) is termed an initial value variable and therefore requires two initial conditions (ICs).
It is an initial value variable since it starts at an initial value, \(t_{0}\ ,\) and moves forward over a finite interval \(t_{0}\leq t\leq t_{f}\) or a semi-infinite interval \(t_{0}\leq t\leq\infty\) without any additional conditions being imposed. Typically in a PDE application, the initial value variable is time, as in the case of equation (45). The variable \(x\) is termed a boundary value variable and therefore requires two boundary conditions (BCs). It is a boundary value variable since it varies over a finite interval \(x_{0}\leq x\leq x_{f}\ ,\) a semi-infinite interval \(x_{0}\leq x\leq\infty\) or a fully infinite interval \(-\infty\leq x\leq\infty\ ,\) and at two different values of \(x\), conditions are imposed on \(u\) in equation (45). Typically, the two values of \(x\) correspond to boundaries of a physical system, and hence the name boundary conditions. BCs can be of three types:
• Dirichlet or first type - the value of the dependent variable is specified at the boundary, \(u(x=x_{0},t)=u^{b}\left(t\right)\ .\)
• Neumann or second type - the spatial gradient of the dependent variable is specified at the boundary, \(\dfrac{\partial u(x=x_{f},t)}{\partial x}=u_{x}^{b}\left(t\right)\ ;\) in multiple dimensions it is the gradient normal to the boundary that is specified.
• Robin or third type - both the dependent variable and its spatial derivative appear in the BC, i.e. a combination of Dirichlet and Neumann.
An important consideration is the possibility of discontinuities at the boundaries, produced for example by differences in initial and boundary conditions at the boundaries, which can cause computational difficulties, such as shocks - see section (Shock waves), particularly for hyperbolic PDEs such as equation (45) above. Numerical dissipation and dispersion Some dissipation and dispersion occur naturally in most physical systems described by PDEs. Errors in magnitude are termed dissipation and errors in phase are called dispersion. These terms are defined below.
The term amplification factor is used to represent the change in the magnitude of a solution over time. It can be calculated in either the time domain, by considering solution harmonics, or in the complex frequency domain, by taking Fourier transforms. Dissipation and dispersion can also be introduced when PDEs are discretized in the process of seeking a numerical solution; this introduces numerical errors. The accuracy of a discretization scheme can be determined by comparing the numeric amplification factor \(G_{numeric},\) with the analytical or exact amplification factor \(G_{exact}\ ,\) over one time step. For further reading refer to [Hir-88, chap. 8], [Lig-78, chap. 3], [Tan-97, chap. 4], [Wes-01, chap 8 and 9]. Dispersion relation Physical waves that propagate in a particular medium will, in general, exhibit a specific group velocity as well as a specific phase velocity - see section (Group and phase velocity). This is because within a particular medium there is a fixed relationship between the wavenumber \(k\ ,\) and the frequency \(\omega\ ,\) of waves. Thus, frequency and wavenumber are not independent quantities and are related by a functional relationship, known as the dispersion relation, \(\omega(k)\ .\) We will demonstrate the process of obtaining the dispersion relation by example, using the advection equation \[\tag{46} u_{t}+au_{x}=0.\] Generally, each wavenumber \(k\) corresponds to \(s\) frequencies, where \(s\) is the order of the PDE with respect to \(t\ .\) Now any linear PDE with constant coefficients admits a solution of the form \[\tag{47} u\left(x,t\right)=u_{0}e^{i\left(kx-\omega t\right)}.\] Because we are considering a linear system, the principle of superposition applies and equation (47) can be considered to be a frequency component or harmonic of the Fourier series representation of a specific solution to the advection equation.
On inserting this solution into a PDE we obtain the so-called dispersion relation between \(\omega\) and \(k\) i.e., \[\tag{48} \omega=\omega\left(k\right),\] and each PDE will have its own distinct form. For example, we obtain the specific dispersion relation for the advection equation by substituting equation (47) into equation (46) to get \[ -i\omega u_{0}e^{i\left(kx-\omega t\right)} = -iaku_{0}e^{i\left(kx-\omega t\right)}\] \[\Downarrow \] \[\tag{49} \omega = ak.\] This confirms that \(\omega\) and \(k\) cannot be determined independently for the advection equation, and therefore equation (47) becomes \[\tag{50} u\left(x,t\right)=u_{0}e^{ik\left(x-at\right)}.\] Note: If the imaginary part of \(\omega\left(k\right)\) is zero, then the system is non-dissipative. The physical meaning of equation (50) is that the initial value \(u\left(x,0\right)=u_{0}e^{ikx}\ ,\) is propagated from left to right, unchanged, at velocity \(a\ .\) Thus, there is no dissipation or attenuation and no dispersion. A similar approach can be used to establish the dispersion relation for systems described by other forms of PDEs. Amplification factor As mentioned above, the accuracy of a numerical scheme can be determined by comparing the numeric amplification factor \(G_{numeric},\) with the exact amplification factor \(G_{exact}\ ,\) over one time step. The exact amplification factor can be determined by considering the change that takes place in the exact solution over a single time-step.
For example, taking the advection equation (46) and assuming a solution of the form \(u\left(x,t\right)=u_{0}e^{ik\left(x-at\right)}\ ,\) we have \[ G_{exact} = \frac{u\left(x,t+\Delta t\right)}{u\left(x,t\right)}=\frac{u_{0}e^{ik\left(x-a\left(t+\Delta t\right)\right)}}{u_{0}e^{ik\left(x-at\right)}}.\] \[\tag{51} \therefore G_{exact} = e^{-iak\Delta t}.\] We can also represent equation (51) in the form \[\tag{52} G_{exact}=\left|G_{exact}\right|e^{i\Phi_{exact}},\] where \[\tag{53} \Phi_{exact}=\angle G=\tan^{-1}\left(\frac{\textrm{Im}\left\{ G\right\} }{\textrm{Re}\left\{ G\right\} }\right).\] Thus, for this case \[\tag{54} \left|G_{exact}\right| = 1\] and \[\tag{55} \Phi_{exact} = \tan^{-1}\left(\tan\left(-ak\Delta t\right)\right)=-ak\Delta t.\] The amplification factor provides an indication of how the solution will evolve because values of \(\left|\Phi\right|\rightarrow0\) are associated with low frequencies and values of \(\left|\Phi\right|\rightarrow\pi\) are associated with high frequencies. Also, because phase shift is associated with the imaginary part of \( G_{exact}\ ,\) if \(\Im\left\{ G_{exact}\right\} =0\ ,\) the system does not exhibit any phase shift and is purely dissipative. Conversely, if \(\left|G_{exact}\right|=1\ ,\) the system does not exhibit any amplitude attenuation and is purely dispersive. The numerical amplification factor \(G_{numeric}\) is calculated in the same way, except that the appropriate numerical approximation is used for \(u(x,t)\ .\) For stability of the numerical solution, \(\left|G_{numeric}\right|\leq1\) for all frequencies. Numerical dissipation Figure 1: Illustration of pure numeric dissipation effect on a single sinusoid, as it propagates along the spatial domain. Both exact and simulated dissipative waves begin with the same amplitude; however, the amplitude of the dissipative wave decreases over time, but stays in phase.
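The comparison between \(G_{exact}\) and \(G_{numeric}\) can be made concrete with the first-order upwind scheme for the advection equation, whose von Neumann amplification factor is \(G=1-\textrm{Co}\left(1-e^{-i\phi}\right)\) with \(\textrm{Co}=a\Delta t/\Delta x\) and \(\phi=k\Delta x\) (an illustrative scheme choice, not prescribed by the text):

```python
import cmath
import math

def g_exact(co, phi):
    """Exact amplification factor (51): e^{-i a k dt} = e^{-i Co phi}."""
    return cmath.exp(-1j * co * phi)

def g_upwind(co, phi):
    """Von Neumann amplification factor of the first-order upwind scheme."""
    return 1.0 - co * (1.0 - cmath.exp(-1j * phi))
```

For \(0\leq\textrm{Co}\leq1\) the upwind factor satisfies \(\left|G\right|\leq1\) for every \(\phi\) (stable, but dissipative since \(\left|G\right|<1\) at most wavenumbers), while for \(\textrm{Co}>1\) there are wavenumbers with \(\left|G\right|>1\) and the scheme is unstable.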
Figure 2: Effect of numerical dissipation on a step function applied to the advection equation \(u_{t}+u_{x}=0\ .\) In a numerical scheme, a situation where waves of different frequencies are damped by different amounts is called numerical dissipation, see figure (1). Generally, this results in the higher frequency components being damped more than lower frequency components. The effect of dissipation therefore is that sharp gradients, discontinuities or shocks in the solution tend to be smeared out, thus losing resolution, see figure (2). Fortunately, in recent years, various high resolution schemes have been developed to counteract this effect and enable shocks to be captured with a high degree of accuracy, albeit at the expense of complexity. Examples of particularly effective schemes are based upon flux/slope limiters [Wes-01] and WENO methods [Shu-98]. Dissipation can be introduced by numerical discretization of a partial differential equation that models a non-dissipative process. Generally, dissipation improves stability and, in some numerical schemes, it is introduced deliberately to aid stability of the resulting solution. Dissipation, whether real or numerically induced, tends to cause waves to lose energy. The dissipation error as a result of discretization can be determined by comparing the magnitude of the numeric amplification factor \(\left|G_{numeric}\right|,\) with the magnitude of the exact amplification factor \(\left|G_{exact}\right|\ ,\) over one time step. The relative numerical diffusion error or relative numerical dissipation error compares real physical dissipation with the anomalous dissipation that results from numerical discretization.
It can be defined as \[\tag{56} \varepsilon_{D}=\frac{\left|G_{numeric}\right|}{\left|G_{exact}\right|},\] and the total dissipation error resulting from \(n\) steps will be \[\tag{57} \varepsilon_{Dtotal}=\left(\left|G_{numeric}\right|^{n}-\left|G_{exact}\right|^{n}\right)u_{0}.\] If \(\varepsilon_{D}>1\) for a given value of \(\theta\) or Co, this discretization scheme will be unstable and a modification to the scheme will be necessary. As mentioned above, if the imaginary part of \(\omega\left(k\right)\) is zero for a particular discretization, then the scheme is non-dissipative. Numerical dispersion Figure 3: Illustration of pure numeric dispersion effect on a single sinusoid, as it propagates along the spatial domain. Both exact and simulated dispersive waves start in phase; however, the phase of the dispersive wave lags the exact wave over time, but its amplitude is unaffected. Figure 4: Effect of numerical dispersion on a step function applied to the advection equation \(u_{t}+u_{x}=0\ .\) In a numerical scheme, a situation where waves of different frequencies move at different speeds without a change in amplitude is called numerical dispersion - see figure (3). Alternatively, the Fourier components of a wave can be considered to disperse relative to each other. It therefore follows that the effect of a dispersive scheme on a wave composed of different harmonics will be to deform the wave as it propagates. However, the energy contained within the wave is not lost and travels with the group velocity. Generally, this results in higher frequency components traveling at slower speeds than the lower frequency components. The effect of dispersion therefore is that often spurious oscillations or wiggles occur in solutions with sharp gradient, discontinuity or shock effects, usually with high frequency oscillations trailing the particular effect, see figure (4).
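The trailing oscillations described above are easy to reproduce with the classical Lax-Wendroff scheme (a second-order linear scheme, used here purely as an illustration) applied to a step profile:

```python
def lax_wendroff(u0, co, steps):
    """Lax-Wendroff update for u_t + a u_x = 0 on a periodic grid;
    co = a*dt/dx is the Courant number."""
    u = list(u0)
    n = len(u)
    for _ in range(steps):
        u = [u[j]
             - 0.5 * co * (u[(j + 1) % n] - u[j - 1])
             + 0.5 * co * co * (u[(j + 1) % n] - 2.0 * u[j] + u[j - 1])
             for j in range(n)]
    return u

n = 100
step_ic = [1.0 if 10 <= j < 30 else 0.0 for j in range(n)]
wiggly = lax_wendroff(step_ic, co=0.5, steps=40)
```

Being second order, the scheme barely smears the step, but by Godunov's theorem it cannot remain non-oscillatory: the computed profile overshoots above 1 and undershoots below 0, with the wiggles trailing the discontinuities as in figure (4).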
The degree of dispersion can be determined by comparing the phase of the numeric amplification factor \(G_{numeric},\) with the phase of the exact amplification factor \(G_{exact}\ ,\) over one time step. Dispersion represents phase shift and results from the imaginary part of the amplification factor. The relative numerical dispersion error compares real physical dispersion with the anomalous dispersion that results from numerical discretization. It can be defined as \[\tag{58} \varepsilon_{P}=\frac{\Phi_{numeric}}{\Phi_{exact}},\] where \(\Phi=\angle G=\tan^{-1}\left(\frac{\textrm{Im}\left\{ G\right\} }{\textrm{Re}\left\{ G\right\} }\right)\ .\) The total phase error resulting from \(n\) steps will be \[\tag{59} \varepsilon_{Ptotal}=n\left(\Phi_{numeric}-\Phi_{exact}\right).\] If \(\varepsilon_{P}>1\ ,\) this is termed a leading phase error: the Fourier component of the solution has a wave speed greater than that of the exact solution. Similarly, if \(\varepsilon_{P}<1\ ,\) this is termed a lagging phase error: the Fourier component of the solution has a wave speed less than that of the exact solution. Again, high resolution schemes can all but eliminate this effect, but at the expense of complexity. Although many physical processes are modeled by PDEs that are non-dispersive, when numerical discretization is applied to analyze them, some dispersion is usually introduced. Group and phase velocity The term group velocity refers to the speed at which the envelope of a wave packet propagates: for example, a low frequency signal modulating (multiplying) a higher frequency carrier wave. The result is a low frequency envelope, consisting of a fundamental plus harmonics, that propagates with group velocity \(c_{g}\) along a continuum oscillating at the higher carrier frequency.
Wave energy and information signals propagate at this velocity, which is defined as being equal to the derivative of the real part of the frequency \(\omega\ ,\) with respect to wavenumber \(k\) (scalar or vector proportional to the number of wavelengths per unit distance), i.e. \[\tag{60} c_{g}=\frac{d\,\textrm{Re}\left\{ \omega\left(k\right)\right\} }{dk}.\] If there are a number of spatial dimensions then the group velocity is equal to the gradient of frequency with respect to the wavenumber vector, i.e. \(c_{g}=\nabla\textrm{Re}\left\{ \omega\left(k\right)\right\} \ .\) The complementary term to group velocity is phase velocity, \(c_{p}\ ,\) and this refers to the speed of propagation of an individual frequency component of the wave. It is defined as being equal to the real part of the ratio of frequency to wavenumber, i.e. \[\tag{61} c_{p}=\textrm{Re}\left\{ \frac{\omega}{k}\right\} .\] It can also be viewed as the speed at which a particular phase of a wave propagates; for example, the speed of propagation of a wave crest. In one wave period \(T\) the crest advances one wavelength \(\lambda\ ;\) therefore, the phase velocity is also given by \(c_{p}=\lambda/T\ .\) We see that this second form is equal to equation (61) due to the following relationships: wavenumber \(k=\frac{2\pi}{\lambda}\) and frequency \(\omega=2\pi f\) where \(f=\frac{1}{T}\ .\) For a non-dispersive wave, \(\omega\) is directly proportional to \(k\) and therefore \(c_{g}=c_{p}\ .\) To calculate group and phase velocity for linear waves (or small amplitude waves) we assume a solution of the form \(u(x,t)=Ae^{i(kx-\omega t)}\ ,\) where \(A\) is a constant and \(x\) can be a scalar or vector, and substitute into the wave equation (or linearized wave equation) under consideration. For example, for \(u_{t}+u_{x}+u_{xxx}=0\) we obtain the dispersion relation \(\omega=k-k^{3}\ ,\) from which we calculate the group and phase velocities to be \(c_{g}=1-3k^{2}\) and \(c_{p}=1-k^{2}\) respectively.
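This worked example can be verified directly: for \(u=e^{i\left(kx-\omega t\right)}\) the derivatives are known in closed form, so the residual of \(u_{t}+u_{x}+u_{xxx}\) vanishes exactly when \(\omega=k-k^{3}\ .\) A small sketch of the check:

```python
import cmath

def omega(k):
    """Dispersion relation for u_t + u_x + u_xxx = 0."""
    return k - k ** 3

def pde_residual(k, x=0.4, t=0.2):
    """Residual of u_t + u_x + u_xxx for u = exp(i(kx - omega(k) t)),
    using the exact derivatives of the exponential."""
    u = cmath.exp(1j * (k * x - omega(k) * t))
    u_t = -1j * omega(k) * u
    u_x = 1j * k * u
    u_xxx = (1j * k) ** 3 * u
    return u_t + u_x + u_xxx

def group_velocity(k, h=1e-6):
    """c_g = d omega / dk, approximated by central differences."""
    return (omega(k + h) - omega(k - h)) / (2.0 * h)
```

Here `group_velocity(k)` returns \(1-3k^{2}\) (to rounding), and `omega(k)/k` gives the phase velocity \(1-k^{2}\ .\)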
Thus, we observe that \(c_{g}\neq c_{p}\) and therefore this example is dispersive. Wellposedness For most practical situations our interest is primarily in solving partial differential equations numerically; and, before we embark on implementing a numerical procedure, we would usually like to have some idea as to the expected behaviour of the system being modeled, ideally from an analytical solution. However, an analytical solution is not usually available; otherwise we would not need a numerical solution. Nevertheless, we can usually carry out some basic analysis that may give some idea as to steady state, long term trend, bounds on key variables, and reduced order solution for ideal or special conditions, etc. One key property that we would like to establish is whether the fundamental system is stable, i.e. well posed. This is particularly important because if our numerical solution produces seemingly unstable results we need to know if this is fundamental to the problem or whether it has been introduced by the solution method we have selected to implement. For most situations involving simulation this is not a concern as we would be dealing with a well analyzed and documented system. But there are situations where real physical systems can be unstable and we need to know these in advance. For a real system to become unstable there needs to be some form of energy source: kinetic, potential, reaction, etc., so this can provide a clue as to whether or not the system is likely to become unstable. If it is, then we may need to modify our computational approach so that we capture the essential behaviour correctly - although a complete solution may not be possible. In general, solutions to PDE problems are sought to solve a particular problem or to provide insight into a class of problems. To this end existence, uniqueness and stability of the solution are of vital importance [Zwi-97, chapter 10].
Whilst at this introductory level we must restrict our discussion, it is desirable to emphasize that for a solution of an evolutionary PDE (together with appropriate ICs and BCs) to be useful we require that:
• A unique solution must exist. The question as to whether or not a solution actually exists can be rather complex, and an answer can be sought for analytic PDEs by application of the Cauchy-Kowalewsky theorem [Cou-62, pp39-56].
• The solution must be numerically stable if we are to be able to predict its evolution over time. If the physical system is actually unstable, then prediction may not be possible.
• The solution must depend continuously on data such as boundary/initial conditions, forcing functions, domain geometry, etc.
If these conditions are fulfilled, then the problem is said to be well posed, in the sense of Hadamard [Had-23]. Numerical schemes for particular PDE systems can be analyzed mathematically to determine if the solutions remain bounded. By invoking Parseval's theorem, this analysis can be performed in the time domain or in the Fourier domain. A good introduction to this subject is given by LeVeque [Lev-07], and more advanced technical discussions can be found in the monographs by Tao [Tao-05] and Kreiss & Lorenz [Kre-04]. Characteristics Characteristics are surfaces in the solution space of an evolutionary PDE problem that represent wave-fronts upon which information propagates. For example, consider the 1D advection equation problem \[\tag{62} u_{t}+cu_{x}=0,\quad u\left(x,t=0\right)=u_{0}\left(x\right),\; t\geq0\] where the characteristics are given by \(dx/dt=c\ .\) For this problem the characteristics are straight lines in the \(xt\)-plane with slope \(1/c\) and, along which, the dependent variable \(u\) is constant.
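A quick numerical sketch of this property: for \(u_{t}+cu_{x}=0\) the exact solution is the initial profile advected at speed \(c\ ,\) and \(u\) is constant along each characteristic \(x=x_{0}+ct\) (the Gaussian initial profile below is a hypothetical choice):

```python
import math

def u0(x):
    """Hypothetical initial profile; any smooth function will do."""
    return math.exp(-x * x)

def u(x, t, c=2.0):
    """Exact solution of u_t + c u_x = 0: u(x, t) = u0(x - c t)."""
    return u0(x - c * t)

def along_characteristic(x0, t, c=2.0):
    """Value of u on the characteristic x(t) = x0 + c t."""
    return u(x0 + c * t, t, c)
```

Evaluating `along_characteristic(x0, t)` for any \(t\) returns `u0(x0)` exactly, confirming that information simply rides along the characteristic lines.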
The consequence of this is that the initial condition propagates from left to right at constant speed \(c\ .\) But, for other situations such as the inviscid Burgers equation problem, \[\tag{63} u_{t}+uu_{x}=0,\quad u\left(x,t=0\right)=u_{0}\left(x\right),\; t\geq0,\] the propagation speed is not constant and the shape of the characteristics depends upon the initial conditions. If the initial condition is monotonically increasing with \(x\ ,\) the characteristics will not overlap and the problem is well behaved. However, if the initial conditions are not monotonically increasing with \(x\ ,\) at some time \(t>0\) the characteristics will overlap, the solution would otherwise become multi-valued, and a shock will develop. In this situation we can only find a weak solution (one where the problem is re-stated in integral form) by appealing to entropy considerations and the Rankine-Hugoniot jump condition. PDEs other than equations (62) and (63), such as those involving conservation laws, introduce additional complexity such as rarefaction or expansion waves. We will not discuss these aspects further here, and for additional discussion readers are referred to [Hir-90, chap. 16]. The method of characteristics The method of characteristics (MOC) is a numerical method for solving evolutionary PDE problems by transforming them into a set of ODEs. The ODEs are solved along particular characteristics, using standard methods and the initial and boundary conditions of the problem. For more information refer to [Kno-00],[Ost-94],[Pol-07]. MOC is a quite general technique for solving PDE problems and has been particularly popular in the area of fluid dynamics for solving transient flow in pipelines. For an introduction refer to [Stre-97, chap. 12]. General topics We conclude with a brief overview of some general aspects relating to linear and nonlinear waves. Galilean invariance Certain wave equations are Galilean invariant, i.e.
the equation properties remain unchanged under a Galilean transformation. For example:
• A Galilean transformation for the linear wave equation (4) is \[\tag{64} \tilde{u}=Au\left(\pm\lambda x+C_{1},\pm\lambda t+C_{2}\right),\] where \(A\ ,\) \(C_{1}\ ,\) \(C_{2}\) and \(\lambda\) are arbitrary constants.
• A Galilean transformation for the nonlinear KdV equation (28) is \[\tag{65} \tilde{u}=u\left(x-6\lambda t,t\right)-\lambda ,\] where \(\lambda\) is an arbitrary constant.
Other invariant transformations are possible for many linear and nonlinear wave equations, for example the Lorentz transformation applied to Maxwell's equations, but these will not be discussed here. Plane waves Figure 5: Plane sinusoidal wave where its source is assumed to be at \( x = -\infty\ ,\) and its fronts are advancing from left to right. A plane wave is considered to exist far from its source and any physical boundaries so, effectively, it is located within an infinite domain. Its wavefronts (surfaces of constant phase) are infinite parallel planes, normal to the direction of propagation, and it satisfies the 1D wave equation \[\tag{66} \frac{1}{c^2}\frac{\partial^{2}u}{\partial t^{2}}=\frac{\partial^{2}u}{\partial x^{2}}\] with a solution of the form \[\tag{67} u=u_{0}\cos\left(\omega t-kx+\phi\right)\] where \(c=\frac{\omega}{k}\) represents propagation velocity and \(\phi\) the phase of the wave. See figure (5). Refraction and diffraction Wave crests do not necessarily travel in a straight line as they proceed - this may be caused by refraction or diffraction. Wave refraction is caused by segments of the wave moving at different speeds resulting from local changes in characteristic speed, usually due to a change in medium properties. Physically, the effect is that the overall direction of the wave changes, its wavelength either increases or decreases but its frequency remains unchanged. For example, in optics refraction is governed by Snell's law and in shallow water waves by the depth of water.
Wave diffraction is the effect whereby the direction of a wave changes as it interacts with objects in its path. The effect is greatest when the size of the object causing the wave to diffract is similar to the wavelength. Reflection results from a change of wave direction following a collision with a reflective surface or domain boundary. A hard boundary is one that is fixed, which causes the wave to be reflected with opposite polarity, e.g. \(u(x-vt)\;\rightarrow\;-u(x+vt)\ .\) A soft boundary is one that changes on contact with the wave, which causes the wave to be reflected with the same polarity, e.g. \(u(x-vt)\;\rightarrow\; u(x+vt)\ .\) If the propagating medium is not homogeneous, i.e. it is not spatially uniform, then a partial reflection can result, with an attenuated original wave continuing to propagate. The polarity of the partial reflection will depend upon the characteristics of the medium. Consider a travelling wave situation where the domain has a soft boundary with incident wave \(\phi_{I}=I\exp\left(i\left(\omega t-k_{1}x\right)\right)\ ,\) reflected wave \(\phi_{R}=R\exp\left(i\left(\omega t+k_{1}x\right)\right)\) and transmitted wave \(\phi_{T}=T\exp\left(i\left(\omega t-k_{2}x\right)\right)\ .\) In addition, for simplicity, consider the medium on both sides of the boundary to be homogeneous and non-dispersive, which implies that all three waves will have the same frequency.
From continuity of the wave field at the boundary (taken to be \(x=0\)) we have \(\phi_{I}+\phi_{R}=\phi_{T}\) for all \(t\ ,\) which implies \(I+R=T.\) Also, on differentiating with respect to \(x\) and matching the spatial derivatives at the boundary, we obtain \(-ik_{1}I+ik_{1}R=-ik_{2}T\ .\) Thus, on rearranging we have \[\tag{68} \frac{T}{I} = \frac{2k_{1}}{k_{1}+k_{2}},\] \[\tag{69} \frac{R}{I} = \frac{k_{1}-k_{2}}{k_{1}+k_{2}}.\] Equations (68) and (69) indicate that:
• the transmitted wave is always in-phase with the incident wave, i.e. synchronized (in-step) with no phase-shift
• the reflected wave is only in-phase with the incident wave if \(k_{1}>k_{2}.\)
Also, because \(c_{g}=c_{p}={\displaystyle \frac{\omega}{k}},\;\) if \(\;k_{1}>k_{2}\), then this implies that \(c_{g1}<c_{g2},\;\) see section (Group and phase velocity). We mention two other quantities \[\tag{70} \tau = \left|\frac{T}{I}\right| ,\] \[\tag{71} \rho = \left|\frac{R}{I}\right| ,\] the so-called coefficients of transmission and reflection respectively. Resonance Resonance describes a situation where a system oscillates at one of its natural frequencies, usually when the amplitude increases as a result of energy being supplied by a perturbing force. A striking example of this phenomenon is the failure of the mile-long Tacoma Narrows Suspension Bridge. On 7 November 1940 the structure collapsed due to a nonlinear wave that grew in magnitude as a result of excitation by a 42 mph wind. A video of this disaster is available on line at: archive.org . Another less dramatic example of resonance that most people have experienced is the effect of sound feedback from loudspeaker to microphone. A more complex form of resonance is autoresonance, a nonlinear phase-locking phenomenon which occurs when a resonantly driven nonlinear system becomes phase-locked (synchronized or in-step) with a driving perturbation or wave.
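Returning to the boundary-matching result above, equations (68)-(71) can be checked with a few lines of arithmetic (a trivial but useful sanity check):

```python
def transmission_reflection(k1, k2):
    """Amplitude ratios T/I and R/I from equations (68) and (69)."""
    t_over_i = 2.0 * k1 / (k1 + k2)
    r_over_i = (k1 - k2) / (k1 + k2)
    return t_over_i, r_over_i
```

Note that \(1+R/I=T/I\) always holds (continuity at the boundary), the reflected amplitude is positive (in phase) only when \(k_{1}>k_{2}\ ,\) and for a matched boundary \(k_{1}=k_{2}\) there is no reflection at all.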
Doppler effect The Doppler effect (or Doppler shift) relates to the change in frequency and wavelength of waves emitted from a source as perceived by an observer, where the source and observer are moving relative to each other. At each moment of time the source will radiate a wave and an observer will experience the following effects:
• Wave source moving towards the observer - To the observer the moving source has the effect of compressing the emitted waves and the frequency is perceived to be higher than the source frequency. For example, a sound wave will have a higher pitch and the spectrum of a light wave will exhibit a blueshift.
• Wave source moving away from the observer - To the observer this time, the recessional velocity has the effect of expanding the emitted waves such that a sound wave will have a lower pitch and the spectrum of a light wave will exhibit a redshift.
Perhaps the most famous discovery involving the Doppler effect is that made in 1929 by Edwin Hubble in connection with the Earth's distance from receding galaxies: the redshift of light coming from distant galaxies is proportional to their distance. This is known as Hubble's law. Transverse and longitudinal waves Transverse waves oscillate in the plane perpendicular to the direction of wave propagation. They include: seismic S (secondary) waves, and electromagnetic waves, E (electric field) and H (magnetic field), both of which oscillate perpendicularly to each other as well as to the direction of propagation of energy. Light, an electromagnetic wave, can be polarized (oriented in a specific direction) by use of a polarizing filter. Longitudinal waves oscillate along the direction of wave propagation. They include sound waves (pressure, particle displacement, or particle velocity propagated in an elastic medium) and seismic P (earthquake or explosion) waves. Surface water waves, however, are an example of waves that involve a combination of both longitudinal and transverse motion.
Traveling waves

Traveling-wave solutions [Pol-08], [Gri-11], by definition, are of the form \[\tag{72} u(x,t)=U(z),\quad z=kx-\lambda t\ ;\] where \(\lambda/k\) plays the role of the wave propagation velocity (the value \(\lambda=0 \,\) corresponds to a stationary solution, and the value \(k=0 \,\) corresponds to a space-homogeneous solution). Traveling-wave solutions are characterized by the fact that the profiles of these solutions at different time instants are obtained from one another by appropriate shifts (translations) along the \(\, x\)-axis. Consequently, a Cartesian coordinate system moving with constant speed can be introduced in which the profile of the desired quantity is stationary. For \(\lambda>0 \,\) and \(k>0\ ,\) the wave described by equation (72) travels along the \(x\)-axis to the right (in the direction of increasing \(x \,\)). The term traveling-wave solution is also used in situations where the variable \(t \,\) plays the role of a spatial coordinate, \(y \,\ .\)

Standing waves

Figure 6: A standing wave\[\Re \left( \phi\left(x,t\right) \right)\ .\]

Standing waves occur when two traveling waves of equal amplitude and speed, but opposite direction, are superposed. The effect is that the wave amplitude varies with time but the wave does not move spatially. For example, consider two waves \(\phi_{1}\left(x,t\right)=\Phi_{1}\exp i\left(\omega t-kx\right)\) and \(\phi_{2}\left(x,t\right)=\Phi_{2}\exp i\left(\omega t+kx\right)\ ,\) where \(\phi_{1}\) moves to the right and \(\phi_{2}\) moves to the left. By definition we have \(\Phi_{2}=\Phi_{1}\ ,\) and by simple algebraic manipulation we obtain \[ \phi\left(x,t\right) = \phi_{1}\left(x,t\right)+\phi_{2}\left(x,t\right) ,\] \[ = \Phi_{1}\left[\exp i\left(\omega t-kx\right)+\exp i\left(\omega t+kx\right)\right] ,\] \[\tag{73} = 2\Phi_{1}\exp i\omega t\;\cos kx.\] A standing wave is illustrated in figures (6) and (7) by a plot of the real part of equation (73), i.e.
\(\Re \left( \phi\left(x,t\right) \right)=2\Phi_{1}\cos\omega t\cos kx\) with \(k=1\ ,\) \(\omega=1\) and \(\Phi_{1}=\frac{1}{2}\ .\)

Figure 7: Animated standing wave\[\Re \left( \phi\left(x,t\right) \right)\ .\]

The points at which \(\phi=0\) are called nodes and the points at which \(\left|\phi\right|\) attains its maximum value \(2\left|\Phi_{1}\right|\) are called antinodes. These points are fixed and occur at \(kx=\left(2n+1\right)\frac{ {\displaystyle \pi}}{{\displaystyle 2} }\) and \(kx=n\pi\) respectively \(\left(n=0,\pm1,\pm2,\cdots\right)\ .\) The existence of nonlinear standing waves can be demonstrated by application of Fourier analysis.

Waveguides

The idea of a waveguide is to constrain a wave so that its energy is directed along a specific path. The path may be fixed or capable of being varied to suit a particular application. The operation of a waveguide is analyzed by solving the appropriate wave equation, subject to the prevailing boundary conditions. There will be multiple solutions, or modes, which are determined by the eigenfunctions associated with the particular wave equation, and the velocity of the wave as it propagates along the waveguide will be determined by the eigenvalues of the solution.

• An electromagnetic waveguide is a physical structure, such as a hollow metal tube, solid dielectric rod or co-axial cable, that guides electromagnetic waves in the sub-optical (non-visible) electromagnetic spectrum.
• An optical waveguide is a physical structure, such as an optical fiber, that guides waves in the optical (visible) part of the electromagnetic spectrum.
• An acoustic waveguide is a physical structure, such as a hollow tube or duct (a speaking tube), that guides acoustic waves in the audible frequency range. Musical wind instruments, such as a flute, can also be thought of as acoustic waveguides.

For detailed analysis and further discussion refer to [Lio-03],[Oka-06].

Wave-fronts

Figure 8: Circular wave-fronts emanating from a point source.
As a wave propagates through a medium, the wave-front represents the outward normal surface formed by points in space, at a particular instant, as the wave travels outwards from its origin. One of the simplest forms of wave-front to envisage is an expanding circle whose radius \(r\) expands with velocity \(v\ ,\) i.e. \(r=vt\ .\) Simple circular sinusoidal wave-fronts propagating from a point source are shown in figure (8). They can be described by \[\tag{74} u\left(r,t\right)=\Re\left\{ \exp\left[i\left(kr-\omega t+\pi/2-\psi\right)\right]\right\} ,\] where \(k=\textrm{wavenumber}\ ,\) \(r=\textrm{radius}\ ,\) \(t=\textrm{time}\ ,\) \(\omega=\textrm{angular frequency}\ ,\) \(\psi=\textrm{phase angle}\ .\)

Figure 9: A snapshot from a simulation of the Indian Ocean tsunami that occurred on 26th December 2004, resulting from an earthquake off the west coast of Sumatra. The non-circular wave-fronts are clearly visible, which indicates curved rays. See animation here

Depending upon the particular wave equation and medium in which the wave travels, the wave-front may not appear to be an expanding circle. The path along which any point on the wave-front has traveled is called a ray, and this can be a straight line or, more likely, a curve in space. In general, the wave-front is perpendicular to the ray path, and the ray curvature will depend on the circumstances of the particular physical situation. For example, its curvature will be influenced by an anisotropic medium, refraction, diffraction, etc. Consider a water wave whose height is very much smaller than the water depth \(h\ .\) Its speed of propagation \(c\ ,\) or celerity, is given by \(c=\sqrt{gh}\ ;\) thus, for an ocean with varying depth the velocity will vary at different locations (refraction). This can result in waves having non-circular wave-fronts and hence curved rays.
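As a quick numerical illustration of equation (74), the sketch below evaluates the circular wave-front field and checks that a point of constant phase moves outward at the phase speed \(\omega/k\ .\) The values of \(k\ ,\) \(\omega\) and \(\psi\) are arbitrary choices for illustration.

```python
# Sketch of equation (74): the field of a circular wave-front, and a check
# that a point of constant phase moves outward at speed omega/k.
# k, omega, psi are illustrative values, not taken from the article.
import math

k, omega, psi = 2.0, 5.0, 0.0

def u(r, t):
    # Re{exp[i(kr - omega*t + pi/2 - psi)]}
    return math.cos(k * r - omega * t + math.pi / 2 - psi)

# A crest initially at radius r0 is found at r0 + (omega/k)*dt a time dt later:
r0, dt = 3.0, 0.4
assert abs(u(r0, 0.0) - u(r0 + (omega / k) * dt, dt)) < 1e-12
print(omega / k)   # phase speed of the expanding front
```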
Curved wave-fronts of this kind occur in many different applications; the situation is illustrated in figure (9), where the curvature is due to a combination of refraction, diffraction, reflection and a non-point disturbance.

Huygens' principle

Figure 10: Advancing envelope of wave-fronts \(\Phi_{q_{0}}\left(t\right)\ .\)

We can consider all points of a wave-front of light in a vacuum or transparent medium to be new sources of wavelets that expand in every direction at a rate depending on their velocities. This idea was originally proposed by the Dutch mathematician, physicist, and astronomer Christiaan Huygens in 1690, and is a powerful method for studying various optical phenomena [Enc-09]. Thus, the points on a wave can be viewed as each emitting a circular wave, and these combine to propagate the wave-front \(\Phi_{q_{0}}\left(t\right)\ .\) The wave-front can be thought of as an advancing line tangential to these circular waves - see figure (10). The points on a wave-front propagate from the wave source along so-called rays. Huygens' principle applies generally to wave-fronts, and the laws of reflection and refraction can both be derived from it. These results can also be obtained from Maxwell's equations. For detailed analysis and proof of Huygens' principle, refer to [Arn-91].

Shock waves

There are an extremely large number of types and forms of shock wave phenomena, and the following are representative of some subject areas where shocks occur:

• Fluid mechanics: Shocks result when a disturbance is made to move through a fluid faster than the speed of sound (the celerity) of the medium. This can occur when a solid object is forced through a fluid, for example in supersonic flight. The effect is that the states of the fluid (velocity, pressure, density, temperature, entropy) exhibit a sudden transition, according to the appropriate conservation laws, in order to adjust locally to the disturbance.
As the cause of the disturbance subsides, the shock wave energy is dissipated within the fluid and it reduces to a normal, subsonic, pressure wave. Note: a shock wave can result in local temperature increases of the fluid. This is a thermodynamic effect and should not be confused with heating due to friction.

• Mechanics: Bull whips can generate shocks as the oscillating wave progresses from the handle to the tip. Because the whip is tapered from handle to tip, conservation of energy dictates that the wave speed increases as the wave progresses along the flexible cord. The wave speed eventually exceeds the speed of sound, and a sharp crack is heard.

• Continuum mechanics: Shocks result from a sudden impact, earthquake, or explosion.

• Detonation: Shocks result from an extremely fast exothermic reaction. The expansion of the fluid, due to temperature and chemical changes, forces fluid velocities to reach supersonic speed, e.g. detonation of an explosive material such as TNT. But perhaps the most striking example would be the shock wave produced by a thermonuclear explosion.

• Medical applications: A non-invasive treatment for kidney or gall bladder stones whereby they can be removed by use of a technique called extracorporeal lithotripsy. This procedure uses a focused, high-intensity, acoustic shock wave to shatter the stones to the point where they are reduced in size such that they may be passed through the body in a natural way.

For further discussion relating to shock phenomena see ([Ben-00],[Whi-99]). We briefly introduce two topics below by way of example.
Blast Wave - Sedov-Taylor Detonation

Figure 11: Time-lapse photographs with distance scales (100 m) of the first atomic bomb explosion in the New Mexico desert - 5.29 A.M. on 16th July, 1945. Times from instant of detonation are indicated in the bottom left corner of each photograph (top first - left column: 0.006s, 0.016s; right column: 0.025s, 0.09s).

A blast wave can be analyzed from the following equations, \[\tag{75} \frac{\partial\rho}{\partial t}+v\frac{\partial\rho}{\partial r}+\rho\left(\frac{\partial v}{\partial r}+\frac{2v}{r}\right) = 0,\] \[\tag{76} \quad \frac{\partial v}{\partial t}+v\frac{\partial v}{\partial r}+\frac{1}{\rho}\frac{\partial p}{\partial r} = 0,\] \[\tag{77} \quad \frac{\partial\left(p/\rho^{\gamma}\right)}{\partial t}+v\frac{\partial\left(p/\rho^{\gamma}\right)}{\partial r} = 0,\] where \(\rho\ ,\) \(v\ ,\) \(p\ ,\) \(r\ ,\) \(t\) and \(\gamma\) represent density of the medium in which the blast takes place (air), velocity of the blast front, blast pressure, blast radius, time and isentropic exponent (ratio of specific heats) of the medium, respectively. Now, if we assume that:

• the blast can be considered to result from a point source of energy;
• the process is isentropic and the medium can be represented by the equation-of-state \(\left(\gamma-1\right)e=p/\rho\ ,\) where \(e\) represents internal energy;
• there is spherical symmetry;

then, after some analysis, similarity considerations lead to the following equation [Tay-50b] \[\tag{78} E=c{\displaystyle \frac{R^{5}\rho}{t^{2}}},\] where \(c\) is a similarity constant, \(R\) is the radius of the wave front and \(E\) is the total energy released by the explosion. Back in 1945, Sir Geoffrey Ingram Taylor was asked by the British MAUD (Military Application of Uranium Detonation) Committee to deduce information regarding the power of the first atomic explosion in New Mexico.
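Equation (78) can be exercised directly. In the sketch below the radius-time pair is illustrative only (of the same order of magnitude as the Trinity photographs, not Taylor's measured data), and the air density is an assumed value.

```python
# Equation (78) in action: estimating blast energy from one (radius, time)
# pair. R and t below are illustrative values of the same order as the
# Trinity photographs, NOT Taylor's actual measurements; rho is assumed.
import math

c_sim   = 0.856      # Taylor's similarity constant (see text)
rho_air = 1.25       # kg/m^3, assumed ambient air density
KT_TNT  = 4.184e12   # joules per kiloton of TNT

def blast_energy(R, t, c=c_sim, rho=rho_air):
    """Total released energy E = c * R^5 * rho / t^2 (equation 78)."""
    return c * R**5 * rho / t**2

E = blast_energy(R=140.0, t=0.025)   # ~140 m fireball radius at t = 25 ms
print(f"E = {E:.2e} J = {E / KT_TNT:.1f} kt TNT")
```

With these illustrative numbers the estimate lands in the low tens of kilotons, consistent with the \(16.8\) to \(22.9\) kt range quoted below.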
Taylor derived this result, which was based on his earlier classified work [Tay-41], and was able to estimate, using only photographs of the blast (released into the public domain in 1947), that the yield of the bomb was equivalent to between \(16.8\) and \(22.9\) kilotons of TNT for values of \(\gamma\) equal to \(1.4\) and \(1.3\) respectively. Crucially, each of these photographs contained a distance scale and a precise time, see figure (11). Taylor used a value for the similarity constant of \(c=0.856\ ,\) which he obtained by a step-by-step method. However, the correct analytical value for this constant was later shown to be \(0.8501\) [Sed-59]. This result was classified secret but, five years later, he published the details [Tay-50a],[Tay-50b], much to the consternation of the British government. J. von Neumann and L. I. Sedov published similar, independently derived, results [Bet-47],[Sed-46]. For further discussion relating to the theory refer to [Kam-00],[Deb-58].

Sonic boom

Figure 12: The N-wave sonic boom.

As an aircraft proceeds in smooth flight at a speed greater than the speed of sound - the sound barrier - a shock wave is formed, starting at its nose and finishing at its tail. The speed of sound is given by \(c=\sqrt{\gamma RT/MW}\ ,\) where \(\gamma\ ,\) \(R\ ,\) \(T\) and MW represent ratio of specific heats, universal gas constant, temperature and molecular weight respectively, and \(c\simeq330\)m/s at sea level for dry air at \(0^{o}\)C. The shock forms a high pressure, cone-shaped surface propagating with the aircraft. The half-angle \(\theta\) (between the direction of flight and the shock wave) is given by \(\sin\left(\theta\right)=1/M\ ,\) where \(M=v_{aircraft}/c\) is known as the Mach number of the aircraft. Clearly, as \(v_{aircraft}\) increases, the cone becomes more pointed (\(\, \theta\) becomes smaller).

Figure 13: The U-wave sonic boom.
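The speed-of-sound and Mach-angle formulas above are straightforward to evaluate. In this sketch the gas constants are standard values for dry air, and the Mach number is an illustrative choice.

```python
# Sketch: speed of sound c = sqrt(gamma*R*T/MW) and the Mach cone
# half-angle sin(theta) = 1/M. Gas constants are standard values for
# dry air; the Mach number below is an illustrative choice.
import math

gamma = 1.4          # ratio of specific heats for air
R     = 8.314        # J/(mol K), universal gas constant
MW    = 0.02897      # kg/mol, molecular weight of dry air

def speed_of_sound(T):
    """Speed of sound (m/s) at absolute temperature T (K)."""
    return math.sqrt(gamma * R * T / MW)

def mach_half_angle(M):
    """Half-angle (degrees) of the shock cone for Mach number M > 1."""
    return math.degrees(math.asin(1.0 / M))

c0 = speed_of_sound(273.15)            # dry air at 0 C
print(round(c0, 1))                    # close to the ~330 m/s quoted above
print(round(mach_half_angle(2.0), 1))  # Mach 2 gives a 30 degree half-angle
```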
As the aircraft continues under steady flight conditions at high speed, there will be an abrupt rise in pressure at the aircraft's nose, which falls towards the tail, where it becomes negative. This is the so-called N-wave [Nak-08] - a pressure wave measured at sufficient distance such that it has lost its fine structure, see figure (12). A sonic boom occurs when the abrupt changes in pressure are of sufficient magnitude. Thus, steady supersonic flight results in two booms: one resulting from the rapid rise in pressure at the nose, and another when the pressure returns to normal as the tail passes the point vacated by the nose. This is the cause of the distinctive double boom from supersonic aircraft. At ground level, typically \(10<P_{max}<500\)Pa and \(\tau\simeq0.001-0.005\)s. The duration \(T\) varies from around 100 ms for a fighter plane to 500 ms for the Space Shuttle or Concorde.

Figure 14: A USAF B1B makes a high speed pass at the Pensacola Beach airshow - Florida, July 12, 2002. Copyright © Gregg Stansbery, Stansbery Photography - reproduced with permission.

Another form of sonic boom is the focused boom, which can result from high-speed aircraft maneuvering operations. These produce so-called U-waves, which have positive shocks at the front and rear of the boom, see figure (13). Generally, U-waves result in higher peak over-pressures than N-waves - typically between 2 and 5 times higher. At ground level, typically \(20<P_{max}<2500\)Pa (although they can be much higher). The highest overpressure ever recorded was 6800 Pa [144 lbs/sq-ft] (source: USAF Fact Sheet 96-03). For further discussion related to sonic booms refer to [Kao-04]. As an aircraft passes through, or close to, the sound barrier, water vapor in the air is compressed by the shock wave and becomes visible as a large cloud of condensation droplets, formed as the air cools due to the low pressure at the tail. A smaller shock wave can also form on top of the canopy.
This phenomenon is illustrated in figure (14).

Solitary waves and solitons

The correct term for a wave which is localized and retains its form over a long period of time is solitary wave. A soliton is a solitary wave with the additional property that other solitons can pass through it without changing its shape. In the literature, however, it is customary to refer to the solitary wave as a soliton, although this is strictly incorrect [Tao-08].

Figure 15: Evolution of a two-soliton solution of the KdV equation. The image illustrates the collision of two solitons that are both moving from left to right. The faster (taller) soliton overtakes the slower (shorter) soliton.

Solitons are stable, nonlinear pulses which exhibit a fine balance between nonlinearity and dispersion. They often result from real physical phenomena that can be described by PDEs that are completely integrable, i.e. they can be solved exactly. Such PDEs describe shallow water waves, nonlinear optics, electrical network pulses, and many other applications that arise in mathematical physics. Where multiple solitons moving at different velocities occur within the same domain, collisions can take place with the unexpected phenomenon that, first, they combine, then the faster soliton emerges to proceed on its way. Both solitons then continue in the same direction and eventually reach a situation where their speeds and shapes are unchanged. Thus, we have a situation where a faster soliton can overtake a slower soliton. There are two effects that distinguish this phenomenon from what occurs in a linear wave system. The first is that the maximum height of the combined solitons is not equal to the sum of the individual soliton heights. The second is that, following the collision, there is a phase shift between the two solitons, i.e. the linear trajectory of each soliton before and after the collision is seen to be shifted horizontally - see figure (15).
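The taller-is-faster behavior seen in figure (15) can be read off from the single-soliton solution of the KdV equation \(u_{t}+6uu_{x}+u_{xxx}=0\ ,\) namely \(u(x,t)=\frac{c}{2}\,\mathrm{sech}^{2}\left[\frac{\sqrt{c}}{2}\left(x-ct\right)\right]\ ,\) whose amplitude \(c/2\) grows with its speed \(c\ .\) A minimal check, with arbitrary illustrative wave speeds:

```python
# Sketch: the single-soliton solution of the KdV equation
# u_t + 6 u u_x + u_xxx = 0, i.e. u(x,t) = (c/2) sech^2(sqrt(c)/2 (x - c t)).
# Amplitude equals c/2, so faster solitons are taller. The speeds below
# are arbitrary illustrative values.
import math

def kdv_soliton(x, t, c):
    return 0.5 * c / math.cosh(0.5 * math.sqrt(c) * (x - c * t)) ** 2

fast, slow = 2.0, 0.5
# Peak amplitude is c/2 for each soliton:
print(kdv_soliton(0.0, 0.0, fast), kdv_soliton(0.0, 0.0, slow))
# The profile translates rigidly at speed c (traveling-wave property):
assert abs(kdv_soliton(3.0, 0.0, fast)
           - kdv_soliton(3.0 + fast * 1.7, 1.7, fast)) < 1e-12
```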
Some additional discussion is given in section (The Korteweg-de Vries equation), and detailed technical overviews of the subject can be found in the works by Ablowitz & Clarkson [Abl-91], Drazin & Johnson [Dra-89] and Johnson [Joh-97]. Soliton theory is still an active area of research, and a discussion of the various types of soliton solution that are known is given by Gerdjikov & Kaup [Ger-05].

Soliton types

Solitons generally fall into three types:

• Humps (pulses) - The classic bell-shaped curves that are typically associated with soliton phenomena.
• Kinks - Solitons characterized by either a monotonic positive shift (kink) or a monotonic negative shift (anti-kink), where the change in value occurs gradually in the shape of an s-type curve.
• Breathers (bions) - Stationary or traveling soliton humps that oscillate: becoming positive, negative, positive and so on.

More details may be found in Drazin and Johnson [Dra-89].

Tsunami

The word tsunami is a Japanese term derived from the characters 津 (tsu) meaning harbor and 波 (nami) meaning wave. It is now generally accepted by the international scientific community to describe a series of traveling waves in water produced by the displacement of the sea floor associated with submarine earthquakes, volcanic eruptions, or landslides. They are also popularly, though inaccurately, known as tidal waves. Tsunami are usually preceded by a leading-depression N-wave (LDN), one in which the trough reaches the shoreline first. Eyewitnesses in Banda Aceh who observed the effects of the December 2004 Sumatra tsunami, see figure (9), resulting from a magnitude 9.3 seabed earthquake, described a series of three waves, beginning with a leading-depression N-wave [Bor-05]. Recent estimates indicate that this powerful tsunami resulted in excess of 275,000 deaths and extensive damage to property and infrastructure around the entire coastline of the Indian Ocean [Kun-07].
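The shallow-water celerity \(c=\sqrt{gh}\) introduced earlier gives a feel for tsunami propagation speeds. In this sketch \(g=9.81\) m/s\(^2\) is assumed, and the depths are representative values rather than data from any particular event.

```python
# Shallow-water celerity c = sqrt(g*h): a tsunami in the deep ocean
# travels at jet-airliner speeds but slows dramatically near shore.
# g = 9.81 m/s^2 is assumed; the depths are representative values.
import math

g = 9.81  # m/s^2

def celerity(depth_m):
    return math.sqrt(g * depth_m)

for h in (4000.0, 200.0, 10.0):   # open ocean, continental shelf, near shore
    c = celerity(h)
    print(f"h = {h:6.0f} m  ->  c = {c:5.1f} m/s ({c * 3.6:6.1f} km/h)")
```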
Tsunami are long-wave phenomena and, because the wavelengths of tsunami in the ocean are long with respect to water depth, they can be considered shallow water waves. Thus, \(c_{p}=c_{g}=\sqrt{gh}\) and, for a depth of 4 km, we see that the wave velocity is around 200 m/s. Hence, tsunami waves are often modelled using the shallow water equations, the Boussinesq equation, or other suitable equations that bring out in sufficient detail the required wave characteristics. However, one of the major challenges is to model shoreline inundation realistically, i.e. the effect of the wave when it encounters the shore - also known as run-up. As the wave approaches the shoreline, the water depth decreases sharply, resulting in a greatly increased surge of water at the point where the wave strikes land. This requires special modeling techniques, such as robust Riemann solvers [Tor-01],[Ran-06] or the level-set method [Set-99],[Osh-03], which can handle situations where dry regions become flooded and vice versa.

Acknowledgments

The authors would like to thank reviewers Prof. Andrei Polyanin and Dr. Alexei Zhurov for their positive and constructive comments.

References

• [Abl-91] Ablowitz, M. J. and P. A. Clarkson (1991), Solitons, Nonlinear Evolution Equations and Inverse Scattering, London Mathematical Society Lecture Notes 149, Cambridge University Press.
• [Arn-91] Arnold, V. I. (1991), Mathematical Methods of Classical Mechanics, 2nd Ed., Springer.
• [Bar-03] Barenblatt, G. I. (2003), Scaling, Cambridge University Press.
• [Ben-00] Ben-Dor, G. (Ed), O. Igra (Ed) and T. Elperin (Ed) (2000), Handbook of Shock Waves, 3 vols, Academic Press.
• [Bet-47] Bethe, H. A., K. Fuchs, J. O. Hirschfelder, J. L. Magee, R. E. Peierls and J. von Neumann (1947), Blast Wave, Los Alamos Scientific Laboratory Report LA-2000.
• [Bor-05] Borrero, J. C. (2005), Field Data and Satellite Imagery of Tsunami Effects in Banda Aceh, Science 10 June, 308, p. 1596.
• [Buc-83] Buckley, R.
(1985), Oscillations and Waves, Adam Hilger Ltd., Bristol and Boston.
• [Byn-84] Bynum, W. F., E. J. Browne and R. Porter, Eds. (1984), Dictionary of The History of Science, Princeton University Press.
• [Bur-93] Burden, R. L. and J. D. Faires (1993), Numerical Analysis, 5th Ed., PWS Publishing Company.
• [Caj-61] Cajori, F. (1961), A History of Mathematics, MacMillan.
• [Cia-88] Ciarlet, P. G. (1988), Mathematical Elasticity: Three-dimensional Elasticity, Volume 1, Elsevier.
• [Cla-89] Clarkson, P. A., A. S. Fokas and M. J. Ablowitz (1989), Hodograph transformations of linearizable partial differential equations, SIAM J. Appl. Mathematics, Vol. 49, No. 4, pp. 1188-1209.
• [Col-71] Collocott, T. C. (Ed.) (1971), Chambers Dictionary of Science and Technology, Chambers.
• [Cor-05] Cornejo-Perez, O. and H. C. Rosu (2005), Nonlinear Second Order ODE's: Factorizations and Particular Solutions, Progress of Theoretical Physics, 114-3, pp. 533-538.
• [Cou-62] Courant, R. and D. Hilbert (1962), Methods of Mathematical Physics - Vol II, Interscience Publishers.
• [Dai-06] Dai, H. H., E. G. Fan and X. G. Geng (2006), Periodic wave solutions of nonlinear equations by Hirota's bilinear method. Available on-line at: [1]
• [Deb-58] Deb Ray, G. (1958), An Exact Solution of a Spherical Blast Under Terrestrial Conditions, Proc. Natn. Inst. Sci. India, A 24, pp. 106-112.
• [Dis-08] DispersiveWiki (2008), an on-line collection of web pages concerned with the local and global well-posedness of various non-linear dispersive and wave equations. DispersiveWiki [2]
• [Dra-89] Drazin, P. G. and R. S. Johnson (1989), Solitons: an Introduction, Cambridge University Press.
• [Elm-69] Elmore, W. C. and M. A. Heald (1969), Physics of Waves, Dover.
• [Enc-09] Encyclopædia Britannica (2009), Encyclopædia Britannica On-line, [3]
• [Far-93] Farlow, S. J. (1993), Partial Differential Equations for Scientists and Engineers, Chapter 17, Dover Publications, New York.
• [Fow-05] Fowler, A. C.
(2005), Techniques of Applied Mathematics, Report of the Mathematical Institute, Oxford University.
• [Gal-06] Galaktionov, V. A. and S. R. Svirshchevskii (2006), Exact Solutions and Invariant Subspaces of Nonlinear Partial Differential Equations in Mechanics and Physics, Chapman & Hall/CRC Press, Boca Raton.
• [Ger-05] Gerdjikov, V. S. and D. Kaup (2005), How many types of soliton solutions do we know? Seventh International Conference on Geometry, Integrability and Quantization, June 2-10, Varna, Bulgaria. I. M. Mladenov and M. De Leon, Editors. SOFTEX, Sofia 2005, pp. 1-24.
• [Gil-82] Gill, A. E. (1982), Atmosphere-Ocean Dynamics, Academic Press.
• [God-54] Godunov, S. K. (1954), Ph.D. Dissertation: Different Methods for Shock Waves, Moscow State University.
• [God-59] Godunov, S. K. (1959), A Difference Scheme for Numerical Solution of Discontinuous Solution of Hydrodynamic Equations, Math. Sbornik, 47, pp. 271-306, translated US Joint Publ. Res. Service, JPRS 7226, 1969.
• [Gri-11] Griffiths, G. W. and W. E. Schiesser (2011), Traveling Wave Solutions of Partial Differential Equations: Numerical and Analytical Methods with Matlab and Maple, Academic Press; see also http://www.pdecomp.net/
• [Had-23] Hadamard, J. (1923), Lectures on Cauchy's Problem in Linear Partial Differential Equations, Dover.
• [Ham-07] Hamdi, S., W. E. Schiesser and G. W. Griffiths (2007), Method of Lines, Scholarpedia, 2(7):2859. Available on-line at Scholarpedia: [4]
• [He-06] He, J-H. and X-H. Wu (2006), Exp-function method for nonlinear wave equations, Chaos, Solitons & Fractals, Volume 30, Issue 3, November, pp. 700-708.
• [Her-05] Hereman, W. and W. Malfliet (2005), The Tanh Method: A Tool to Solve Nonlinear Partial Differential Equations with Symbolic Software, 9th World Multiconference on Systemics, Cybernetics, and Informatics (WMSCI 2005), Orlando, Florida, July 10-13, pp. 165-168.
• [Hir-88] Hirsch, C.
(1988), Numerical Computation of Internal and External Flows, Volume 1: Fundamentals of Numerical Discretization, Wiley.
• [Hir-90] Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley.
• [Ibr-94] Ibragimov, N. H. (1994-1995), CRC Handbook of Lie Group Analysis of Differential Equations, Volumes 1 & 2, CRC Press, Boca Raton.
• [Inf-00] Infield, E. and G. Rowlands (2000), Nonlinear Waves, Solitons and Chaos, 2nd Ed., Cambridge University Press.
• [Jag-06] de Jager, E. M. (2006), On the Origin of the Korteweg-de Vries Equation, arXiv e-print service. Available on-line at: arXiv.org [5]
• [Joh-97] Johnson, R. S. (1997), A Modern Introduction to the Mathematical Theory of Water Waves, Cambridge University Press.
• [Kao-04] Kaouri, K. (2004), PhD Thesis: Secondary Sonic Booms, Somerville College, Oxford University.
• [Kam-00] Kamm, J. R. (2000), Evaluation of the Sedov-von Neumann-Taylor Blast Wave Solution, Los Alamos National Laboratory Report LA-UR-00-6055.
• [Kar-98] Karigiannis, S. (1998), Minor Thesis: The inverse scattering transform and integrability of nonlinear evolution equations, University of Waterloo. Available on-line at: [6]
• [Kno-00] Knobel, R. A. (2000), An Introduction to the Mathematical Theory of Waves, American Mathematical Society.
• [Kor-95] Korteweg, D. J. and G. de Vries (1895), On the Change of Form of Long Waves Advancing in a Rectangular Canal, and on a New Type of Long Stationary Waves, Phil. Mag. 39, pp. 422-443.
• [Kre-04] Kreiss, H-O. and J. Lorenz (2004), Initial-Boundary Value Problems and the Navier-Stokes Equations, Society for Industrial and Applied Mathematics.
• [Krey-93] Kreyszig, E. (1993), Advanced Engineering Mathematics, 7th Ed., Wiley.
• [Kun-07] Kundu, A. (Ed.) (2007), Tsunami and Nonlinear Waves, Springer.
• [Lam-93] Lamb, Sir H. (1993), Hydrodynamics, 6th Ed., Cambridge University Press.
• [Lan-98] Laney, Culbert B.
(1998), Computational Gas Dynamics, Cambridge University Press.
• [Lev-02] LeVeque, R. J. (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University Press.
• [Lev-07] LeVeque, R. J. (2007), Finite Difference Methods for Ordinary and Partial Differential Equations, Society for Industrial and Applied Mathematics.
• [Lig-78] Lighthill, Sir James (1978), Waves in Fluids, Cambridge University Press.
• [Lio-03] Lioubtchenko, D., S. Tretyakov and S. Dudorov (2003), Millimeter-wave Waveguides, Springer.
• [Mal-92] Malfliet, W. (1992), Solitary wave solutions of nonlinear wave equations, Am. J. Physics, 60(7), pp. 650-654.
• [Mal-96a] Malfliet, W. and W. Hereman (1996a), The Tanh Method I - Exact Solutions of Nonlinear Evolution Wave Equations, Physica Scripta, 54, pp. 563-568.
• [Mal-96b] Malfliet, W. and W. Hereman (1996b), The Tanh Method II - Exact Solutions of Nonlinear Evolution Wave Equations, Physica Scripta, 54, pp. 569-575.
• [Mic-07] Microsoft Corporation (2007), Encarta® World English Dictionary [North American Edition], Bloomsbury Publishing Plc.
• [Mor-94] Morton, K. W. and D. F. Mayers (1994), Numerical Solution of Partial Differential Equations, Cambridge University Press.
• [Mur-02] Murray, J. D. (2002), Mathematical Biology I: An Introduction, 3rd Ed., Springer.
• [Mur-03] Murray, J. D. (2003), Mathematical Biology II: Spatial Models and Biomedical Applications, 3rd Ed., Springer.
• [Nak-08] Naka, Y., Y. Makino and T. Ito (2008), Experimental study on the effects of N-wave sonic-boom signatures on window vibration, pp. 6085-6090, Acoustics 08, Paris.
• [Oha-94] Ohanian, H. C. and R. Ruffini (1994), Gravitation and Spacetime, 2nd Ed., Norton.
• [Oka-06] Okamoto, K. (2006), Fundamentals of Optical Waveguides, Academic Press.
• [Osh-03] Osher, S. and R. Fedkiw (2003), Level Set Methods and Dynamic Implicit Surfaces, Springer.
• [Ost-94] Ostaszewski, A. (1994), Advanced Mathematical Methods, Cambridge University Press.
• [Pol-02] Polyanin, A. D., V. F. Zaitsev and A.
Moussiaux (2002), Handbook of First Order Partial Differential Equations, Taylor & Francis.
• [Pol-04] Polyanin, A. D. and V. F. Zaitsev (2004), Handbook of Nonlinear Partial Differential Equations, Chapman and Hall/CRC Press.
• [Pol-07] Polyanin, A. D. and A. V. Manzhirov (2007), Handbook of Mathematics for Engineers and Scientists, Chapman and Hall/CRC Press.
• [Pol-08] Polyanin, A. D., W. E. Schiesser and A. I. Zhurov (2008), Partial Differential Equation, Scholarpedia, 3(10):4605. Available on-line at Scholarpedia: [7]
• [Ran-06] Randall, D. L. and J. LeVeque (2006), Finite Volume Methods and Adaptive Refinement for Global Tsunami Propagation and Local Inundation, Science of Tsunami Hazards, 24(5), pp. 319-328.
• [Ros-88] Ross, J., S. C. Muller and C. Vidal (1988), Chemical Waves, Science, 240, pp. 460-465.
• [Sch-94] Schiesser, W. E. (1994), Computational Mathematics in Engineering and Applied Science, CRC Press.
• [Sch-09] Schiesser, W. E. and G. W. Griffiths (2009), A Compendium of Partial Differential Equation Models: Method of Lines Analysis with Matlab, Cambridge University Press; see also http://www.pdecomp.net/
• [Sco-44] Scott-Russell, J. (1844), Report on Waves, 14th Meeting of the British Association for the Advancement of Science, pp. 311-390, London.
• [Sco-48] Scott-Russell, J. (1848), On Certain Effects Produced on Sound by The Rapid Motion of The Observer, 18th Meeting of the British Association for the Advancement of Science, pp. 37-38, London.
• [Sed-46] Sedov, L. I. (1946), Propagation of strong shock waves, Journal of Applied Mathematics and Mechanics, 10, pp. 241-250.
• [Sed-59] Sedov, L. I. (1959), Similarity and Dimensional Methods in Mechanics, Academic Press, New York.
• [Set-99] Sethian, J. A. (1999), Level Set Methods and Fast Marching Methods, Cambridge University Press.
• [Sha-75] Shadowitz, A. (1975), The Electromagnetic Field, McGraw-Hill.
• [Shu-09] Shu, C-W.
(2009), High Order Weighted Essentially Non-oscillatory Schemes for Convection Dominated Problems, SIAM Review, Vol. 51, No. 1, pp. 82-126.
• [Stra-92] Strauss, W. A. (1992), Partial Differential Equations: An Introduction, Wiley.
• [Stre-97] Streeter, V., K. W. Bedford and E. B. Wylie (1997), Fluid Mechanics, 9th Ed., McGraw-Hill.
• [Tan-97] Tannehill, J. C., et al. (1997), Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
• [Tao-05] Tao, T. (2005), Nonlinear dispersive equations: local and global analysis, monograph based on (and greatly expanded from) a lecture series given at the NSF-CBMS regional conference on nonlinear and dispersive wave equations at New Mexico State University, held in June 2005. Available on-line at ucla.edu: [8]
• [Tao-08] Tao, T. (2008), Why are solitons stable?, arXiv:0802.2408v2 [math.AP]. Available on-line at arxiv.org: [9]
• [Tay-41] Taylor, Sir Geoffrey Ingram (1941), The formation of a blast wave by a very intense explosion, British Civil Defence Research Committee, Report RC-210.
• [Tay-50a] Taylor, Sir Geoffrey Ingram (1950), The Formation of a Blast Wave by a Very Intense Explosion. I. Theoretical Discussion, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Volume 201, Issue 1065, pp. 159-174.
• [Tay-50b] Taylor, Sir Geoffrey Ingram (1950), The Formation of a Blast Wave by a Very Intense Explosion. II. The Atomic Explosion of 1945, Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, Volume 201, Issue 1065, pp. 175-186.
• [Tor-99] Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
• [Tor-01] Toro, E. F. (2001), Shock-Capturing Methods for Free-Surface Shallow Flows, Wiley.
• [van-79] van Leer, B. (1979), Towards the ultimate conservative difference scheme V. A second order sequel to Godunov's method, J. Comput. Phys., Vol. 32, pp. 101-136.
• [Wes-01] Wesseling, P.
(2001), Principles of Computational Fluid Dynamics, Springer, Berlin. • [Whi-99] Whitham, G. B. (1999), Linear and Nonlinear Waves, Wiley. • [Zie-77] Zienkiewicz, O. (1977) The Finite Element Method in Engineering Science. McGraw-Hill. • [Zwi-97] Zwillinger, D. (1997), Handbook of Differential Equations, 3rd Ed., Academic Press. Internal references Personal tools Focal areas
From Wikipedia, the free encyclopedia

Hydrogen atom orbitals at different energy levels. The brighter areas are where one is most likely to find an electron at any given time.
Composition: Elementary particle[1]
Statistics: Fermionic
Generation: First
Interactions: Gravity, electromagnetic, weak
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838–1851),[2] G. Johnstone Stoney (1874) and others[3][4]
Discovered: J. J. Thomson (1897)[5]
Mass: 9.10938356(11)×10⁻³¹ kg[6], or 5.48579909070(16)×10⁻⁴ u[6] = [1822.8884845(14)]⁻¹ u,[note 1] equivalent to 0.5109989461(31) MeV/c²[6]
Mean lifetime: stable (> 6.6×10²⁸ yr[7])
Electric charge: −1 e,[note 2] i.e. −1.6021766208(98)×10⁻¹⁹ C[6] or −4.80320451(10)×10⁻¹⁰ esu
Magnetic moment: −1.00115965218091(26) μB[6]
Spin: 1/2
Weak isospin: LH: −1/2, RH: 0
Weak hypercharge: LH: −1, RH: −2

The electron is a subatomic particle, symbol e⁻, with a negative elementary electric charge.[8] Electrons belong to the first generation of the lepton particle family,[9] and are generally thought to be elementary particles because they have no known components or substructure.[1] The electron has a mass that is approximately 1/1836 that of the proton.[10] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[9] Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
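The last claim can be checked numerically: in the non-relativistic limit the de Broglie wavelength is λ = h/√(2mE), so at equal kinetic energy the wavelength scales as 1/√m. A short sketch (the 100 eV energy is an illustrative choice, not from the article):

```python
import math

# Physical constants (CODATA values, as quoted elsewhere in the article)
h = 6.62607015e-34          # Planck constant, J*s
eV = 1.602176634e-19        # 1 electronvolt in joules
m_electron = 9.10938356e-31 # electron mass, kg
m_proton = 1.672621898e-27  # proton mass, kg

def de_broglie_wavelength(mass_kg, kinetic_energy_eV):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    E = kinetic_energy_eV * eV
    return h / math.sqrt(2.0 * mass_kg * E)

# At the same kinetic energy (100 eV, typical for electron diffraction),
# the lighter electron has the longer wavelength.
lam_e = de_broglie_wavelength(m_electron, 100.0)
lam_p = de_broglie_wavelength(m_proton, 100.0)
print(f"electron: {lam_e:.3e} m")  # ~1.2e-10 m, comparable to atomic spacing
print(f"proton:   {lam_p:.3e} m")  # ~2.9e-12 m, about 43x shorter
```

The ratio of the two wavelengths is exactly √(m_p/m_e) ≈ 43, which is why electrons of modest energy diffract readily from crystal lattices.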
Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.[11] Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer it will generate a magnetic field. Electromagnetic fields produced from other sources (not those self-produced) will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without, allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[12] In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.[3] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. 
Thomson and his team of British physicists identified it as a particle in 1897.[5][13][14] Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity.[15] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus, to refer to this property of attracting small objects after being rubbed.[16] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron). In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity: one generated from rubbing glass, the other from rubbing resin. From this, Du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction, and that neutralize each other when combined.[17] American scientist Ebenezer Kinnersley later also independently reached the same conclusion.[18]:118 A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−).
He gave them the modern charge nomenclature of positive and negative respectively.[19] Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.[20] Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges.[2] Beginning in 1846, German physicist William Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis.[21] However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".[3] Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". 
A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron.[22][23] The word electron is a combination of the words electric and ion.[24] The suffix -on, which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.[25][26] A beam of electrons deflected in a circle by a magnetic field[27] Electron detected in an isopropanol cloud chamber The German physicist Johann Wilhelm Hittorf studied electrical conductivity in rarefied gases: in 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[28] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[29] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[30][31] In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[32] In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.[34] In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A.
Wilson,[13] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[5] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[5][14] He showed that their charge to mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[5][35] The name electron was again proposed for these particles by the Irish physicist George Johnstone Stoney, and the name has since gained universal acceptance. Robert Millikan While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[36] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[37] This evidence strengthened the view that electrons existed as components of atoms.[38][39] The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. 
Comparable experiments had been done earlier by Thomson's team,[5] using clouds of charged water droplets generated by electrolysis,[13] and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[40] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[41] Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.[42]

Atomic theory

The Bohr model of the atom, showing states of electron with energy quantized by the number n. An electron dropping to a lower orbit emits a photon equal to the energy difference between the orbits. By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[43] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies.
By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[44] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[43] Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[45] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[46] In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[47] In turn, he divided the shells into a number of cells, each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[46] which were known to largely repeat themselves according to the periodic law.[48] In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.[49] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[43][50] This is analogous to the rotation of the Earth on its axis as it orbits the Sun.
The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[51]

Quantum mechanics

In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light.[52] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[53] The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits, thereby creating interference patterns. In 1927, the interference effect was discovered in a beam of electrons by George Paget Thomson, who passed the beam through thin metal foils, and by American physicists Clinton Davisson and Lester Germer, who used the reflection of electrons from a crystal of nickel.[54] In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit. In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point. De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom.
In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[55] Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[56] Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.[57] In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron, the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[58] In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[59] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons, and using electron as a generic term to describe both the positively and negatively charged variants.
In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[60]

Particle accelerators

With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[61] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric.
This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.[62] With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[63] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[64] The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[65][66]

Confinement of individual electrons

Individual electrons can now be easily confined in ultra-small (L = 20 nm, W = 20 nm) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K).[67] The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor. Standard Model of elementary particles. The electron (symbol e) is on the left.

Fundamental properties

The invariant mass of an electron is approximately 9.109×10⁻³¹ kilograms,[70] or 5.489×10⁻⁴ atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[10][71] Astronomical measurements show that the proton-to-electron mass ratio has held the same value for at least half the age of the universe, as is predicted by the Standard Model.[72] Electrons have an electric charge of −1.602×10⁻¹⁹ coulomb,[70] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge.
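The rest energy and mass-ratio figures quoted above follow directly from mass–energy equivalence, E = mc²; a quick numerical check using the CODATA values from the infobox:

```python
# Electron rest energy from mass-energy equivalence, E = m*c^2
m_e = 9.10938356e-31    # electron mass, kg
m_p = 1.672621898e-27   # proton mass, kg
c = 299792458.0         # speed of light, m/s
eV = 1.6021766208e-19   # 1 eV in joules

rest_energy_MeV = m_e * c**2 / eV / 1e6
print(f"rest energy: {rest_energy_MeV:.4f} MeV")       # ~0.5110 MeV
print(f"proton/electron mass ratio: {m_p / m_e:.2f}")  # ~1836.15
```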
This elementary charge has a relative standard uncertainty of 2.2×10⁻⁸.[70] Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[73] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e⁻, where the minus sign indicates the negative charge. The positron is symbolized by e⁺ because it has the same properties as the electron but with a positive rather than negative charge.[69][70] The electron has an intrinsic angular momentum or spin of 1/2.[70] This property is usually stated by referring to the electron as a spin-1/2 particle.[69] For such particles the spin magnitude is (√3/2)ħ,[note 3] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[70] It is approximately equal to one Bohr magneton,[74][note 4] which is a physical constant equal to 9.27400915(23)×10⁻²⁴ joules per tesla.[70] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[75] The electron has no known substructure[1][76] and it is assumed to be a point particle with a point charge and no spatial extent.[9] In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties contrasts with experimental observations in Penning traps, which point to a finite, non-zero radius of the electron. A possible explanation of this paradoxical situation is given below in the "Virtual particles" subsection by taking into consideration the Foldy-Wouthuysen transformation. The issue of the radius of the electron is a challenging problem of modern theoretical physics.
The admission of the hypothesis of a finite radius of the electron is incompatible to the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[77] Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10−22 meters.[78] The upper bound of the electron radius of 10−18 meters[79] can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10−15 m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[80][note 5] There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10−6 seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[81] The experimental lower bound for the electron's mean lifetime is 6.6×1028 years, at a 90% confidence level.[7][82][83] Quantum properties[edit] A three dimensional projection of a two dimensional plot. 
Virtual particles
In a simplified picture, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other.[85] The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10−16 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10−21 s.[86] While an electron–positron virtual pair is in existence, the coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity greater than unity.
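The 1.3×10−21 s figure follows directly from the uncertainty relation above with ΔE set to the electron's rest energy. A quick sketch, using rounded constants (illustrative, not from the article):

```python
# Delta_E * Delta_t <= hbar, with Delta_E equal to the electron rest
# energy of ~511 keV, bounds the lifetime of a virtual electron.
hbar_eVs = 6.582e-16       # reduced Planck constant in eV*s (~6.6e-16 in the text)
rest_energy_eV = 0.511e6   # electron rest energy, eV

delta_t = hbar_eVs / rest_energy_eV
print(delta_t)  # ~1.29e-21 s, consistent with the 1.3e-21 s quoted in the text
```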
Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[87][88] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[89] Virtual particles cause a comparable shielding effect for the mass of the electron.[90] The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[74][91] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[92] The apparent paradox (mentioned above in the properties subsection) of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[93] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[9][94] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[87] An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law.[95]:58–61 When an electron is in motion, it generates a magnetic field.[84]:140 The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. 
This property of induction supplies the magnetic field that drives an electric motor.[96] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).[95]:429–434 A particle with charge q (at left) is moving with velocity v through a magnetic field B that is oriented toward the viewer. For an electron, q is negative so it follows a curved trajectory toward the top. When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[84]:160[97][note 6] The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.[98] Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force.[99] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton.
The acceleration of the electron results in the emission of Bremsstrahlung radiation.[100] Here, Bremsstrahlung is produced by an electron e deflected by the electric field of an atomic nucleus. The energy change E2 − E1 determines the frequency f of the emitted photon. An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 7] The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength.[101] For an electron, it has a value of 2.43×10−12 m.[70] When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.[102] The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10−3, which is approximately equal to 1/137.[70] When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons.
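Both values quoted above — the 2.43×10−12 m Compton wavelength and α ≈ 1/137 — can be checked numerically. A hedged sketch with rounded constants (not part of the original article):

```python
import math

h = 6.62607e-34      # Planck constant, J*s
hbar = h / (2 * math.pi)
m_e = 9.109384e-31   # electron mass, kg
c = 2.99792458e8     # speed of light, m/s
e = 1.602177e-19     # elementary charge, C
eps0 = 8.854188e-12  # vacuum permittivity, F/m

# Compton wavelength: lambda_C = h / (m_e * c), ~2.43e-12 m
lambda_C = h / (m_e * c)

# Fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c), ~7.297e-3
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(lambda_C)    # ~2.426e-12 m
print(1 / alpha)   # ~137.0
```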
If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[103][104] On the other hand, high-energy photons can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[105][106] In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z boson exchange, and this is responsible for neutrino-electron elastic scattering.[107]
Atoms and molecules
Probability densities for the first few hydrogen atom orbitals, seen in cross-section. The energy level of a bound electron determines the orbital it occupies, and the color reflects the probability of finding the electron at a given position. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[108] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[109] To escape the atom, the energy of the electron must be increased above its binding energy to the atom.
This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[110] The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[112] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[12] Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms.[113] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around the nuclei.[114] A lightning discharge consists primarily of a flow of electrons.[115] The electric potential needed for lightning can be generated by a triboelectric effect.[116][117] Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality, the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass.[119] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field.
These interactions are described mathematically by Maxwell's equations.[120] Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed.[123] This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.[124] Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law,[122] which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.[125] When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[126] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[127] However, the mechanism by which higher temperature superconductors operate remains uncertain. 
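The millimeter-per-second drift velocity mentioned at the start of this passage can be estimated from v_d = I/(nAe). The current, cross-section, and carrier density below are assumed illustrative values for a copper wire, not figures from the article:

```python
# Estimate of electron drift velocity in a conductor: v_d = I / (n * A * e).
n = 8.5e28     # assumed free-electron density of copper, m^-3
A = 1.0e-6     # assumed wire cross-section, m^2 (1 mm^2)
e = 1.602e-19  # elementary charge, C
I = 10.0       # assumed current, A

v_d = I / (n * A * e)
print(v_d)  # ~7e-4 m/s, i.e. a fraction of a millimeter per second
```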
Electrons inside conducting solids, which are quasi-particles themselves, behave as though they had split into three other quasiparticles—spinons, orbitons and holons—when tightly confined at temperatures close to absolute zero.[128][129] The spinon carries the spin and magnetic moment, the orbiton carries the orbital location, and the holon carries the electrical charge.
Motion and energy
The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle and c is the speed of light. The kinetic energy Ke of an electron moving with velocity v is: Ke = (γ − 1)mec², where me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[131] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[52] For the 51 GeV electron above, the wavelength is about 2.4×10−17 m, small enough to explore structures well below the size of an atomic nucleus.[132] Pair production caused by the collision of a photon with an atomic nucleus. The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[133] For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons: γ + γ ↔ e+ + e−. An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe.
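The Lorentz factor, relativistic kinetic energy, and the 2.4×10−17 m wavelength of a 51 GeV electron quoted above can be sketched as follows (illustrative, with rounded constants):

```python
import math

m_e = 9.109e-31  # electron mass, kg
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # joules per electronvolt

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2)"""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def kinetic_energy(v):
    """Relativistic kinetic energy K_e = (gamma - 1) * m_e * c^2."""
    return (lorentz_factor(v) - 1.0) * m_e * c**2

# For an ultrarelativistic 51 GeV electron, p ~ E/c,
# so the de Broglie wavelength is lambda = h/p ~ h*c/E.
E = 51e9 * eV
lambda_dB = h * c / E
print(lambda_dB)  # ~2.4e-17 m, as quoted in the text
```

At low speeds the relativistic expression reduces to the familiar ½mv², which is a useful sanity check on the formula.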
After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.[134] For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[135][136] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[137] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process: n → p + e− + ν̄e. For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.[138] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[139] Roughly one million years after the Big Bang, the first generation of stars began to form.[139] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes.
Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[140] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni). An extended air shower generated by an energetic cosmic ray striking the Earth's atmosphere. At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[142] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants. When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might cause one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[143] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[144] Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×1020 eV have been recorded.[145] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[146] More than half of the cosmic radiation observed from the Earth's surface consists of muons.
The muon is a lepton produced in the upper atmosphere by the decay of a pion. A muon, in turn, can decay to form an electron or positron.[147] Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[148] Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillations, which are waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.[149] The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[150][151] In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[110] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties.
For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[152] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[153] The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[156]
Plasma applications
Particle beams
During a NASA wind tunnel test, a model of the Space Shuttle is targeted by a beam of electrons, simulating the effect of ionizing gases during re-entry.[157] Electron beams are used in welding.[158] They allow energy densities up to 107 W·cm−2 across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[159][160] Electron-beam lithography (EBL) is a method of patterning semiconductors at resolutions smaller than a micrometer.[161] This technique is limited by high costs, slow performance, the need to operate the beam in vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm.
For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[162] Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products.[163] Under intense irradiation, electron beams can fluidise or quasi-melt glasses without a significant increase in temperature: intense electron radiation causes a decrease of viscosity by many orders of magnitude and a stepwise decrease of its activation energy.[164] Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[165][166] Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect.[note 8] Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.[167] Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material.
The required energy of the electrons is typically in the range 20–200 eV.[168] The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[169][170] The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[171] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[172] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[173] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[174] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain. Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types.
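The 0.0037 nm figure above can be reproduced from the relativistic de Broglie relation. A quick check, using rounded constants (not part of the original article):

```python
import math

m_e = 9.109e-31  # electron mass, kg
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # joules per electronvolt

# For electrons accelerated through potential V, the kinetic energy is
# K = e*V and the relativistic momentum satisfies (p*c)^2 = K^2 + 2*K*m_e*c^2.
K = 100e3 * eV        # 100,000-volt accelerating potential
mc2 = m_e * c**2      # electron rest energy
p = math.sqrt(K**2 + 2 * K * mc2) / c
lambda_e = h / p
print(lambda_e)  # ~3.7e-12 m = 0.0037 nm, matching the text
```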
The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.[175][176][177]
Other applications
In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.[178] Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[179] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[180] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[181]
See also
1. ^ The fractional version's denominator is the inverse of the decimal value (along with its relative standard uncertainty of 4.2×10−13 u). 2. ^ The electron's charge is the negative of elementary charge, which has a positive value for the proton. 3. ^ This magnitude is obtained from the spin quantum number as |S| = √(s(s + 1))·ħ for quantum number s = 1/2. See: Gupta, M.C. (2001). Atomic and Molecular Spectroscopy. New Age Publishers. p. 81. ISBN 81-224-1300-5. 4. ^ Bohr magneton: μB = eħ/(2me). 5. ^ The classical electron radius is derived as follows.
Assume that the electron's charge is spread uniformly throughout a spherical volume. Since one part of the sphere would repel the other parts, the sphere contains electrostatic potential energy. This energy is assumed to equal the electron's rest energy, defined by special relativity (E = mc²). From electrostatics theory, the potential energy of a sphere with radius r and charge e is given by: Ep = e²/(8πε0r), where ε0 is the vacuum permittivity. For an electron with rest mass m0, the rest energy is equal to: E = m0c², where c is the speed of light in a vacuum. Setting them equal and solving for r gives the classical electron radius. See: Haken, H.; Wolf, H.C.; Brewer, W.D. (2005). The Physics of Atoms and Quanta: Introduction to Experiments and Theory. Springer. p. 70. ISBN 3-540-67274-5. 6. ^ Radiation from non-relativistic electrons is sometimes termed cyclotron radiation. 7. ^ The change in wavelength, Δλ, depends on the angle of the recoil, θ, as follows: Δλ = (h/mec)(1 − cos θ), where c is the speed of light in a vacuum and me is the electron mass. See Zombeck (2007: 393, 396). 8. ^ The polarization of an electron beam means that the spins of all electrons point into one direction. In other words, the projections of the spins of all electrons onto their momentum vector have the same sign. 1. ^ a b c Eichten, E.J.; Lane, K.D.; Peskin, M.E. (1983). "New Tests for Quark and Lepton Substructure". Physical Review Letters. 50 (11): 811–814. Bibcode:1983PhRvL..50..811E. doi:10.1103/PhysRevLett.50.811. 2. ^ a b Farrar, W.V. (1969). "Richard Laming and the Coal-Gas Industry, with His Views on the Structure of Matter". Annals of Science. 25 (3): 243–254. doi:10.1080/00033796900200141. 3. ^ a b c Arabatzis, T. (2006). Representing Electrons: A Biographical Approach to Theoretical Entities. University of Chicago Press. pp. 70–74. ISBN 0-226-02421-0. 4. ^ Buchwald, J.Z.; Warwick, A. (2001). Histories of the Electron: The Birth of Microphysics. MIT Press. pp. 195–203. ISBN 0-262-52424-4. 5. ^ a b c d e f Thomson, J.J.
(1897). "Cathode Rays". Philosophical Magazine. 44 (269): 293–316. doi:10.1080/14786449708621070. 6. ^ a b c d e P.J. Mohr, B.N. Taylor, and D.B. Newell, "The 2014 CODATA Recommended Values of the Fundamental Physical Constants". This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: [1]. National Institute of Standards and Technology, Gaithersburg, MD 20899. 7. ^ a b Agostini M. et al. (Borexino Coll.) (2015). "Test of Electric Charge Conservation with Borexino". Physical Review Letters. 115 (23): 231802. Bibcode:2015PhRvL.115w1802A. PMID 26684111. arXiv:1509.01223. doi:10.1103/PhysRevLett.115.231802. 8. ^ "JERRY COFF". Retrieved 10 September 2010. 9. ^ a b c d Curtis, L.J. (2003). Atomic Structure and Lifetimes: A Conceptual Approach. Cambridge University Press. p. 74. ISBN 0-521-53635-9. 10. ^ a b "CODATA value: proton-electron mass ratio". 2006 CODATA recommended values. National Institute of Standards and Technology. Retrieved 2009-07-18. 11. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 236–237. ISBN 0-691-13512-6. 12. ^ a b Pauling, L.C. (1960). The Nature of the Chemical Bond and the Structure of Molecules and Crystals: an introduction to modern structural chemistry (3rd ed.). Cornell University Press. pp. 4–10. ISBN 0-8014-0333-2. 13. ^ a b c Dahl (1997:122–185). 14. ^ a b Wilson, R. (1997). Astronomy Through the Ages: The Story of the Human Attempt to Understand the Universe. CRC Press. p. 138. ISBN 0-7484-0748-0. 15. ^ Shipley, J.T. (1945). Dictionary of Word Origins. The Philosophical Library. p. 133. ISBN 0-88029-751-4. 16. ^ Baigrie, B. (2006). Electricity and Magnetism: A Historical Perspective. Greenwood Press. pp. 7–8. ISBN 0-313-33358-0. 17. ^ Keithley, J.F. (1999). The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s. IEEE Press. pp. 15, 20. ISBN 0-7803-1193-0. 18.
^ Florian Cajori (1917). A History of Physics in Its Elementary Branches: Including the Evolution of Physical Laboratories. Macmillan.  19. ^ "Benjamin Franklin (1706–1790)". Eric Weisstein's World of Biography. Wolfram Research. Retrieved 2010-12-16.  20. ^ Myers, R.L. (2006). The Basics of Physics. Greenwood Publishing Group. p. 242. ISBN 0-313-32857-9.  21. ^ Barrow, J.D. (1983). "Natural Units Before Planck". Quarterly Journal of the Royal Astronomical Society. 24: 24–26. Bibcode:1983QJRAS..24...24B.  22. ^ Sōgo Okamura (1994). History of Electron Tubes. IOS Press. p. 11. ISBN 978-90-5199-145-1. Retrieved 29 May 2015. In 1881, Stoney named this electromagnetic 'electrolion'. It came to be called 'electron' from 1891. [...] In 1906, the suggestion to call cathode ray particles 'electrions' was brought up but through the opinion of Lorentz of Holland 'electrons' came to be widely used.  23. ^ Stoney, G.J. (1894). "Of the "Electron," or Atom of Electricity". Philosophical Magazine. 38 (5): 418–420. doi:10.1080/14786449408620653.  24. ^ "electron, n.2". OED Online. March 2013. Oxford University Press. Accessed 12 April 2013 [2] 25. ^ Soukhanov, A.H. ed. (1986). Word Mysteries & Histories. Houghton Mifflin Company. p. 73. ISBN 0-395-40265-4.  26. ^ Guralnik, D.B. ed. (1970). Webster's New World Dictionary. Prentice Hall. p. 450.  27. ^ Born, M.; Blin-Stoyle, R.J.; Radcliffe, J.M. (1989). Atomic Physics. Courier Dover. p. 26. ISBN 0-486-65984-4.  28. ^ Dahl (1997:55–58). 29. ^ DeKosky, R.K. (1983). "William Crookes and the quest for absolute vacuum in the 1870s". Annals of Science. 40 (1): 1–18. doi:10.1080/00033798300200101.  30. ^ a b Leicester, H.M. (1971). The Historical Background of Chemistry. Courier Dover. pp. 221–222. ISBN 0-486-61053-5.  31. ^ Dahl (1997:64–78). 32. ^ Zeeman, P.; Zeeman, P. (1907). "Sir William Crookes, F.R.S". Nature. 77 (1984): 1–3. Bibcode:1907Natur..77....1C. doi:10.1038/077001a0.  33. ^ Dahl (1997:99). 34. 
^ Frank Wilczek: "Happy Birthday, Electron" Scientific American, June 2012. 35. ^ Thomson, J.J. (1906). "Nobel Lecture: Carriers of Negative Electricity" (PDF). The Nobel Foundation. Retrieved 2008-08-25.  36. ^ Trenn, T.J. (1976). "Rutherford on the Alpha-Beta-Gamma Classification of Radioactive Rays". Isis. 67 (1): 61–75. JSTOR 231134. doi:10.1086/351545.  37. ^ Becquerel, H. (1900). "Déviation du Rayonnement du Radium dans un Champ Électrique". Comptes rendus de l'Académie des sciences (in French). 130: 809–815.  38. ^ Buchwald and Warwick (2001:90–91). 39. ^ Myers, W.G. (1976). "Becquerel's Discovery of Radioactivity in 1896". Journal of Nuclear Medicine. 17 (7): 579–582. PMID 775027.  40. ^ Kikoin, I.K.; Sominskiĭ, I.S. (1961). "Abram Fedorovich Ioffe (on his eightieth birthday)". Soviet Physics Uspekhi. 3 (5): 798–809. Bibcode:1961SvPhU...3..798K. doi:10.1070/PU1961v003n05ABEH005812.  Original publication in Russian: Кикоин, И.К.; Соминский, М.С. (1960). "Академик А.Ф. Иоффе" (PDF). Успехи Физических Наук. 72 (10): 303–321.  41. ^ Millikan, R.A. (1911). "The Isolation of an Ion, a Precision Measurement of its Charge, and the Correction of Stokes' Law". Physical Review. 32 (2): 349–397. Bibcode:1911PhRvI..32..349M. doi:10.1103/PhysRevSeriesI.32.349.  42. ^ Das Gupta, N.N.; Ghosh, S.K. (1999). "A Report on the Wilson Cloud Chamber and Its Applications in Physics". Reviews of Modern Physics. 18 (2): 225–290. Bibcode:1946RvMP...18..225G. doi:10.1103/RevModPhys.18.225.  43. ^ a b c Smirnov, B.M. (2003). Physics of Atoms and Ions. Springer. pp. 14–21. ISBN 0-387-95550-X.  44. ^ Bohr, N. (1922). "Nobel Lecture: The Structure of the Atom" (PDF). The Nobel Foundation. Retrieved 2008-12-03.  45. ^ Lewis, G.N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society. 38 (4): 762–786. doi:10.1021/ja02261a002.  46. ^ a b Arabatzis, T.; Gavroglu, K. (1997). "The chemists' electron". European Journal of Physics. 18 (3): 150–163. 
Bibcode:1997EJPh...18..150A. doi:10.1088/0143-0807/18/3/005.  47. ^ Langmuir, I. (1919). "The Arrangement of Electrons in Atoms and Molecules". Journal of the American Chemical Society. 41 (6): 868–934. doi:10.1021/ja02227a002.  48. ^ Scerri, E.R. (2007). The Periodic Table. Oxford University Press. pp. 205–226. ISBN 0-19-530573-6.  49. ^ Massimi, M. (2005). Pauli's Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8. ISBN 0-521-83911-4.  50. ^ Uhlenbeck, G.E.; Goudsmith, S. (1925). "Ersetzung der Hypothese vom unmechanischen Zwang durch eine Forderung bezüglich des inneren Verhaltens jedes einzelnen Elektrons". Die Naturwissenschaften (in German). 13 (47): 953–954. Bibcode:1925NW.....13..953E. doi:10.1007/BF01558878.  51. ^ Pauli, W. (1923). "Über die Gesetzmäßigkeiten des anomalen Zeemaneffektes". Zeitschrift für Physik (in German). 16 (1): 155–164. Bibcode:1923ZPhy...16..155P. doi:10.1007/BF01327386.  52. ^ a b de Broglie, L. (1929). "Nobel Lecture: The Wave Nature of the Electron" (PDF). The Nobel Foundation. Retrieved 2008-08-30.  53. ^ Falkenburg, B. (2007). Particle Metaphysics: A Critical Account of Subatomic Reality. Springer. p. 85. ISBN 3-540-33731-8.  54. ^ Davisson, C. (1937). "Nobel Lecture: The Discovery of Electron Waves" (PDF). The Nobel Foundation. Retrieved 2008-08-30.  55. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem". Annalen der Physik (in German). 385 (13): 437–490. Bibcode:1926AnP...385..437S. doi:10.1002/andp.19263851302.  56. ^ Rigden, J.S. (2003). Hydrogen. Harvard University Press. pp. 59–86. ISBN 0-674-01252-6.  57. ^ Reed, B.C. (2007). Quantum Mechanics. Jones & Bartlett Publishers. pp. 275–350. ISBN 0-7637-4451-4.  58. ^ Dirac, P.A.M. (1928). "The Quantum Theory of the Electron". Proceedings of the Royal Society A. 117 (778): 610–624. Bibcode:1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023.  59. ^ Dirac, P.A.M. (1933). 
"Nobel Lecture: Theory of Electrons and Positrons" (PDF). The Nobel Foundation. Retrieved 2008-11-01.  60. ^ "The Nobel Prize in Physics 1965". The Nobel Foundation. Retrieved 2008-11-04.  61. ^ Panofsky, W.K.H. (1997). "The Evolution of Particle Accelerators & Colliders" (PDF). Beam Line. Stanford University. 27 (1): 36–44. Retrieved 2008-09-15.  62. ^ Elder, F.R.; et al. (1947). "Radiation from Electrons in a Synchrotron". Physical Review. 71 (11): 829–830. Bibcode:1947PhRv...71..829E. doi:10.1103/PhysRev.71.829.5.  63. ^ Hoddeson, L.; et al. (1997). The Rise of the Standard Model: Particle Physics in the 1960s and 1970s. Cambridge University Press. pp. 25–26. ISBN 0-521-57816-7.  64. ^ Bernardini, C. (2004). "AdA: The First Electron–Positron Collider". Physics in Perspective. 6 (2): 156–183. Bibcode:2004PhP.....6..156B. doi:10.1007/s00016-003-0202-y.  65. ^ "Testing the Standard Model: The LEP experiments". CERN. 2008. Retrieved 2008-09-15.  66. ^ "LEP reaps a final harvest". CERN Courier. 40 (10). 2000.  67. ^ Prati, E.; De Michielis, M.; Belli, M.; Cocco, S.; Fanciulli, M.; Kotekar-Patil, D.; Ruoff, M.; Kern, D. P.; Wharam, D. A.; Verduijn, J.; Tettamanzi, G. C.; Rogge, S.; Roche, B.; Wacquez, R.; Jehl, X.; Vinet, M.; Sanquer, M. (2012). "Few electron limit of n-type metal oxide semiconductor single electron transistors". Nanotechnology. 23 (21): 215204. Bibcode:2012Nanot..23u5204P. PMID 22552118. arXiv:1203.4811Freely accessible. doi:10.1088/0957-4484/23/21/215204.  68. ^ Frampton, P.H.; Hung, P.Q.; Sher, Marc (2000). "Quarks and Leptons Beyond the Third Generation". Physics Reports. 330 (5–6): 263–348. Bibcode:2000PhR...330..263F. arXiv:hep-ph/9903387Freely accessible. doi:10.1016/S0370-1573(99)00095-2.  69. ^ a b c Raith, W.; Mulvey, T. (2001). Constituents of Matter: Atoms, Molecules, Nuclei and Particles. CRC Press. pp. 777–781. ISBN 0-8493-1202-7.  70. ^ a b c d e f g h i The original source for CODATA is Mohr, P.J.; Taylor, B.N.; Newell, D.B. (2006). 
"CODATA recommended values of the fundamental physical constants". Reviews of Modern Physics. 80 (2): 633–730. Bibcode:2008RvMP...80..633M. arXiv:0801.0028Freely accessible. doi:10.1103/RevModPhys.80.633.  Individual physical constants from the CODATA are available at: "The NIST Reference on Constants, Units and Uncertainty". National Institute of Standards and Technology. Retrieved 2009-01-15.  71. ^ Zombeck, M.V. (2007). Handbook of Space Astronomy and Astrophysics (3rd ed.). Cambridge University Press. p. 14. ISBN 0-521-78242-2.  72. ^ Murphy, M.T.; et al. (2008). "Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe". Science. 320 (5883): 1611–1613. Bibcode:2008Sci...320.1611M. PMID 18566280. arXiv:0806.3081Freely accessible. doi:10.1126/science.1156352.  73. ^ Zorn, J.C.; Chamberlain, G.E.; Hughes, V.W. (1963). "Experimental Limits for the Electron-Proton Charge Difference and for the Charge of the Neutron". Physical Review. 129 (6): 2566–2576. Bibcode:1963PhRv..129.2566Z. doi:10.1103/PhysRev.129.2566.  74. ^ a b Odom, B.; et al. (2006). "New Measurement of the Electron Magnetic Moment Using a One-Electron Quantum Cyclotron". Physical Review Letters. 97 (3): 030801. Bibcode:2006PhRvL..97c0801O. PMID 16907490. doi:10.1103/PhysRevLett.97.030801.  75. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 261–262. ISBN 0-691-13512-6.  76. ^ Gabrielse, G.; et al. (2006). "New Determination of the Fine Structure Constant from the Electron g Value and QED". Physical Review Letters. 97 (3): 030802(1–4). Bibcode:2006PhRvL..97c0802G. doi:10.1103/PhysRevLett.97.030802.  77. ^ Eduard Shpolsky, Atomic physics (Atomnaia fizika), second edition, 1951 78. ^ Dehmelt, H. (1988). "A Single Atomic Particle Forever Floating at Rest in Free Space: New Value for Electron Radius". Physica Scripta. T22: 102–10. Bibcode:1988PhST...22..102D. 
doi:10.1088/0031-8949/1988/T22/016.  79. ^ Gerald Gabrielse webpage at Harvard University 80. ^ Meschede, D. (2004). Optics, light and lasers: The Practical Approach to Modern Aspects of Photonics and Laser Physics. Wiley-VCH. p. 168. ISBN 3-527-40364-7.  81. ^ Steinberg, R.I.; et al. (1999). "Experimental test of charge conservation and the stability of the electron". Physical Review D. 61 (2): 2582–2586. Bibcode:1975PhRvD..12.2582S. doi:10.1103/PhysRevD.12.2582.  82. ^ J. Beringer (Particle Data Group); et al. (2012). "Review of Particle Physics: [electron properties]" (PDF). Physical Review D. 86 (1): 010001. Bibcode:2012PhRvD..86a0001B. doi:10.1103/PhysRevD.86.010001.  83. ^ Back, H. O.; et al. (2002). "Search for electron decay mode e → γ + ν with prototype of Borexino detector". Physics Letters B. 525: 29–40. Bibcode:2002PhLB..525...29B. doi:10.1016/S0370-2693(01)01440-X.  84. ^ a b c d e Munowitz, M. (2005). Knowing, The Nature of Physical Law. Oxford University Press. ISBN 0-19-516737-6.  85. ^ Kane, G. (October 9, 2006). "Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics?". Scientific American. Retrieved 2008-09-19.  86. ^ Taylor, J. (1989). "Gauge Theories in Particle Physics". In Davies, Paul. The New Physics. Cambridge University Press. p. 464. ISBN 0-521-43831-4.  87. ^ a b Genz, H. (2001). Nothingness: The Science of Empty Space. Da Capo Press. pp. 241–243, 245–247. ISBN 0-7382-0610-5.  88. ^ Gribbin, J. (January 25, 1997). "More to electrons than meets the eye". New Scientist. Retrieved 2008-09-17.  89. ^ Levine, I.; et al. (1997). "Measurement of the Electromagnetic Coupling at Large Momentum Transfer". Physical Review Letters. 78 (3): 424–427. Bibcode:1997PhRvL..78..424L. doi:10.1103/PhysRevLett.78.424.  90. ^ Murayama, H. (March 10–17, 2006). Supersymmetry Breaking Made Easy, Viable and Generic. 
Proceedings of the XLIInd Rencontres de Moriond on Electroweak Interactions and Unified Theories. La Thuile, Italy. Bibcode:2007arXiv0709.3041M. arXiv:0709.3041Freely accessible. —lists a 9% mass difference for an electron that is the size of the Planck distance. 91. ^ Schwinger, J. (1948). "On Quantum-Electrodynamics and the Magnetic Moment of the Electron". Physical Review. 73 (4): 416–417. Bibcode:1948PhRv...73..416S. doi:10.1103/PhysRev.73.416.  92. ^ Huang, K. (2007). Fundamental Forces of Nature: The Story of Gauge Fields. World Scientific. pp. 123–125. ISBN 981-270-645-3.  93. ^ Foldy, L.L.; Wouthuysen, S. (1950). "On the Dirac Theory of Spin 1/2 Particles and Its Non-Relativistic Limit". Physical Review. 78: 29–36. Bibcode:1950PhRv...78...29F. doi:10.1103/PhysRev.78.29.  94. ^ Sidharth, B.G. (2008). "Revisiting Zitterbewegung". International Journal of Theoretical Physics. 48 (2): 497–506. Bibcode:2009IJTP...48..497S. arXiv:0806.0985Freely accessible. doi:10.1007/s10773-008-9825-8.  95. ^ a b Griffiths, David J. (1998), Introduction to Electrodynamics (3rd ed.), Prentice Hall, ISBN 0-13-805326-X  96. ^ Crowell, B. (2000). Electricity and Magnetism. Light and Matter. pp. 129–152. ISBN 0-9704670-4-4.  97. ^ Mahadevan, R.; Narayan, R.; Yi, I. (1996). "Harmony in Electrons: Cyclotron and Synchrotron Emission by Thermal Electrons in a Magnetic Field". The Astrophysical Journal. 465: 327–337. Bibcode:1996ApJ...465..327M. arXiv:astro-ph/9601073Freely accessible. doi:10.1086/177422.  98. ^ Rohrlich, F. (1999). "The Self-Force and Radiation Reaction". American Journal of Physics. 68 (12): 1109–1112. Bibcode:2000AmJPh..68.1109R. doi:10.1119/1.1286430.  99. ^ Georgi, H. (1989). "Grand Unified Theories". In Davies, Paul. The New Physics. Cambridge University Press. p. 427. ISBN 0-521-43831-4.  100. ^ Blumenthal, G.J.; Gould, R. (1970). "Bremsstrahlung, Synchrotron Radiation, and Compton Scattering of High-Energy Electrons Traversing Dilute Gases". 
Reviews of Modern Physics. 42 (2): 237–270. Bibcode:1970RvMP...42..237B. doi:10.1103/RevModPhys.42.237.  101. ^ Staff (2008). "The Nobel Prize in Physics 1927". The Nobel Foundation. Retrieved 2008-09-28.  102. ^ Chen, S.-Y.; Maksimchuk, A.; Umstadter, D. (1998). "Experimental observation of relativistic nonlinear Thomson scattering". Nature. 396 (6712): 653–655. Bibcode:1998Natur.396..653C. arXiv:physics/9810036Freely accessible. doi:10.1038/25303.  103. ^ Beringer, R.; Montgomery, C.G. (1942). "The Angular Distribution of Positron Annihilation Radiation". Physical Review. 61 (5–6): 222–224. Bibcode:1942PhRv...61..222B. doi:10.1103/PhysRev.61.222.  104. ^ Buffa, A. (2000). College Physics (4th ed.). Prentice Hall. p. 888. ISBN 0-13-082444-5.  105. ^ Eichler, J. (2005). "Electron–positron pair production in relativistic ion–atom collisions". Physics Letters A. 347 (1–3): 67–72. Bibcode:2005PhLA..347...67E. doi:10.1016/j.physleta.2005.06.105.  106. ^ Hubbell, J.H. (2006). "Electron positron pair production by photons: A historical overview". Radiation Physics and Chemistry. 75 (6): 614–623. Bibcode:2006RaPC...75..614H. doi:10.1016/j.radphyschem.2005.10.008.  107. ^ Quigg, C. (June 4–30, 2000). The Electroweak Theory. TASI 2000: Flavor Physics for the Millennium. Boulder, Colorado. p. 80. Bibcode:2002hep.ph....4104Q. arXiv:hep-ph/0204104Freely accessible.  108. ^ Mulliken, R.S. (1967). "Spectroscopy, Molecular Orbitals, and Chemical Bonding". Science. 157 (3784): 13–24. Bibcode:1967Sci...157...13M. PMID 5338306. doi:10.1126/science.157.3784.13.  109. ^ Burhop, E.H.S. (1952). The Auger Effect and Other Radiationless Transitions. Cambridge University Press. pp. 2–3. ISBN 0-88275-966-3.  110. ^ a b Grupen, C. (2000). "Physics of Particle Detection". AIP Conference Proceedings. 536: 3–34. arXiv:physics/9906063Freely accessible. doi:10.1063/1.1361756.  111. ^ Jiles, D. (1998). Introduction to Magnetism and Magnetic Materials. CRC Press. pp. 280–287. ISBN 0-412-79860-3.  
112. ^ Löwdin, P.O.; Erkki Brändas, E.; Kryachko, E.S. (2003). Fundamental World of Quantum Chemistry: A Tribute to the Memory of Per- Olov Löwdin. Springer. pp. 393–394. ISBN 1-4020-1290-X.  113. ^ McQuarrie, D.A.; Simon, J.D. (1997). Physical Chemistry: A Molecular Approach. University Science Books. pp. 325–361. ISBN 0-935702-99-7.  114. ^ Daudel, R.; et al. (1973). "The Electron Pair in Chemistry". Canadian Journal of Chemistry. 52 (8): 1310–1320. doi:10.1139/v74-201. [permanent dead link] 115. ^ Rakov, V.A.; Uman, M.A. (2007). Lightning: Physics and Effects. Cambridge University Press. p. 4. ISBN 0-521-03541-4.  116. ^ Freeman, G.R.; March, N.H. (1999). "Triboelectricity and some associated phenomena". Materials Science and Technology. 15 (12): 1454–1458. doi:10.1179/026708399101505464.  117. ^ Forward, K.M.; Lacks, D.J.; Sankaran, R.M. (2009). "Methodology for studying particle–particle triboelectrification in granular materials". Journal of Electrostatics. 67 (2–3): 178–183. doi:10.1016/j.elstat.2008.12.002.  118. ^ Weinberg, S. (2003). The Discovery of Subatomic Particles. Cambridge University Press. pp. 15–16. ISBN 0-521-82351-X.  119. ^ Lou, L.-F. (2003). Introduction to phonons and electrons. World Scientific. pp. 162, 164. ISBN 978-981-238-461-4.  120. ^ Guru, B.S.; Hızıroğlu, H.R. (2004). Electromagnetic Field Theory. Cambridge University Press. pp. 138, 276. ISBN 0-521-83016-8.  121. ^ Achuthan, M.K.; Bhat, K.N. (2007). Fundamentals of Semiconductor Devices. Tata McGraw-Hill. pp. 49–67. ISBN 0-07-061220-X.  122. ^ a b Ziman, J.M. (2001). Electrons and Phonons: The Theory of Transport Phenomena in Solids. Oxford University Press. p. 260. ISBN 0-19-850779-8.  123. ^ Main, P. (June 12, 1993). "When electrons go with the flow: Remove the obstacles that create electrical resistance, and you get ballistic electrons and a quantum surprise". New Scientist. 1887: 30. Retrieved 2008-10-09.  124. ^ Blackwell, G.R. (2000). The Electronic Packaging Handbook. 
CRC Press. pp. 6.39–6.40. ISBN 0-8493-8591-1.  125. ^ Durrant, A. (2000). Quantum Physics of Matter: The Physical World. CRC Press. pp. 43, 71–78. ISBN 0-7503-0721-8.  126. ^ Staff (2008). "The Nobel Prize in Physics 1972". The Nobel Foundation. Retrieved 2008-10-13.  127. ^ Kadin, A.M. (2007). "Spatial Structure of the Cooper Pair". Journal of Superconductivity and Novel Magnetism. 20 (4): 285–292. arXiv:cond-mat/0510279Freely accessible. doi:10.1007/s10948-006-0198-z.  128. ^ "Discovery About Behavior Of Building Block Of Nature Could Lead To Computer Revolution". ScienceDaily. July 31, 2009. Retrieved 2009-08-01.  129. ^ Jompol, Y.; et al. (2009). "Probing Spin-Charge Separation in a Tomonaga-Luttinger Liquid". Science. 325 (5940): 597–601. Bibcode:2009Sci...325..597J. PMID 19644117. arXiv:1002.2782Freely accessible. doi:10.1126/science.1171769.  130. ^ Staff (2008). "The Nobel Prize in Physics 1958, for the discovery and the interpretation of the Cherenkov effect". The Nobel Foundation. Retrieved 2008-09-25.  131. ^ Staff (August 26, 2008). "Special Relativity". Stanford Linear Accelerator Center. Retrieved 2008-09-25.  132. ^ Adams, S. (2000). Frontiers: Twentieth Century Physics. CRC Press. p. 215. ISBN 0-7484-0840-1.  133. ^ Lurquin, P. F. (2003). The Origins of Life and the Universe. Columbia University Press. p. 2. ISBN 0-231-12655-7.  134. ^ Silk, J. (2000). The Big Bang: The Creation and Evolution of the Universe (3rd ed.). Macmillan. pp. 110–112, 134–137. ISBN 0-8050-7256-X.  135. ^ Kolb, E. W.; Wolfram, Stephen (1980). "The Development of Baryon Asymmetry in the Early Universe". Physics Letters B. 91 (2): 217–221. Bibcode:1980PhLB...91..217K. doi:10.1016/0370-2693(80)90435-9.  136. ^ Sather, E. (Spring–Summer 1996). "The Mystery of Matter Asymmetry" (PDF). Beam Line. University of Stanford. Retrieved 2008-11-01.  137. ^ Burles, S.; Nollett, K. M.; Turner, M. S. (1999). "Big-Bang Nucleosynthesis: Linking Inner Space and Outer Space". 
arXiv:astro-ph/9903300Freely accessible [astro-ph].  138. ^ Boesgaard, A. M.; Steigman, G. (1985). "Big bang nucleosynthesis – Theories and observations". Annual Review of Astronomy and Astrophysics. 23 (2): 319–378. Bibcode:1985ARA&A..23..319B. doi:10.1146/annurev.aa.23.090185.001535.  139. ^ a b Barkana, R. (2006). "The First Stars in the Universe and Cosmic Reionization". Science. 313 (5789): 931–934. Bibcode:2006Sci...313..931B. PMID 16917052. arXiv:astro-ph/0608450Freely accessible. doi:10.1126/science.1125644.  140. ^ Burbidge, E. M.; et al. (1957). "Synthesis of Elements in Stars". Reviews of Modern Physics. 29 (4): 548–647. Bibcode:1957RvMP...29..547B. doi:10.1103/RevModPhys.29.547.  141. ^ Rodberg, L. S.; Weisskopf, V. (1957). "Fall of Parity: Recent Discoveries Related to Symmetry of Laws of Nature". Science. 125 (3249): 627–633. Bibcode:1957Sci...125..627R. PMID 17810563. doi:10.1126/science.125.3249.627.  142. ^ Fryer, C. L. (1999). "Mass Limits For Black Hole Formation". The Astrophysical Journal. 522 (1): 413–418. Bibcode:1999ApJ...522..413F. arXiv:astro-ph/9902315Freely accessible. doi:10.1086/307647.  143. ^ Parikh, M. K.; Wilczek, F. (2000). "Hawking Radiation As Tunneling". Physical Review Letters. 85 (24): 5042–5045. Bibcode:2000PhRvL..85.5042P. PMID 11102182. arXiv:hep-th/9907001Freely accessible. doi:10.1103/PhysRevLett.85.5042.  144. ^ Hawking, S. W. (1974). "Black hole explosions?". Nature. 248 (5443): 30–31. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0.  145. ^ Halzen, F.; Hooper, D. (2002). "High-energy neutrino astronomy: the cosmic ray connection". Reports on Progress in Physics. 66 (7): 1025–1078. Bibcode:2002RPPh...65.1025H. arXiv:astro-ph/0204527Freely accessible. doi:10.1088/0034-4885/65/7/201.  146. ^ Ziegler, J. F. (1998). "Terrestrial cosmic ray intensities". IBM Journal of Research and Development. 42 (1): 117–139. doi:10.1147/rd.421.0117.  147. ^ Sutton, C. (August 4, 1990). "Muons, pions and other strange particles". 
New Scientist. Retrieved 2008-08-28.  148. ^ Wolpert, S. (July 24, 2008). "Scientists solve 30-year-old aurora borealis mystery". University of California. Archived from the original on August 17, 2008. Retrieved 2008-10-11.  149. ^ Gurnett, D.A.; Anderson, R. (1976). "Electron Plasma Oscillations Associated with Type III Radio Bursts". Science. 194 (4270): 1159–1162. Bibcode:1976Sci...194.1159G. PMID 17790910. doi:10.1126/science.194.4270.1159.  150. ^ Martin, W.C.; Wiese, W.L. (2007). "Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas". National Institute of Standards and Technology. Retrieved 2007-01-08.  151. ^ Fowles, G.R. (1989). Introduction to Modern Optics. Courier Dover. pp. 227–233. ISBN 0-486-65957-7.  152. ^ Staff (2008). "The Nobel Prize in Physics 1989". The Nobel Foundation. Retrieved 2008-09-24.  153. ^ Ekstrom, P.; Wineland, David (1980). "The isolated Electron" (PDF). Scientific American. 243 (2): 91–101. Bibcode:1980SciAm.243b.104E. doi:10.1038/scientificamerican0880-104. Retrieved 2008-09-24.  154. ^ Mauritsson, J. "Electron filmed for the first time ever" (PDF). Lund University. Archived from the original (PDF) on March 25, 2009. Retrieved 2008-09-17.  155. ^ Mauritsson, J.; et al. (2008). "Coherent Electron Scattering Captured by an Attosecond Quantum Stroboscope". Physical Review Letters. 100 (7): 073003. Bibcode:2008PhRvL.100g3003M. PMID 18352546. arXiv:0708.1060Freely accessible. doi:10.1103/PhysRevLett.100.073003.  156. ^ Damascelli, A. (2004). "Probing the Electronic Structure of Complex Systems by ARPES". Physica Scripta. T109: 61–74. Bibcode:2004PhST..109...61D. arXiv:cond-mat/0307085Freely accessible. doi:10.1238/Physica.Topical.109a00061.  157. ^ Staff (April 4, 1975). "Image # L-1975-02972". Langley Research Center, NASA. Archived from the original on December 7, 2008. Retrieved 2008-09-20.  158. ^ Elmer, J. (March 3, 2008). "Standardizing the Art of Electron-Beam Welding". 
Lawrence Livermore National Laboratory. Retrieved 2008-10-16.  159. ^ Schultz, H. (1993). Electron Beam Welding. Woodhead Publishing. pp. 2–3. ISBN 1-85573-050-2.  160. ^ Benedict, G.F. (1987). Nontraditional Manufacturing Processes. Manufacturing engineering and materials processing. 19. CRC Press. p. 273. ISBN 0-8247-7352-7.  161. ^ Ozdemir, F.S. (June 25–27, 1979). Electron beam lithography. Proceedings of the 16th Conference on Design automation. San Diego, CA, USA: IEEE Press. pp. 383–391. Retrieved 2008-10-16.  162. ^ Madou, M.J. (2002). Fundamentals of Microfabrication: the Science of Miniaturization (2nd ed.). CRC Press. pp. 53–54. ISBN 0-8493-0826-7.  163. ^ Jongen, Y.; Herer, A. (May 2–5, 1996). Electron Beam Scanning in Industrial Applications. APS/AAPT Joint Meeting. American Physical Society. Bibcode:1996APS..MAY.H9902J.  164. ^ Mobus, G.; et al. (2010). "Nano-scale quasi-melting of alkali-borosilicate glasses under electron irradiation". Journal of Nuclear Materials. 396 (2–3): 264–271. Bibcode:2010JNuM..396..264M. doi:10.1016/j.jnucmat.2009.11.020.  165. ^ Beddar, A.S.; Domanovic, Mary Ann; Kubu, Mary Lou; Ellis, Rod J.; Sibata, Claudio H.; Kinsella, Timothy J. (2001). "Mobile linear accelerators for intraoperative radiation therapy". AORN Journal. 74 (5): 700–705. doi:10.1016/S0001-2092(06)61769-9.  166. ^ Gazda, M.J.; Coia, L.R. (June 1, 2007). "Principles of Radiation Therapy" (PDF). Retrieved 2013-10-31.  167. ^ Chao, A.W.; Tigner, M. (1999). Handbook of Accelerator Physics and Engineering. World Scientific. pp. 155, 188. ISBN 981-02-3500-3.  168. ^ Oura, K.; et al. (2003). Surface Science: An Introduction. Springer. pp. 1–45. ISBN 3-540-00545-5.  169. ^ Ichimiya, A.; Cohen, P.I. (2004). Reflection High-energy Electron Diffraction. Cambridge University Press. p. 1. ISBN 0-521-45373-9.  170. ^ Heppell, T.A. (1967). "A combined low energy and reflection high energy electron diffraction apparatus". Journal of Scientific Instruments. 44 (9): 686–688. 
Bibcode:1967JScI...44..686H. doi:10.1088/0950-7671/44/9/311.  171. ^ McMullan, D. (1993). "Scanning Electron Microscopy: 1928–1965". University of Cambridge. Retrieved 2009-03-23.  172. ^ Slayter, H.S. (1992). Light and electron microscopy. Cambridge University Press. p. 1. ISBN 0-521-33948-0.  173. ^ Cember, H. (1996). Introduction to Health Physics. McGraw-Hill Professional. pp. 42–43. ISBN 0-07-105461-8.  174. ^ Erni, R.; et al. (2009). "Atomic-Resolution Imaging with a Sub-50-pm Electron Probe". Physical Review Letters. 102 (9): 096101. Bibcode:2009PhRvL.102i6101E. PMID 19392535. doi:10.1103/PhysRevLett.102.096101.  175. ^ Bozzola, J.J.; Russell, L.D. (1999). Electron Microscopy: Principles and Techniques for Biologists. Jones & Bartlett Publishers. pp. 12, 197–199. ISBN 0-7637-0192-0.  176. ^ Flegler, S.L.; Heckman Jr., J.W.; Klomparens, K.L. (1995). Scanning and Transmission Electron Microscopy: An Introduction (Reprint ed.). Oxford University Press. pp. 43–45. ISBN 0-19-510751-9.  177. ^ Bozzola, J.J.; Russell, L.D. (1999). Electron Microscopy: Principles and Techniques for Biologists (2nd ed.). Jones & Bartlett Publishers. p. 9. ISBN 0-7637-0192-0.  178. ^ Freund, H.P.; Antonsen, T. (1996). Principles of Free-Electron Lasers. Springer. pp. 1–30. ISBN 0-412-72540-1.  179. ^ Kitzmiller, J.W. (1995). Television Picture Tubes and Other Cathode-Ray Tubes: Industry and Trade Summary. DIANE Publishing. pp. 3–5. ISBN 0-7881-2100-6.  180. ^ Sclater, N. (1999). Electronic Technology Handbook. McGraw-Hill Professional. pp. 227–228. ISBN 0-07-058048-0.  181. ^ Staff (2008). "The History of the Integrated Circuit". The Nobel Foundation. Retrieved 2008-10-18.  External links[edit] Powered by YouTube Wikipedia content is licensed under the GFDL and (CC) license
3e283d4b3e57acae
Wave function with a certain wavelength

Feb 3, 2015 #1

I have a number of questions about the wave function:

1. Do photons have wave functions like the one in the Schrödinger equation?

2. If they do, then when you send out a wave function with a single definite wavelength, you know the momentum with no uncertainty, so the uncertainty in position becomes infinite and you don't know where the photon is. What happens then? For example, suppose you send out such a wave to demonstrate the photoelectric effect. When the light (photon) hits a particle, the particle "knows" where the photon is, so the uncertainty in the photon's position becomes very small and, as a consequence, the uncertainty in its momentum becomes very large. Does this mean that the light will suddenly have various wavelengths?

My guess for question 2 is that you can't send out light of a single definite wavelength.

Feb 3, 2015 #2

Any wave packet will contain several wavelengths unless it is infinitely extended. This is true for all waves. However, note that ##\hbar## is very small: you can have a wave with a very small uncertainty in wavelength even if your uncertainty in position is of the order of magnitude you would expect from the photoelectric effect.

Also, photons definitely do not follow the Schrödinger equation. The Schrödinger equation describes a non-relativistic massive particle, which the photon most certainly is not. It is about as far away from that as you can get.
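The answer's point — that any finite wave packet necessarily contains a spread of wavelengths — is easy to check numerically. The sketch below (the function name `spectral_width` and all parameter values are ours, not from the thread) builds a cosine of wavelength 1 under a Gaussian envelope and measures the width of its power spectrum: shrinking the envelope broadens the spectrum.

```python
import numpy as np

def spectral_width(sigma, n=4096, length=200.0):
    """RMS spread of spatial frequencies in a Gaussian wave packet of
    envelope width sigma, carrier wavelength 1."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    packet = np.cos(2 * np.pi * x) * np.exp(-x**2 / (2 * sigma**2))
    power = np.abs(np.fft.fft(packet)) ** 2
    k = np.abs(np.fft.fftfreq(n, d=length / n))   # spatial frequencies (cycles/unit)
    mean = np.sum(k * power) / np.sum(power)       # peak sits near |k| = 1
    var = np.sum((k - mean) ** 2 * power) / np.sum(power)
    return np.sqrt(var)

wide = spectral_width(sigma=20.0)   # long packet: nearly monochromatic
narrow = spectral_width(sigma=2.0)  # short packet: broad spread of wavelengths
print(wide, narrow)
```

Localizing the packet by a factor of 10 broadens its frequency content by roughly the same factor — the classical-wave version of the position–momentum trade-off described in the thread.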
Graphs of y = b^x for various bases b: base 10 (green), base e (red), base 2 (blue), and base 1/2 (cyan). Each curve passes through the point (0, 1) because any nonzero number raised to the power of 0 is 1. At x = 1, the value of y equals the base because any number raised to the power of 1 is the number itself.

Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent n. When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, b^n is the product of multiplying n copies of the base:

b^n = b × b × ⋯ × b (n factors)

The exponent is usually shown as a superscript to the right of the base. In that case, b^n is called "b raised to the n-th power", "b raised to the power of n", or "the n-th power of b". When n is a negative integer and b is not zero, b^n is naturally defined as 1/b^(−n), preserving the property b^n × b^m = b^(n+m). In particular, b^(−1) is equal to 1/b, the reciprocal of b. The definition of exponentiation can be extended to allow any real or complex exponent. Exponentiation by integer exponents can also be defined for a wide variety of algebraic structures, including matrices. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.
History of the notation

The term power was used by the Greek mathematician Euclid for the square of a line.[1] Archimedes discovered and proved the law of exponents, 10^a · 10^b = 10^(a+b), necessary to manipulate powers of 10.[2] In the 9th century, the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī used the terms mal for a square and kahb for a cube, which later Islamic mathematicians represented in mathematical notation as m and k, respectively, by the 15th century, as seen in the work of Abū al-Hasan ibn Alī al-Qalasādī.[3] In the late 16th century, Jost Bürgi used Roman numerals for exponents.[4] Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I.[5] Nicolas Chuquet used a form of exponential notation in the 15th century, which was later used by Henricus Grammateus and Michael Stifel in the 16th century. The word "exponent" was coined in 1544 by Michael Stifel.[6] Samuel Jeake introduced the term indices in 1696.[1] In the 16th century Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth).[7] Biquadrate has been used to refer to the fourth power as well. Some mathematicians (e.g., Isaac Newton) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx^3 + d. Another historical synonym, involution,[8] is now rare and should not be confused with its more common meaning. In 1748 Leonhard Euler wrote "consider exponentials or powers in which the exponent itself is a variable.
It is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant."[9] With this introduction of transcendental functions, Euler laid the foundation for the modern introduction of the natural logarithm as the inverse function for y = e^x. The expression b^2 = b·b is called the square of b or b squared because the area of a square with side-length b is b^2. The expression b^3 = b·b·b is called the cube of b or b cubed because the volume of a cube with side-length b is b^3. The exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243. The base 3 appears 5 times in the repeated multiplication, because the exponent is 5. Here, 3 is the base, 5 is the exponent, and 243 is the power or, more specifically, 3 raised to the 5th power. The word "raised" is usually omitted, and sometimes "power" as well, so 3^5 can also be read 3 to the 5th or 3 to the 5. Therefore, the exponentiation b^n can be expressed as b to the power of n, b to the n-th power, b to the n-th, or most briefly as b to the n. Integer exponents The exponentiation operation with integer exponents requires only elementary algebra. Positive integer exponents Formally, powers with positive integer exponents may be defined by the initial condition[10] b^1 = b and the recurrence relation b^(n+1) = b^n ⋅ b. From the associativity of multiplication, it follows that for any positive integers m and n, b^(m+n) = b^m ⋅ b^n. Zero exponent Any nonzero number raised to the exponent 0 is 1:[11] b^0 = 1. One interpretation of such a power is as an empty product. The case of 0^0 is discussed below. Negative exponents The following identity holds for an arbitrary integer n and nonzero b: b^(−n) = 1/b^n. Raising 0 to a negative exponent is left undefined. The identity above may be derived through a definition aimed at extending the range of exponents to negative integers.
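The integer-exponent definitions above — an initial condition, a recurrence, b^0 as the empty product, and negative exponents as reciprocals — can be sketched directly in a few lines of Python. This is an illustrative sketch, not from the article; the function name `power` is ours, and `Fraction` is used so that negative exponents give exact results.

```python
from fractions import Fraction

def power(b, n):
    """Integer exponentiation via the recurrence b^(n+1) = b^n * b,
    with b^0 = 1 (the empty product) and b^(-n) = 1 / b^n for nonzero b."""
    if n < 0:
        if b == 0:
            raise ZeroDivisionError("0 raised to a negative exponent is undefined")
        return 1 / power(b, -n)       # int / Fraction gives an exact Fraction
    result = Fraction(1)              # b^0 = 1: the empty product
    for _ in range(n):
        result *= b                   # one application of the recurrence per factor
    return result

print(power(3, 5))    # 243
print(power(2, 0))    # 1
print(power(2, -3))   # 1/8
```

Note that the zero-exponent case needs no special handling: with an empty loop, the accumulator is simply never multiplied, which is exactly the empty-product reading of b^0 = 1.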
For non-zero b and positive n, the recurrence relation from the previous subsection can be rewritten as

b^n = b^(n+1) / b.

By defining this relation as valid for all integer n and nonzero b, it follows that

b^0 = b^1 / b = 1,
b^(−1) = b^0 / b = 1/b,

and more generally, for any nonzero b and any nonnegative integer n,

b^(−n) = 1 / b^n.

This is then readily shown to be true for every integer n.

Combinatorial interpretation

For nonnegative integers n and m, the power n^m is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). Some examples:

0^5 = |{ }| = 0. There is no 5-tuple from the empty set.
1^4 = |{ (1,1,1,1) }| = 1. There is one 4-tuple from a one-element set.
2^3 = |{ (1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1), (2,2,2) }| = 8. There are eight 3-tuples from a two-element set.
3^2 = |{ (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3) }| = 9. There are nine 2-tuples from a three-element set.
4^1 = |{ (1), (2), (3), (4) }| = 4. There are four 1-tuples from a four-element set.
5^0 = |{ () }| = 1. There is exactly one 0-tuple.

Identities and properties

Exponentiation is not commutative, in contrast with addition and multiplication. For example, 2 + 3 = 3 + 2 = 5 and 2 · 3 = 3 · 2 = 6, but 2^3 = 8, whereas 3^2 = 9. Exponentiation is not associative either, whereas addition and multiplication are. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 · 3) · 4 = 2 · (3 · 4) = 24, but (2^3)^4 is 8^4, or 4096, whereas 2^(3^4) is 2^81, or 2417851639229258349412352. Without parentheses to modify the order of calculation, by convention the order is top-down (right-associative), not bottom-up (left-associative):

b^p^q means b^(p^q), not (b^p)^q.

While Google and WolframAlpha follow this convention, some computer programs, such as Microsoft Office Excel and MATLAB, associate to the left (bottom-up) instead; that is, a^b^c is evaluated as (a^b)^c.
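Both points above can be checked directly in Python, whose `**` operator is right-associative (matching the top-down convention), and whose `itertools.product` enumerates exactly the m-tuples counted by the combinatorial interpretation:

```python
from itertools import product

# Python's ** is right-associative, matching the top-down convention.
top_down = 2 ** 3 ** 4        # parsed as 2 ** (3 ** 4) = 2 ** 81
bottom_up = (2 ** 3) ** 4     # the left-associative reading, 8 ** 4

print(top_down)   # 2417851639229258349412352
print(bottom_up)  # 4096

# Combinatorial interpretation: n**m counts m-tuples over an n-element set.
assert len(list(product(range(2), repeat=3))) == 2 ** 3  # eight 3-tuples
assert len(list(product(range(5), repeat=0))) == 5 ** 0  # one 0-tuple: ()
assert len(list(product(range(0), repeat=5))) == 0 ** 5  # no 5-tuple from {}
```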
Particular bases

Powers of ten

In the base-ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^(−4) = 0.0001. Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458 × 10^8 m/s and then approximated as 2.998 × 10^8 m/s. SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 m.

Powers of two

The positive powers of 2 are important in computer science because there are 2^n possible values for an n-bit binary number. Powers of 2 are important in set theory because a set with n members has a power set, the set of all subsets of the original set, with 2^n members. The negative powers of 2 are commonly used, and the first two have special names: half and quarter. In the base-2 (binary) number system, integer powers of 2 are written as 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, two to the power of three is written as 1000 in binary.

Powers of one

The powers of one are all one: 1^n = 1.

Powers of zero

If the exponent is positive, the power of zero is zero: 0^n = 0, where n > 0. If the exponent is negative, the power of zero (0^n, where n < 0) is undefined, because division by zero is implied. If the exponent is zero, some authors define 0^0 = 1, whereas others leave it undefined, as discussed below under § Zero to the power of zero.

Powers of minus one

If n is an even integer, then (−1)^n = 1. If n is an odd integer, then (−1)^n = −1. Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see § Powers of complex numbers.
Large exponents

The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound:

b^n → ∞ as n → ∞ when b > 1.

This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one". Powers of a number with absolute value less than one tend to zero:

b^n → 0 as n → ∞ when |b| < 1.

Any power of one is always one:

b^n = 1 for all n if b = 1.

Powers of −1 alternate between 1 and −1 as n alternates between even and odd, and thus do not tend to any limit as n grows. If b < −1, then b^n alternates between larger and larger positive and negative numbers as n alternates between even and odd, and thus does not tend to any limit as n grows. If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is

(1 + 1/n)^n → e as n → ∞.

See § The exponential function below. Other limits, in particular those of expressions that take on an indeterminate form, are described in § Limits of powers below.

Rational exponents

From top to bottom: x^(1/8), x^(1/4), x^(1/2), x^1, x^2, x^4, x^8.

An n-th root of a number b is a number x such that x^n = b. If b is a positive real number and n is a positive integer, then there is exactly one positive real solution to x^n = b. This solution is called the principal n-th root of b. It is denoted ⁿ√b, where √ is the radical symbol; alternatively, the principal root may be written b^(1/n). For example: 4^(1/2) = 2 and 8^(1/3) = 2. The fact that x = b^(1/n) solves x^n = b follows from noting that

(b^(1/n))^n = b^(n/n) = b^1 = b.

If n is even, then x^n = b has two real solutions if b is positive, which are the positive and negative n-th roots, i.e., b^(1/n) > 0 and −(b^(1/n)) < 0. If b is negative, the equation has no solution in real numbers for even n. If n is odd, then x^n = b has exactly one real solution. The solution b^(1/n) is positive if b is positive and negative if b is negative.
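The limit (1 + 1/n)^n → e can be observed numerically; a minimal sketch:

```python
import math

# Numerically approach e via the limit (1 + 1/n)**n as n grows.
for n in (10, 1000, 100000):
    print(n, (1 + 1 / n) ** n)   # successive approximations of e ~ 2.718

# The approximation improves as n increases; math.e is the true limit.
approx = (1 + 1 / 10**6) ** 10**6
assert abs(approx - math.e) < 1e-5
```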
Taking a positive real number b to a rational exponent u/v, where u is an integer and v is a positive integer, and considering principal roots only, yields

b^(u/v) = (b^u)^(1/v) = (b^(1/v))^u.

Taking a negative real number b to a rational power u/v, where u/v is in lowest terms, yields a positive real result if u is even (and hence v is odd), because then b^u is positive; and yields a negative real result if u and v are both odd, because then b^u is negative. The case of odd u and even v cannot be treated this way within the reals, since there is no real number x such that x^(2k) = −1; the value of b^(u/v) in this case must use the imaginary unit i, as described more fully in the section § Powers of complex numbers. Thus we have (−27)^(1/3) = −3 and (−27)^(2/3) = 9. The number 4 has two 3/2-th powers, namely 8 and −8; however, by convention the notation 4^(3/2) employs the principal root, and results in 8. For employing the v-th root, the u/v-th power is also called the u/v-th root, and for even v the term principal root also denotes the positive result. This sign ambiguity needs to be taken care of when applying the power identities. For instance,

−1 = ((−1)^2)^(1/2) = 1^(1/2) = 1

is clearly wrong. The problem starts already in the first equality by introducing a standard notation for an inherently ambiguous situation (asking for an even root) and simply relying wrongly on only one, the conventional or principal, interpretation. The same problem occurs with an inappropriately introduced surd notation, which inherently enforces a positive result:

√(x^2) = |x|, not √(x^2) = x.

In general, the same sort of problems occur for complex numbers, as described in the section § Failure of power and logarithm identities.

Real exponents

The identities and properties shown above for integer exponents are true for positive real numbers with non-integer exponents as well. However, the identity

(b^r)^s = b^(rs)

cannot be extended consistently to cases where b is a negative real number (see § Real exponents with negative bases).
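These sign subtleties surface directly in programming languages. Python's `**` follows the principal-value convention for fractional powers of negative bases, so the real cube root −3 of −27 is not what `(-27) ** (1/3)` returns; a sketch of the pitfall:

```python
import cmath

# Python uses the principal (complex) value for fractional powers of a
# negative base, not the real odd root.
z = (-27) ** (1 / 3)
print(z)                 # a complex number near 1.5 + 2.598j, not -3

assert isinstance(z, complex)
# The principal value lies at angle pi/3, since -27 = 27 * e^(i*pi).
expected = 3 * cmath.exp(1j * cmath.pi / 3)
assert abs(z - expected) < 1e-9

# The real cube root of a negative number must be computed separately:
real_root = -(27 ** (1 / 3))
assert abs(real_root - (-3)) < 1e-9
```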
The failure of this identity is the basis for the problems with complex number powers detailed under § Failure of power and logarithm identities. Exponentiation to real powers of positive real numbers can be defined either by extending the rational powers to reals by continuity, or more usually as given in § Powers via logarithms below.

Limits of rational exponents

Because the exponential function is continuous, b^(x_n) → b^x for convergent sequences (x_n) → x; for instance, for x_n = 1/n one has b^(1/n) → b^0 = 1. Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule[12]

b^x = lim b^r as r → x, r rational,

where the limit as r gets close to x is taken only over rational values of r. This limit exists only for positive b. The (ε, δ)-definition of limit is used; this involves showing that for any desired accuracy of the result b^x, one can choose a sufficiently small interval around x so that all the rational powers in the interval are within the desired accuracy. For example, if x = π, the nonterminating decimal representation π = 3.14159... can be used (based on strict monotonicity of the rational power) to obtain the intervals bounded by rational powers

[b^3, b^4], [b^3.1, b^3.2], [b^3.14, b^3.15], [b^3.141, b^3.142], ...

The bounded intervals converge to a unique real number, denoted b^π. This technique can be used to obtain the power of a positive real number b for any irrational exponent. The function f_b(x) = b^x is thus defined for any real number x.

The exponential function

The important mathematical constant e, sometimes called Euler's number, is approximately equal to 2.718 and is the base of the natural logarithm. Although exponentiation of e could, in principle, be treated the same as exponentiation of any other real number, such exponentials turn out to have particularly elegant and useful properties.
Among other things, these properties allow exponentials of e to be generalized in a natural way to other types of exponents, such as complex numbers or even matrices, while coinciding with the familiar meaning of exponentiation with rational exponents. As a consequence, the notation e^x usually denotes a generalized exponentiation definition called the exponential function, exp(x), which can be defined in many equivalent ways, for example by

exp(x) = lim (1 + x/n)^n as n → ∞.

Among other properties, exp satisfies the exponential identity

exp(x + y) = exp(x) · exp(y).

The exponential function is defined for all integer, fractional, real, and complex values of x. In fact, the matrix exponential is well-defined for square matrices (in which case this exponential identity only holds when x and y commute) and is useful for solving systems of linear differential equations. Since exp(1) is equal to e and exp(x) satisfies this exponential identity, it immediately follows that exp(x) coincides with the repeated-multiplication definition of e^x for integer x, and it also follows that rational powers denote (positive) roots as usual, so exp(x) coincides with the e^x definitions in the previous section for all real x by continuity.

Powers via logarithms

The natural logarithm ln(x) is the inverse of the exponential function e^x. It is defined for b > 0 and satisfies

b = e^(ln b).

If b^x is to preserve the logarithm and exponent rules, then one must have

b^x = (e^(ln b))^x = e^(x · ln b)

for each real number x. This can be used as an alternative definition of the real number power b^x and agrees with the definition given above using rational exponents and continuity. The definition of exponentiation using logarithms is more common in the context of complex numbers, as discussed below.

Real exponents with negative bases

Powers of a positive real number are always positive real numbers. The solution of x^2 = 4, however, can be either 2 or −2. The principal value of 4^(1/2) is 2, but −2 is also a valid square root.
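The equivalent definitions of exp, and the logarithm route to b^x, can be checked numerically. Below, exp is computed from its power series exp(x) = Σ x^n / n! (one of the standard equivalent definitions) and compared against `math.exp`; the helper name `exp_series` is ours:

```python
import math

def exp_series(x, terms=40):
    """Partial sum of the power series exp(x) = sum over n of x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

# The series definition agrees with the built-in exponential at x = 1:
assert abs(exp_series(1) - math.e) < 1e-12

# The exponential identity exp(x + y) = exp(x) * exp(y):
assert abs(exp_series(1.2 + 0.7) - exp_series(1.2) * exp_series(0.7)) < 1e-9

# Powers via logarithms: b^x = e^(x * ln b) for positive b.
b, x = 7.0, 2.5
assert abs(math.exp(x * math.log(b)) - b ** x) < 1e-9
```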
If the definition of exponentiation of real numbers is extended to allow negative results, then the result is no longer well-behaved. Neither the logarithm method nor the rational exponent method can be used to define b^r as a real number for a negative real number b and an arbitrary real number r. Indeed, e^r is positive for every real number r, so ln(b) is not defined as a real number for b ≤ 0. The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^r has a unique continuous extension[12] from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined. For example, consider b = −1. The n-th root of −1 is −1 for every odd natural number n. So if n is an odd positive integer, (−1)^(m/n) = −1 if m is odd, and (−1)^(m/n) = 1 if m is even. Thus the set of rational numbers q for which (−1)^q = 1 is dense in the rational numbers, as is the set of q for which (−1)^q = −1. This means that the function (−1)^q is not continuous at any rational number q where it is defined. On the other hand, arbitrary complex powers of negative numbers b can be defined by choosing a complex logarithm of b.

Irrational exponents

If a is a positive algebraic number and b is a rational number, it has been shown above that a^b is algebraic. This remains true even if one accepts any algebraic number for a, with the only difference that a^b may take several values (see below), all algebraic. The Gelfond–Schneider theorem provides some information on the nature of a^b when b is irrational (that is, not rational). It states: If a is an algebraic number different from 0 and 1, and b is an irrational algebraic number, then all the values of a^b are transcendental numbers (that is, not algebraic).
Complex exponents with positive real bases

Imaginary exponents with base e

The exponential function e^z can be defined as the limit of (1 + z/N)^N as N approaches infinity, and thus e^(iπ) is the limit of (1 + iπ/N)^N. In this animation, N takes values increasing from 1 to 100. The computation of (1 + iπ/N)^N is displayed as the combined effect of N repeated multiplications in the complex plane, so that (1 + iπ/N)^k, for k = 0, ..., N, are the vertices of a polygonal path whose final, leftmost endpoint is the actual value of (1 + iπ/N)^N. It can be seen that as N gets larger, (1 + iπ/N)^N approaches the limit −1. Therefore e^(iπ) = −1, which is known as Euler's identity.

A complex number is an expression of the form x + iy, where x and y are real numbers and i is the so-called imaginary unit, a number that satisfies the rule i^2 = −1. A complex number can be visualized as a point in the (x, y) plane. The polar coordinates of a point in the (x, y) plane consist of a non-negative real number r and an angle θ such that x = r cos θ and y = r sin θ. So

x + iy = r(cos θ + i sin θ).

The product of two complex numbers z_1 = x_1 + iy_1 and z_2 = x_2 + iy_2 is obtained by expanding out the product of the binomials and simplifying using the rule i^2 = −1:

z_1 z_2 = (x_1 x_2 − y_1 y_2) + i(x_1 y_2 + x_2 y_1).

As a consequence of the angle-sum formulas of trigonometry, if z_1 and z_2 have polar coordinates (r_1, θ_1) and (r_2, θ_2), then their product z_1 z_2 has polar coordinates (r_1 r_2, θ_1 + θ_2). Consider the right triangle in the complex plane that has 0, 1, and 1 + ix/n as vertices. For large values of n, the triangle is almost a circular sector with a radius of 1 and a small central angle equal to x/n radians. 1 + ix/n may then be approximated by the number with polar coordinates (1, x/n). So, in the limit as n approaches infinity, (1 + ix/n)^n approaches (1, x/n)^n = (1^n, n·x/n) = (1, x), the point on the unit circle whose angle from the positive real axis is x radians. The Cartesian coordinates of this point are (cos x, sin x).
So e^(ix) = cos x + i sin x; this is Euler's formula, connecting algebra to trigonometry by means of complex numbers. The solutions to the equation e^z = 1 are the integer multiples of 2πi:

{ z : e^z = 1 } = { 2kπi : k an integer }.

More generally, if e^v = w, then every solution to e^z = w can be obtained by adding an integer multiple of 2πi to v:

{ z : e^z = w } = { v + 2kπi : k an integer }.

Thus the complex exponential function is a periodic function with period 2πi. More simply: e^(iπ) = −1; e^(x + iy) = e^x (cos y + i sin y).

Trigonometric functions

It follows from Euler's formula stated above that the trigonometric functions cosine and sine are

cos z = (e^(iz) + e^(−iz)) / 2,   sin z = (e^(iz) − e^(−iz)) / (2i).

Before the invention of complex numbers, cosine and sine were defined geometrically. The above formula reduces the complicated formulas for trigonometric functions of a sum to the simple exponentiation formula

e^(i(x + y)) = e^(ix) · e^(iy).

Using exponentiation with complex exponents may reduce problems in trigonometry to algebra.

Complex exponents with base e

The power e^z with z = x + iy can be computed as e^x · e^(iy). The real factor e^x is the absolute value of e^z, and the complex factor e^(iy) identifies the direction of e^z.

Complex exponents with positive real bases

If b is a positive real number and z is any complex number, the power b^z is defined as e^(z · ln b), where x = ln b is the unique real solution to the equation e^x = b. So the same method that works for real exponents also works for complex exponents. For example:

2^i = e^(i · ln 2) = cos(ln 2) + i sin(ln 2).

The identity (b^z)^u = b^(zu) is not generally valid for complex powers. The power b^z is a complex number, and any power of it has to follow the rules for powers of complex numbers below. A simple counterexample is given by

(e^(2πi))^i = 1^i = 1, whereas e^((2πi) · i) = e^(−2π) ≈ 0.00187.

The identity is, however, valid for arbitrary complex z when u is an integer.

Powers of complex numbers

Integer powers of nonzero complex numbers are defined by repeated multiplication or division as above. If i is the imaginary unit and n is an integer, then i^n equals 1, i, −1, or −i, according to whether the integer n is congruent to 0, 1, 2, or 3 modulo 4.
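Euler's identity, Euler's formula, and the 2πi-periodicity of the complex exponential can all be verified numerically with the standard-library `cmath` module:

```python
import cmath
import math

# Euler's identity: e^(i*pi) = -1 (up to floating-point rounding).
assert abs(cmath.exp(1j * math.pi) - (-1)) < 1e-12

# Euler's formula e^(ix) = cos x + i sin x, checked at an arbitrary point:
x = 0.7
assert abs(cmath.exp(1j * x) - complex(math.cos(x), math.sin(x))) < 1e-12

# The complex exponential is periodic with period 2*pi*i:
z = 1.3 + 0.4j
assert abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12
```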
Because of this, the powers of i are useful for expressing sequences of period 4. Complex powers of positive reals are defined via e^x, as in the section § Complex exponents with positive real bases above. These are continuous functions. Trying to extend these functions to the general case of noninteger powers of complex numbers that are not positive reals leads to difficulties: either we define discontinuous functions or multivalued functions, and neither of these options is entirely satisfactory. The rational power of a complex number must be the solution to an algebraic equation; therefore, it always has a finite number of possible values. For example, w = z^(1/2) must be a solution to the equation w^2 = z. But if w is a solution, then so is −w, because (−1)^2 = 1. A unique but somewhat arbitrary solution, called the principal value, can be chosen using a general rule that also applies for nonrational powers. Complex powers and logarithms are more naturally handled as single-valued functions on a Riemann surface. Single-valued versions are defined by choosing a sheet; the value then has a discontinuity along a branch cut. Choosing one out of many solutions as the principal value leaves us with functions that are not continuous, and the usual rules for manipulating powers can lead us astray. Any nonrational power of a complex number has an infinite number of possible values because of the multi-valued nature of the complex logarithm. The principal value is a single value chosen from these by a rule which, among its other properties, ensures that powers of complex numbers with a positive real part and zero imaginary part give the same value as does the rule defined above for the corresponding real base. Exponentiating a real number to a complex power is formally a different operation from that for the corresponding complex number; however, in the common case of a positive real number, the principal value is the same.
The powers of negative real numbers are not always defined and are discontinuous even where defined. In fact, they are only defined when the exponent is a rational number with an odd denominator. When dealing with complex numbers, the complex number operation is normally used instead.

Complex exponents with complex bases

For complex numbers w and z with w ≠ 0, the notation w^z is ambiguous in the same sense that log w is. To obtain a value of w^z, first choose a logarithm of w; call it log w. Such a choice may be the principal value Log w (the default, if no other specification is given), or perhaps a value given by some other branch of log w fixed in advance. Then, using the complex exponential function, one defines

w^z = e^(z · log w),

because this agrees with the earlier definition in the case where w is a positive real number and the (real) principal value of log w is used. If z is an integer, then the value of w^z is independent of the choice of log w, and it agrees with the earlier definition of exponentiation with an integer exponent. If z is a rational number m/n in lowest terms with n > 0, then the countably infinitely many choices of log w yield only n different values for w^z; these values are the n complex solutions s to the equation s^n = w^m. If z is an irrational number, then the countably infinitely many choices of log w lead to infinitely many distinct values for w^z. The computation of complex powers is facilitated by converting the base w to polar form, as described in detail below. A similar construction is employed in quaternions.

Complex roots of unity

The three 3rd roots of 1

A complex number w such that w^n = 1 for a positive integer n is an n-th root of unity. Geometrically, the n-th roots of unity lie on the unit circle of the complex plane at the vertices of a regular n-gon with one vertex on the real number 1. If w^n = 1 but w^k ≠ 1 for all natural numbers k such that 0 < k < n, then w is called a primitive n-th root of unity.
The negative unit −1 is the only primitive square root of unity. The imaginary unit i is one of the two primitive 4th roots of unity; the other one is −i. The number e^(2πi/n) is the primitive n-th root of unity with the smallest positive argument. (It is sometimes called the principal n-th root of unity, although this terminology is not universal and should not be confused with the principal value of 1^(1/n), which is 1.[13]) The other n-th roots of unity are given by

(e^(2πi/n))^k = e^(2πik/n)

for 2 ≤ k ≤ n.

Roots of arbitrary complex numbers

Although there are infinitely many possible values for a general complex logarithm, there are only a finite number of values for the power w^q in the important special case where q = 1/n and n is a positive integer. These are the n-th roots of w; they are solutions of the equation z^n = w. As with real roots, a second root is also called a square root and a third root is also called a cube root. It is conventional in mathematics to define w^(1/n) as the principal value of the root. If w is a positive real number, it is also conventional to select a positive real number as the principal value of the root w^(1/n). For general complex numbers, the n-th root with the smallest argument is often selected as the principal value of the n-th root operation, as with principal values of roots of unity. The set of n-th roots of a complex number w is obtained by multiplying the principal value w^(1/n) by each of the n-th roots of unity. For example, the fourth roots of 16 are 2, −2, 2i, and −2i, because the principal value of the fourth root of 16 is 2 and the fourth roots of unity are 1, −1, i, and −i.

Computing complex powers

It is often easier to compute complex powers by writing the number to be exponentiated in polar form. Every complex number z can be written in the polar form

z = r e^(iθ),

where r is a nonnegative real number and θ is the (real) argument of z.
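The n-th roots of unity, and the recipe "principal root times each root of unity," can be computed directly; the helper name `roots_of_unity` is ours:

```python
import cmath

def roots_of_unity(n):
    """The n-th roots of unity e^(2*pi*i*k/n) for k = 0, ..., n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

roots = roots_of_unity(4)
# The fourth roots of unity are 1, i, -1, -i; all satisfy w**4 = 1
# and all lie on the unit circle.
for w in roots:
    assert abs(w ** 4 - 1) < 1e-9
    assert abs(abs(w) - 1) < 1e-12

# Fourth roots of 16: the principal root 2 times each fourth root of unity.
for z in (2 * w for w in roots):
    assert abs(z ** 4 - 16) < 1e-6
```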
The polar form has a simple geometric interpretation: if a complex number u + iv is thought of as representing a point (u, v) in the complex plane using Cartesian coordinates, then (r, θ) is the same point in polar coordinates. That is, r is the "radius", r^2 = u^2 + v^2, and θ is the "angle", θ = atan2(v, u). The polar angle θ is ambiguous since any integer multiple of 2π could be added to θ without changing the location of the point. Each choice of θ gives in general a different possible value of the power. A branch cut can be used to choose a specific value. The principal value (the most common branch cut) corresponds to θ chosen in the interval (−π, π]. For complex numbers with a positive real part and zero imaginary part, using the principal value gives the same result as using the corresponding real number. In order to compute the complex power w^z, write w in polar form:

w = r e^(iθ),

and thus

w^z = e^(z · log w) = e^(z (ln r + iθ)).

If z is decomposed as c + di, then the formula for w^z can be written more explicitly as

w^z = (r^c e^(−dθ)) · e^(i(d ln r + cθ)) = (r^c e^(−dθ)) · (cos(d ln r + cθ) + i sin(d ln r + cθ)).

This final formula allows complex powers to be computed easily from decompositions of the base into polar form and the exponent into Cartesian form. It is shown here both in polar form and in Cartesian form (via Euler's identity). The following examples use the principal value, the branch cut that causes θ to be in the interval (−π, π]. To compute i^i, write i in polar and Cartesian forms:

i = 1 · e^(iπ/2),   i = 0 + 1i.

Then the formula above, with r = 1, θ = π/2, c = 0, and d = 1, yields

i^i = (1^0 e^(−π/2)) · e^(i(1 · ln 1 + 0 · π/2)) = e^(−π/2) ≈ 0.2079.

Similarly, to find (−2)^(3 + 4i), compute the polar form of −2,

−2 = 2 e^(iπ),

and use the formula above to compute

(−2)^(3 + 4i) = (2^3 e^(−4π)) · e^(i(4 ln 2 + 3π)) ≈ (2.602 − 1.006 i) × 10^(−5).

The value of a complex power depends on the branch used. For example, if the polar form i = 1 · e^(5πi/2) is used to compute i^i, the power is found to be e^(−5π/2); the principal value of i^i, computed above, is e^(−π/2). The set of all possible values for i^i is given by[14]

{ e^(−π/2 + 2πk) : k an integer }.

So there is an infinity of values that are possible candidates for the value of i^i, one for each integer k.
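Python's complex power operator uses the principal branch, so `1j ** 1j` reproduces the principal value e^(−π/2) computed above, while choosing a different branch of the logarithm gives a different value:

```python
import cmath
import math

# Python's complex ** uses the principal branch, so 1j ** 1j is the
# principal value e^(-pi/2) ~ 0.2079.
principal = 1j ** 1j
assert abs(principal - math.exp(-math.pi / 2)) < 1e-12
assert abs(principal.imag) < 1e-12   # the result is (numerically) real

# A different branch, i = e^(5*pi*i/2), gives a different value:
other = cmath.exp(1j * (5j * math.pi / 2))   # e^(i * log i) with log i = 5*pi*i/2
assert abs(other - math.exp(-5 * math.pi / 2)) < 1e-12
```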
All of them have zero imaginary part, so one can say that i^i has an infinity of valid real values.

Failure of power and logarithm identities

Some identities for powers and logarithms of positive real numbers fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example:

• The identity log(b^x) = x · log b holds whenever b is a positive real number and x is a real number. But for the principal branch of the complex logarithm one has

iπ = Log(−1) = Log((−i)^2) ≠ 2 Log(−i) = 2(−iπ/2) = −iπ.

Regardless of which branch of the logarithm is used, a similar failure of the identity will exist. The best that can be said (if only using this result) is that

log(w^z) ≡ z · log w (mod 2πi).

This identity does not hold even when considering log as a multivalued function. The possible values of log(w^z) contain those of z · log w as a subset. Using Log(w) for the principal value of log(w) and m, n as any integers, the possible values of both sides are

log(w^z) = { z · Log w + 2πimz + 2πin },
z · log w = { z · Log w + 2πimz }.

• The identities (bc)^x = b^x c^x and (b/c)^x = b^x / c^x are valid when b and c are positive real numbers and x is a real number. But a calculation using principal branches shows that

1 = ((−1)(−1))^(1/2) ≠ (−1)^(1/2) · (−1)^(1/2) = i · i = −1.

On the other hand, when x is an integer, the identities are valid for all nonzero complex numbers. If exponentiation is considered as a multivalued function, then the possible values of ((−1)(−1))^(1/2) are {1, −1}. The identity holds, but saying {1} = {((−1)(−1))^(1/2)} is wrong.

• The identity (e^x)^y = e^(xy) holds for real numbers x and y, but assuming its truth for complex numbers leads to the following paradox, discovered in 1827 by Clausen:[15] For any integer n, we have

e^(1 + 2πin) = e · e^(2πin) = e · 1 = e,
(e^(1 + 2πin))^(1 + 2πin) = e^(1 + 2πin) = e   (raising both sides to the power 1 + 2πin),
e^((1 + 2πin)^2) = e   (assuming (e^x)^y = e^(xy)),
e^(1 + 4πin − 4π²n²) = e,
e · e^(4πin) · e^(−4π²n²) = e,
e^(−4π²n²) = 1,

but this is false when the integer n is nonzero. There are a number of problems in the reasoning: the major error is that changing the order of exponentiation in going from line two to line three changes what the principal value chosen will be. From the multi-valued point of view, the first error occurs even sooner.
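The failure of (bc)^x = b^x c^x under principal branches can be demonstrated in a couple of lines, since Python's fractional powers of negative numbers follow the principal branch:

```python
# With principal branches, ((-1)*(-1))**0.5 and (-1)**0.5 * (-1)**0.5 differ.
lhs = ((-1) * (-1)) ** 0.5            # 1 ** 0.5 = 1.0
rhs = ((-1) ** 0.5) * ((-1) ** 0.5)   # i * i = -1 (up to rounding)

assert lhs == 1.0
assert abs(rhs - (-1)) < 1e-12
assert abs(lhs - rhs) > 1   # the identity (bc)^x = b^x * c^x fails here
```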
Implicit in the first line is that e is a real number, whereas the result of e^(1 + 2πin) is a complex number better represented as e + 0i. Substituting the complex number for the real one on the second line makes the power have multiple possible values. Changing the order of exponentiation from lines two to three also affects how many possible values the result can have: (e^x)^y is not simply e^(xy), but rather e^((x + 2πin)y), multivalued over integers n.

Monoids

Exponentiation can be defined in any monoid.[16] A monoid is an algebraic structure consisting of a set X together with a rule for composition ("multiplication") satisfying an associative law and a multiplicative identity, denoted by 1. Exponentiation is defined inductively by:

• x^0 = 1 for all x in X;
• x^(n+1) = x^n x for all x in X and non-negative integers n;
• if n is a negative integer, then x^n is only defined[17] if x has an inverse in X.

Monoids include many structures of importance in mathematics, including groups and rings (under multiplication), with more specific examples of the latter being matrix rings and fields.

Matrices and linear operators

If A is a square matrix, then the product of A with itself n times is called the matrix power. Also, A^0 is defined to be the identity matrix,[18] and if A is invertible, then A^(−n) = (A^(−1))^n. Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system.[19] This is the standard interpretation of a Markov chain, for example. Then A^2 x is the state of the system after two time steps, and so forth: A^n x is the state of the system after n time steps. The matrix power A^n is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors. Apart from matrices, more general linear operators can also be exponentiated.
An example is the derivative operator of calculus, d/dx, which is a linear operator acting on a function f(x) to give a new function (d/dx)f(x) = f′(x). The n-th power of the differentiation operator is the n-th derivative:

(d/dx)^n f(x) = d^n f / dx^n = f^(n)(x).

These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups.[20] Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations involving a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.

Finite fields

A field is an algebraic structure in which multiplication, addition, subtraction, and division are all well-defined and satisfy their familiar properties. The real numbers, for example, form a field, as do the complex numbers and rational numbers. Unlike these familiar examples of fields, which are all infinite sets, some fields have only finitely many elements. The simplest example is the field with two elements, F₂ = {0, 1}, with addition defined by 0 + 1 = 1 + 0 = 1 and 0 + 0 = 1 + 1 = 0, and multiplication defined by 0 · 0 = 1 · 0 = 0 · 1 = 0 and 1 · 1 = 1. Exponentiation in finite fields has applications in public key cryptography. For example, the Diffie–Hellman key exchange uses the fact that exponentiation is computationally inexpensive in finite fields, whereas the discrete logarithm (the inverse of exponentiation) is computationally expensive. Any finite field F has the property that there is a unique prime number p such that p·x = 0 for all x in F; that is, x added to itself p times is zero.
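The asymmetry Diffie–Hellman relies on can be sketched with Python's three-argument `pow`, which performs fast modular exponentiation. The tiny textbook parameters below (p = 23, g = 5) are illustrative only and offer no security:

```python
# Toy Diffie-Hellman in the multiplicative group mod a prime:
# exponentiation is cheap; recovering a from g^a mod p (discrete log) is hard.
p, g = 23, 5        # tiny, insecure textbook parameters
a, b = 6, 15        # the two parties' private exponents

A = pow(g, a, p)    # public value g^a mod p, via fast modular exponentiation
B = pow(g, b, p)    # public value g^b mod p

# Both parties derive the same shared secret g^(a*b) mod p:
shared_1 = pow(B, a, p)
shared_2 = pow(A, b, p)
assert shared_1 == shared_2 == pow(g, a * b, p)
```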
For example, in F₂, the prime number p = 2 has this property. This prime number is called the characteristic of the field. Suppose that F is a field of characteristic p, and consider the function f(x) = x^p that raises each element of F to the power p. This is called the Frobenius automorphism of F. It is an automorphism of the field because of the Freshman's dream identity

(x + y)^p = x^p + y^p.

The Frobenius automorphism is important in number theory because it generates the Galois group of F over its prime subfield.

In abstract algebra

Exponentiation for integer exponents can be defined for quite general structures in abstract algebra. Let X be a set with a power-associative binary operation, written multiplicatively. Then x^n is defined for any element x of X and any nonzero natural number n as the product of n copies of x, which is recursively defined by

x^1 = x,   x^n = x^(n−1) x for n > 1.

One has the following properties:

x^(m+n) = x^m x^n,   (x^m)^n = x^(mn).

If the operation has a two-sided identity element 1, then x^0 is defined to be equal to 1 for any x. If the operation also has two-sided inverses and is associative, then the magma is a group. The inverse of x can be denoted by x^(−1) and follows all the usual rules for exponents. If the multiplication operation is commutative (as, for instance, in abelian groups), then the following holds:

(xy)^n = x^n y^n.

If the binary operation is written additively, as it often is for abelian groups, then "exponentiation is repeated multiplication" can be reinterpreted as "multiplication is repeated addition". Thus, each of the laws of exponentiation above has an analogue among laws of multiplication. When there are several power-associative binary operations defined on a set, any of which might be iterated, it is common to indicate which operation is being repeated by placing its symbol in the superscript. Thus, x^(∗n) is x ∗ ... ∗ x, while x^(#n) is x # ... # x, whatever the operations ∗ and # might be. Superscript notation is also used, especially in group theory, to indicate conjugation.
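The Freshman's dream identity in characteristic p, and the fact that the Frobenius map fixes the prime field GF(p) (by Fermat's little theorem), can be verified exhaustively for a small prime:

```python
# Freshman's dream in characteristic p: (x + y)^p = x^p + y^p mod p.
p = 7
for x in range(p):
    for y in range(p):
        assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p

# The Frobenius map x -> x^p fixes every element of the prime field GF(p)
# (Fermat's little theorem: x^p = x mod p), so it is the identity there:
assert all(pow(x, p, p) == x for x in range(p))
```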
That is, g^h = h^−1·g·h, where g and h are elements of some group. Although conjugation obeys some of the same laws as exponentiation, it is not an example of repeated multiplication in any sense. A quandle is an algebraic structure in which these laws of conjugation play a central role.

Over sets

If n is a natural number and A is an arbitrary set, the expression A^n is often used to denote the set of ordered n-tuples of elements of A. This is equivalent to letting A^n denote the set of functions from the set {0, 1, 2, ..., n−1} to the set A; the n-tuple (a₀, a₁, a₂, ..., a_(n−1)) represents the function that sends i to a_i. For an infinite cardinal number κ and a set A, the notation A^κ is also used to denote the set of all functions from a set of size κ to A. This is sometimes written ^κA to distinguish it from cardinal exponentiation, defined below. This generalized exponential can also be defined for operations on sets or for sets with extra structure. For example, in linear algebra, it makes sense to index direct sums of vector spaces over arbitrary index sets. That is, we can speak of ⊕_(i ∈ N) V_i, where each V_i is a vector space. Then if V_i = V for each i, the resulting direct sum can be written in exponential notation as V^⊕N, or simply V^N with the understanding that the direct sum is the default. We can again replace the set N with a cardinal number n to get V^n, although without choosing a specific standard set with cardinality n, this is defined only up to isomorphism. Taking V to be the field R of real numbers (thought of as a vector space over itself) and n to be some natural number, we get the vector space that is most commonly studied in linear algebra, the real vector space R^n. If the base of the exponentiation operation is a set, the exponentiation operation is the Cartesian product unless otherwise stated.
Since multiple Cartesian products produce an n-tuple, which can be represented by a function on a set of appropriate cardinality, S^N becomes simply the set of all functions from N to S in this case: S^N = { f : N → S }. This fits in with the exponentiation of cardinal numbers, in the sense that |S^N| = |S|^|N|, where |X| is the cardinality of X. When "2" is defined as {0, 1}, we have |2^X| = 2^|X|, where 2^X, usually denoted by P(X), is the power set of X; each subset Y of X corresponds uniquely to a function on X taking the value 1 for x ∈ Y and 0 for x ∉ Y.

In category theory

In a Cartesian closed category, the exponential operation can be used to raise an arbitrary object to the power of another object. This generalizes the Cartesian product in the category of sets. If 0 is an initial object in a Cartesian closed category, then the exponential object 0^0 is isomorphic to any terminal object 1.

Of cardinal and ordinal numbers

In set theory, there are exponential operations for cardinal and ordinal numbers. If κ and λ are cardinal numbers, the expression κ^λ represents the cardinality of the set of functions from any set of cardinality λ to any set of cardinality κ.[21] If κ and λ are finite, then this agrees with the ordinary arithmetic exponential operation. For example, the set of 3-tuples of elements from a 2-element set has cardinality 8 = 2³. In cardinal arithmetic, κ^0 is always 1 (even if κ is an infinite cardinal or zero). Exponentiation of cardinal numbers is distinct from exponentiation of ordinal numbers, which is defined by a limit process involving transfinite induction.

Repeated exponentiation

Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation.
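The tower of operations (addition, multiplication, exponentiation, tetration, ...) can be sketched with a small recursive function; this is an illustrative definition for natural numbers, not a standard library facility, and it delegates ranks 1–3 to the built-in operators so that tetration stays cheap.

```python
def hyper(rank, a, b):
    """Hyperoperation of the given rank on natural numbers:
    rank 1 = addition, 2 = multiplication, 3 = exponentiation, 4 = tetration, ...
    """
    if rank == 1:
        return a + b
    if rank == 2:
        return a * b
    if rank == 3:
        return a ** b        # built-ins keep the recursion shallow
    if b == 0:
        return 1             # the empty tower (and beyond) evaluates to 1
    # Each rank iterates the rank below it: a [r] b = a [r-1] (a [r] (b-1)).
    return hyper(rank - 1, a, hyper(rank, a, b - 1))

# The (3, 3) values quoted in the text:
print([hyper(r, 3, 3) for r in (1, 2, 3, 4)])  # → [6, 9, 27, 7625597484987]
```

The last value is 3↑↑3 = 3^(3^3) = 3^27, matching the tetration figure given below.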
This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7625597484987 (= 3^27 = 3^(3^3) = ³3), respectively.

Zero to the power of zero

Discrete exponents

There are many widely used formulas having terms involving natural-number exponents that require 0^0 to be evaluated to 1. For example, regarding b^0 as an empty product assigns it the value 1, even when b = 0. Alternatively, the combinatorial interpretation of b^0 is the number of empty tuples of elements from a set with b elements; there is exactly one empty tuple, even if b = 0. Equivalently, the set-theoretic interpretation of 0^0 is the number of functions from the empty set to the empty set; there is exactly one such function, the empty function.[21]

Polynomials and power series

Likewise, when working with polynomials, it is often necessary to assign 0^0 the value 1. A polynomial is an expression of the form a_n x^n + ... + a_1 x + a_0, where x is an indeterminate, and the coefficients a_0, ..., a_n are real numbers (or, more generally, elements of some ring). The set of all real polynomials in x is denoted by R[x]. Polynomials are added termwise, and multiplied by applying the usual rules for exponents in the indeterminate x (see Cauchy product). With these algebraic rules for manipulation, polynomials form a polynomial ring. The constant polynomial 1 is the identity element of the polynomial ring, meaning that it is the (unique) element such that the product of 1 with any polynomial p(x) is just p(x).[22] Polynomials can be evaluated by specializing the indeterminate x to be a real number. More precisely, for any given real number x₀ there is a unique unital ring homomorphism ev_(x₀) : R[x] → R such that ev_(x₀)(x) = x₀.[23] This is called the evaluation homomorphism.
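As an aside, each of the discrete conventions above — empty product, empty tuple, empty function — can be checked directly in Python, whose integer `**` and `math.prod` already follow them:

```python
import math
from itertools import product

# The empty product is 1, so b^0 = 1 even when b = 0.
assert math.prod([]) == 1
assert 0 ** 0 == 1

# Combinatorial reading: the number of 0-tuples over a b-element set is 1,
# even when the set is empty.
assert len(list(product([], repeat=0))) == 1
assert len(list(product([1, 2, 3], repeat=0))) == 1

# Set-theoretic reading: |A|^|B| counts the functions B -> A; a function is
# one tuple of images, one per domain element.
def num_functions(domain, codomain):
    return len(list(product(codomain, repeat=len(domain))))

assert num_functions([], []) == 1                   # 0^0 = 1: the empty function
assert num_functions([1, 2], ["a", "b", "c"]) == 9  # 3^2 = 9
```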
Because it is a unital homomorphism, we have ev_(x₀)(x^0) = ev_(x₀)(1) = 1. That is, x₀^0 = 1 for all specializations of x to a real number x₀ (including zero). This perspective is significant for many polynomial identities appearing in combinatorics. For example, the binomial theorem (1 + x)^n = Σ_(k=0)^n C(n, k) x^k is not valid for x = 0 unless 0^0 = 1.[24] Similarly, rings of power series require x^0 = 1 to be true for all specializations of x. Thus identities like 1/(1 − x) = Σ_(n≥0) x^n and e^x = Σ_(n≥0) x^n/n! are only true as functional identities (including at x = 0) if 0^0 = 1. In differential calculus, the power rule d/dx x^n = n·x^(n−1) is not valid for n = 1 at x = 0 unless 0^0 = 1.

Continuous exponents

Plot of z = x^y. The red curves (with z constant) yield different limits as (x, y) approaches (0, 0). The green curves (of finite constant slope, y = ax) all yield a limit of 1.

Limits involving algebraic operations can often be evaluated by replacing subexpressions by their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form.[25] In fact, when f(t) and g(t) are real-valued functions both approaching 0 (as t approaches a real number or ±∞), with f(t) > 0, the function f(t)^g(t) need not approach 1; depending on f and g, the limit of f(t)^g(t) can be any nonnegative real number or +∞, or it can diverge. For example, the functions below are of the form f(t)^g(t) with f(t), g(t) → 0 as t → 0⁺, but the limits are different: t^t → 1, (e^(−1/t²))^t → 0, (e^(−1/t²))^(−t) → +∞, and (e^(−1/t))^(at) → e^(−a). Thus, the two-variable function x^y, though continuous on the set {(x, y) : x > 0}, cannot be extended to a continuous function on any set containing (0, 0), no matter how one chooses to define 0^0.[26] However, under certain conditions, such as when f and g are both analytic functions and f is positive on the open interval (0, b) for some positive b, the limit approaching from the right is always 1.[27][28][29]

Complex exponents

In the complex domain, the function z^w may be defined for nonzero z by choosing a branch of log z and defining z^w as e^(w log z).
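(As a brief aside, the role of 0^0 = 1 in the binomial theorem is easy to check numerically, relying on Python's convention that `0 ** 0 == 1`; `binomial_rhs` is a throwaway helper, not a library function.)

```python
from math import comb

def binomial_rhs(x, n):
    # Right-hand side of (1 + x)^n = sum over k of C(n, k) * x^k.
    return sum(comb(n, k) * x ** k for k in range(n + 1))

for n in range(6):
    # At x = 0 the k = 0 term is C(n, 0) * 0**0, so the identity
    # holds only because 0**0 evaluates to 1.
    assert binomial_rhs(0, n) == (1 + 0) ** n == 1
    # Sanity check away from zero: sum C(n, k) 3^k = (1 + 3)^n.
    assert binomial_rhs(3, n) == 4 ** n
```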
This does not define 0^w since there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.[30][31][32]

History of differing points of view

The debate over the definition of 0^0 has been going on at least since the early 19th century. At that time, most mathematicians agreed that 0^0 = 1, until in 1821 Cauchy[33] listed 0^0 along with expressions like 0/0 in a table of indeterminate forms. In the 1830s Libri[34][35] published an unconvincing argument for 0^0 = 1, and Möbius[36] sided with him, erroneously claiming that f(t)^g(t) → 1 whenever f(t), g(t) → 0⁺. A commentator who signed his name simply as "S" provided the counterexample of (e^(−1/t))^t, whose limit as t → 0⁺ is e^(−1) rather than 1, and this quieted the debate for some time. More historical details can be found in Knuth (1992).[37]

More recent authors interpret the situation above in different ways:

• Some argue that the best value for 0^0 depends on context, and hence that defining it once and for all is problematic.[38] According to Benson (1999), "[t]he choice whether to define 0^0 is based on convenience, not on correctness. If we refrain from defining 0^0, then certain assertions become unnecessarily awkward. [...] The consensus is to use the definition 0^0 = 1, although there are textbooks that refrain from defining 0^0."[39]

• Others argue that 0^0 should be defined as 1. Knuth (1992) contends strongly that 0^0 "has to be 1", drawing a distinction between the value 0^0, which should equal 1 as advocated by Libri, and the limiting form 0^0 (an abbreviation for a limit of f(t)^g(t) where f(t), g(t) → 0), which is necessarily an indeterminate form as listed by Cauchy: "Both Cauchy and Libri were right, but Libri and his defenders did not understand why truth was on their side."[37]

Treatment on computers

IEEE floating point standard

The IEEE 754-2008 floating point standard is used in the design of most floating point libraries. It recommends a number of functions for computing a power:[40]

• pow treats 0^0 as 1. This is the oldest defined version.
If the power is an exact integer the result is the same as for pown, otherwise the result is as for powr (except for some exceptional cases).

• pown treats 0^0 as 1. The power must be an exact integer, and the value is defined for negative bases.

• powr treats 0^0 as NaN (Not-a-Number), because it is defined via e^(y·log x), which is undefined for a zero base.

Programming languages

Most programming languages with a power function implement it using the IEEE pow function and therefore evaluate 0^0 as 1. The later C[41] and C++ standards describe this as the normative behaviour. The Java standard[42] mandates this behavior. The .NET Framework method System.Math.Pow also treats 0^0 as 1.[43]

Mathematics software

• SageMath simplifies b^0 to 1, even if no constraints are placed on b.[44] It takes 0^0 to be 1, but does not simplify 0^x for other x.

• Maple distinguishes between integers 0, 1, ... and the corresponding floats 0.0, 1.0, ... (usually denoted 0., 1., ...). If x does not evaluate to a number, then x^0 and x^0.0 are respectively evaluated to the integer 1 and the float 1.0; on the other hand, 0^x is evaluated to the integer 0, while 0.0^x is evaluated as the float 0.0. If both the base and the exponent are zero (or are evaluated to zero), the result is Float(undefined) if the exponent is the float 0.0; with an integer as exponent, the evaluation of 0^0 results in the integer 1, while that of 0.0^0 results in the float 1.0.

• Macsyma also simplifies b^0 to 1 even if no constraints are placed on b, but issues an error for 0^0. For x > 0, it simplifies 0^x to 0.

• Mathematica and Wolfram Alpha simplify b^0 to 1, even if no constraints are placed on b.[45] While Mathematica does not simplify 0^x, Wolfram Alpha returns two results, 0 for x > 0 and "indeterminate" for real x.[46] Both Mathematica and Wolfram Alpha take 0^0 to be "indeterminate".[47]

• MATLAB, Python, Magma, GAP, Singular, PARI/GP, and the Google and iPhone calculators evaluate 0^0 as 1.

Limits of powers

The section § Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^0.
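Python's entry in the table above can be checked interactively; both the integer `**` operator and the IEEE-style `math.pow` return 1 for 0^0, while the continuity-based route through e^(y·log x) fails at a zero base:

```python
import math

assert 0 ** 0 == 1                 # integer exponentiation: empty-product convention
assert 0.0 ** 0 == 1.0             # float base, zero exponent
assert pow(0, 0) == 1              # the built-in pow agrees
assert math.pow(0.0, 0.0) == 1.0   # IEEE-style pow: 0^0 is 1

# By contrast, defining x^y as exp(y * log(x)) cannot reach (0, 0) at all,
# since log 0 is a domain error:
try:
    math.log(0.0)
    reached_log_zero = True
except ValueError:
    reached_log_zero = False
assert not reached_log_zero
```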
The limits in these examples exist, but have different values, showing that the two-variable function x^y has no limit at the point (0, 0). One may consider at what points this function does have a limit. More precisely, consider the function f(x, y) = x^y defined on D = {(x, y) ∈ R² : x > 0}. Then D can be viewed as a subset of R̄² (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R̄ = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit. In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞).[48] Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^(+∞) and 1^(−∞), which remain indeterminate forms. Under this definition by continuity, we obtain:

• x^(+∞) = +∞ and x^(−∞) = 0, when 1 < x ≤ +∞.
• x^(+∞) = 0 and x^(−∞) = +∞, when 0 ≤ x < 1.
• 0^y = 0 and (+∞)^y = +∞, when 0 < y ≤ +∞.
• 0^y = +∞ and (+∞)^y = 0, when −∞ ≤ y < 0.

These powers are obtained by taking limits of x^y for positive values of x. This method does not permit a definition of x^y when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D. On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n → +∞ as x tends to 0 through positive values, but not negative ones.

Efficient computation with integer exponents

Computing b^n using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, note that 100 = 64 + 32 + 4. Compute the following in order:

1. 2^2 = 4
2. (2^2)^2 = 2^4 = 16
3. (2^4)^2 = 2^8 = 256
4. (2^8)^2 = 2^16 = 65,536
5. (2^16)^2 = 2^32 = 4,294,967,296
6. (2^32)^2 = 2^64 = 18,446,744,073,709,551,616
7.
2^64 · 2^32 · 2^4 = 2^100 = 1,267,650,600,228,229,401,496,703,205,376

This series of steps only requires 8 multiplication operations instead of 99 (since the last product above takes 2 multiplications). In general, the number of multiplication operations required to compute b^n can be reduced to Θ(log n) by using exponentiation by squaring or (more generally) addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available.[49]

Exponential notation for function names

Placing an integer superscript after the name or symbol of a function, as if the function were being raised to a power, commonly refers to repeated function composition rather than repeated multiplication. Thus, f^3(x) may mean f(f(f(x))); in particular, f^(−1)(x) usually denotes the inverse function of f. Iterated functions are of interest in the study of fractals and dynamical systems. Babbage was the first to study the problem of finding a functional square root f^(1/2)(x). For historical reasons, this notation applied to the trigonometric and hyperbolic functions has a specific and diverse interpretation: a positive exponent applied to the function's abbreviation means that the result is raised to that power, while an exponent of −1 denotes the inverse function. That is, sin^2 x is just a shorthand way to write (sin x)^2 without using parentheses, whereas sin^(−1) x refers to the inverse function of the sine, also called arcsin x. Each trigonometric and hyperbolic function has its own name and abbreviation both for the reciprocal, for example 1/(sin x) = (sin x)^(−1) = csc x, and for its inverse, for example cosh^(−1) x = arcosh x. A similar convention applies to logarithms, where log^2 x usually means (log x)^2, not log log x.
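The square-and-multiply idea from the efficient-computation section above takes only a few lines of code; this sketch walks the binary digits of the exponent, mirroring the 100 = 64 + 32 + 4 decomposition in the worked example (Python's built-in pow already does this internally):

```python
def power(b, n):
    """Compute b**n with Theta(log n) multiplications by repeated squaring.

    At each step, `square` holds b, b^2, b^4, b^8, ..., and the current
    bit of n decides whether that power of two contributes to the result.
    """
    result = 1
    square = b
    while n > 0:
        if n & 1:              # this power of two appears in n's binary expansion
            result *= square
        square *= square       # advance b^(2^k) -> b^(2^(k+1))
        n >>= 1
    return result

assert power(2, 100) == 2 ** 100 == 1267650600228229401496703205376
assert power(3, 0) == 1        # empty product
```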
In programming languages

The superscript notation x^y is convenient in handwriting but inconvenient for typewriters and computer terminals that align the baselines of all characters on each line. Many programming languages have alternate ways of expressing exponentiation that do not use superscripts. Many other programming languages lack syntactic support for exponentiation, but provide library functions:

• pow(x, y): C, C++
• Math.Pow(x, y): C#

For certain exponents there are special ways to compute x^y much faster than through generic exponentiation. These cases include small positive and negative integers (prefer x·x over x^2; prefer 1/x over x^(−1)) and roots (prefer sqrt(x) over x^0.5, prefer cbrt(x) over x^(1/3)).

List of whole-number powers

 n   n^2    n^3     n^4      n^5        n^6         n^7          n^8           n^9             n^10
 2     4      8      16       32         64         128          256           512            1,024
 3     9     27      81      243        729       2,187        6,561        19,683           59,049
 4    16     64     256    1,024      4,096      16,384       65,536       262,144        1,048,576
 5    25    125     625    3,125     15,625      78,125      390,625     1,953,125        9,765,625
 6    36    216   1,296    7,776     46,656     279,936    1,679,616    10,077,696       60,466,176
 7    49    343   2,401   16,807    117,649     823,543    5,764,801    40,353,607      282,475,249
 8    64    512   4,096   32,768    262,144   2,097,152   16,777,216   134,217,728    1,073,741,824
 9    81    729   6,561   59,049    531,441   4,782,969   43,046,721   387,420,489    3,486,784,401
10   100  1,000  10,000  100,000  1,000,000  10,000,000  100,000,000  1,000,000,000  10,000,000,000

See also

References

1. ^ a b O'Connor, John J.; Robertson, Edmund F., "Etymology of some common mathematical terms", MacTutor History of Mathematics archive, University of St Andrews.
2. ^ For further analysis see The Sand Reckoner.
4. ^ Cajori, Florian (2007). A History of Mathematical Notations; Vol. I. Cosimo Classics. p. 344. ISBN 1602066841.
6. ^ See:
• Earliest Known Uses of Some of the Words of Mathematics
• Michael Stifel, Arithmetica integra (Nuremberg ("Norimberga"), (Germany): Johannes Petreius, 1544), Liber III (Book 3), Caput III (Chapter 3): De Algorithmo numerorum Cossicorum. (On algorithms of algebra.), page 236.
Stifel was trying to conveniently represent the terms of geometric progressions. He devised a cumbersome notation for doing that. On page 236, he presented the notation for the first eight terms of a geometric progression (using 1 as a base) and then he wrote: "Quemadmodum autem hic vides, quemlibet terminum progressionis cossicæ, suum habere exponentem in suo ordine (ut 1ze habet 1. 1ʓ habet 2 &c.) sic quilibet numerus cossicus, servat exponentem suæ denominationis implicite, qui ei serviat & utilis sit, potissimus in multiplicatione & divisione, ut paulo inferius dicam." (However, you see how each term of the progression has its exponent in its order (as 1ze has a 1, 1ʓ has a 2, etc.), so each number is implicitly subject to the exponent of its denomination, which [in turn] is subject to it and is useful mainly in multiplication and division, as I will mention just below.) [Note: Most of Stifel's cumbersome symbols were taken from Christoff Rudolff, who in turn took them from Leonardo Fibonacci's Liber Abaci (1202), where they served as shorthand symbols for the Latin words res/radix (x), census/zensus (x2), and cubus (x3).] 7. ^ Quinion, Michael. "Zenzizenzizenzic - the eighth power of a number". World Wide Words. Retrieved 2010-03-19.  8. ^ This definition of "involution" appears in the OED second edition, 1989, and Merriam-Webster online dictionary [1]. The most recent usage in this sense cited by the OED is from 1806. 9. ^ Leonhard Euler (1748) Introduction to the Analysis of the Infinite, English version, page 75 10. ^ Hodge, Jonathan K.; Schlicker, Steven; Sundstorm, Ted (2014). Abstract Algebra: an inquiry based approach. CRC Press. p. 94. ISBN 978-1-4665-6706-1.  11. ^ Achatz, Thomas (2005). Technical Shop Mathematics (3rd ed.). Industrial Press. p. 101. ISBN 0-8311-3086-5.  12. ^ a b Denlinger, Charles G. (2011). Elements of Real Analysis. Jones and Bartlett. pp. 278–283. ISBN 978-0-7637-7947-4.  13. 
^ This definition of a principal root of unity can be found in: 14. ^ Complex number to a complex power may be real at Cut The Knot gives some references to i^i 15. ^ Steiner J, Clausen T, Abel NH (1827). "Aufgaben und Lehrsätze, erstere aufzulösen, letztere zu beweisen" [Problems and propositions, the former to solve, the latter to prove]. Journal für die reine und angewandte Mathematik. 2: 286–287. 16. ^ Nicolas Bourbaki (1970). Algèbre. Springer, I.2. 17. ^ David M. Bloom (1979). Linear Algebra and Geometry. p. 45. ISBN 0521293243. 18. ^ Chapter 1, Elementary Linear Algebra, 8E, Howard Anton. 19. ^ Strang, Gilbert (1988), Linear algebra and its applications (3rd ed.), Brooks-Cole, Chapter 5. 20. ^ E. Hille, R. S. Phillips: Functional Analysis and Semi-Groups. American Mathematical Society, 1975. 21. ^ a b N. Bourbaki, Elements of Mathematics, Theory of Sets, Springer-Verlag, 2004, III.§3.5. 22. ^ Nicolas Bourbaki (1970). Algèbre. Springer, §III.2 No. 9: "L'unique monôme de degré 0 est l'élément unité de  ; on l'identifie souvent à l'élément unité 1 de  ". 23. ^ Nicolas Bourbaki (1970). Algèbre. Springer, §IV.1 No. 3. 24. ^ "Some textbooks leave the quantity 0^0 undefined, because the functions x^0 and 0^x have different limiting values when x decreases to 0. But this is a mistake. We must define x^0 = 1, for all x, if the binomial theorem is to be valid when x = 0, y = 0, and/or x = −y. The binomial theorem is too important to be arbitrarily restricted! By contrast, the function 0^x is quite unimportant". Ronald Graham, Donald Knuth, and Oren Patashnik (1989-01-05). "Binomial coefficients". Concrete Mathematics (1st ed.). Addison Wesley Longman Publishing Co. p. 162. ISBN 0-201-14236-8. 25. ^ Malik, S. C.; Savita Arora (1992). Mathematical Analysis. New York: Wiley. p. 223. ISBN 978-81-224-0323-7. In general the limit of φ(x)/ψ(x) when x = a in case the limits of both the functions exist is equal to the limit of the numerator divided by the denominator.
But what happens when both limits are zero? The division (0/0) then becomes meaningless. A case like this is known as an indeterminate form. Other such forms are ∞/∞, 0 × ∞, ∞ − ∞, 0^0, 1^∞ and ∞^0.  26. ^ L. J. Paige (March 1954). "A note on indeterminate forms". American Mathematical Monthly. 61 (3): 189–190. JSTOR 2307224. doi:10.2307/2307224.  27. ^ sci.math FAQ: What is 0^0? 28. ^ Rotando, Louis M.; Korn, Henry (1977). "The Indeterminate Form 0^0". Mathematics Magazine. Mathematical Association of America. 50 (1): 41–42. JSTOR 2689754. doi:10.2307/2689754.  29. ^ Lipkin, Leonard J. (2003). "On the Indeterminate Form 0^0". The College Mathematics Journal. Mathematical Association of America. 34 (1): 55–56. JSTOR 3595845. doi:10.2307/3595845.  30. ^ "Since log(0) does not exist, 0^z is undefined. For Re(z) > 0, we define it arbitrarily as 0." George F. Carrier, Max Krook and Carl E. Pearson, Functions of a Complex Variable: Theory and Technique, 2005, p. 15. 31. ^ "For z = 0, w ≠ 0, we define 0^w = 0, while 0^0 is not defined." Mario Gonzalez, Classical Complex Analysis, Chapman & Hall, 1991, p. 56. 32. ^ "... Let's start at x = 0. Here x^x is undefined." Mark D. Meyerson, The x^x Spindle, Mathematics Magazine 69, no. 3 (June 1996), 198–206. 33. ^ Augustin-Louis Cauchy, Cours d'Analyse de l'École Royale Polytechnique (1821). In his Oeuvres Complètes, series 2, volume 3. 34. ^ Guillaume Libri, Note sur les valeurs de la fonction 0^(0^x), Journal für die reine und angewandte Mathematik 6 (1830), 67–72. 35. ^ Guillaume Libri, Mémoire sur les fonctions discontinues, Journal für die reine und angewandte Mathematik 10 (1833), 303–316. 36. ^ A. F. Möbius (1834). "Beweis der Gleichung 0^0 = 1, nach J. F. Pfaff" [Proof of the equation 0^0 = 1, according to J. F. Pfaff]. Journal für die reine und angewandte Mathematik. 12: 134–136.  37. ^ a b Donald E. Knuth, Two notes on notation, Amer. Math. Monthly 99 no. 5 (May 1992), 403–422 (arXiv:math/9205211 [math.HO]). 38.
^ Examples include Edwards and Penny (1994). Calculus, 4th ed., Prentice-Hall, p. 466, and Keedy, Bittinger, and Smith (1982). Algebra Two. Addison-Wesley, p. 32. 39. ^ Donald C. Benson, The Moment of Proof: Mathematical Epiphanies. New York: Oxford University Press, 1999. ISBN 978-0-19-511721-9. 40. ^ Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). Handbook of Floating-Point Arithmetic (1st ed.). Birkhäuser. p. 216. LCCN 2009939668. doi:10.1007/978-0-8176-4705-6. ISBN 978-0-8176-4705-6 (online), ISBN 0-8176-4704-X (print). 41. ^ John Benito (April 2003). "Rationale for International Standard—Programming Languages—C" (PDF). Revision 5.10: 182. 42. ^ "Math (Java Platform SE 8) pow". Oracle. 43. ^ ".NET Framework Class Library Math.Pow Method". Microsoft. 44. ^ "Sage worksheet calculating x^0". Jason Grout. 45. ^ "Wolfram Alpha calculates b^0". Wolfram Alpha LLC, accessed April 25, 2015. 46. ^ "Wolfram Alpha calculates 0^x". Wolfram Alpha LLC, accessed April 25, 2015. 47. ^ "Wolfram Alpha calculates 0^0". Wolfram Alpha LLC, accessed April 25, 2015. 48. ^ N. Bourbaki, Topologie générale, V.4.2. 49. ^ Gordon, D. M. (1998). "A Survey of Fast Exponentiation Methods". Journal of Algorithms. 27: 129–146. doi:10.1006/jagm.1997.0913.
When I was in undergrad, I once dined with a philosophy major who spontaneously began to argue against free will (apparently some philosophers like to argue unprovoked). He wanted me to imagine that there is a being [such as a God] who knows classical mechanics perfectly and can calculate anything. Even though the universe is a chaotic system, this great intelligence knows the starting conditions, and can calculate the movement of every particle in the universe (ignoring quantum effects as negligible). This being will be able to know what obstacles you will face (e.g., car coming down the street) and will know how you will act (your reaction time, which neurons will fire, if you will be able to get out of the way or not), and how your reaction will affect the rest of your life. And how it will affect the lives of your children, all of humanity, earth, and beyond. We had a semi-productive discussion (argument), but his refusal to engage with quantum mechanics bothered me. First, quantum mechanics is not negligible when talking about life, and second, saying “quantum mechanics” doesn’t suddenly mean that the world is random and non-deterministic. Plenty of biological processes rely on quantum mechanical processes, because proteins (and their active sites) are really, really small. And many reactions that happen in the body (like redox reactions) involve electron or proton transfers, which are almost always quantum mechanical in nature. Second, a lot of interpretations of quantum mechanics are pretty deterministic–that is, the Schrödinger equation explains the probabilities of what can happen (and no other options can happen). When the wave function collapses, one of the predicted outcomes will happen. This is pretty deterministic, because nothing gets to pick and choose how things will turn out. They simply happen. 
Quantum mechanics isn’t necessarily a vote for free will by any means, especially under the less popular De Broglie–Bohm interpretation, which is deterministic at its core. My dining companion’s argument is not new (it bothered Isaac Newton upon his development of classical mechanics, long before quantum mechanics was dreamed up),(1) and it has bothered me over the years. It’s called Laplace’s Demon (where the great intelligence is the “demon”), after Pierre-Simon Laplace formalized his fear of determinism in explicit terms. There are numerous discussions of determinism vs. free will that go into many pages of examples and counterexamples, and I won’t really rehash them here. Some of them describe pretty cool concepts, such as Norton’s Dome, so the interested reader should peruse them. The main question that one has to ask him or herself when thinking about Laplace’s Demon is, ultimately: what is the threshold for free will? If it is randomness, you can forget about using quantum mechanics as some hand-wavy crutch.  Temperature can cause particles to behave in an essentially random fashion. For instance, Johnson–Nyquist noise (electronic noise) is spontaneous and will happen in any resistor, and the higher the temperature, the noisier it will be. It is a random process. But does random noise in a circuit guarantee free will? What about in a human body? Body temperature (98.6 °F, 37 °C, or 310.15 K, depending on your favorite units) generates so much thermal noise that the motion of water molecules (which make up about 70% of the human body) is completely unpredictable. Considering there is a mole (6.022*10^23) of water molecules for every 18 grams of water, and the average mass of a North American person in 2005 was 80.7 kilograms, there are about 1.8*10^27 molecules of water in the human body. You’ll burn out quite a few supercomputers trying to figure out the trajectory of those molecules over a microsecond, let alone a lifetime.
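That back-of-the-envelope count is easy to verify; the 70% water fraction and the 80.7 kg average mass are the figures quoted above:

```python
AVOGADRO = 6.022e23           # molecules per mole
MOLAR_MASS_WATER = 18.0       # grams per mole of H2O

body_mass_g = 80.7 * 1000     # average North American adult (2005 figure)
water_g = 0.70 * body_mass_g  # roughly 70% of body mass is water

molecules = water_g / MOLAR_MASS_WATER * AVOGADRO
print(f"{molecules:.1e}")     # ~1.9e+27, matching the ~1.8*10^27 estimate
```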
The motions of proteins and tiny molecules in the body are, as a result, completely stochastic and mostly random. But could someone compute them? Let’s pretend someone has the Schrödinger equation for the universe, and knows how to solve it exactly (which would be pretty cool, because we can’t solve it exactly for anything that has more than one electron, i.e., for everything in the universe other than an isolated hydrogen atom). They plug the equation into their fancy computer and hit run. The computer starts chugging. It comes to an event where two particles interact, and could have several possible outcomes. It picks the correct outcome, because it can solve the Schrödinger equation for the universe exactly. Then it has to repeat this process a googolplex more times (or slightly fewer) before it reaches the present day. Oh. It just simulated the universe exactly as if we were living it. It has to go sequentially, just like we live our lives. It can’t magically jump forward 5 months to see if your paper will be accepted into Nature, because each new decision or calculation depends on something that happened before it. Ultimately, I think that there are two ways of thinking about it. If you’re worried about some supernatural being coming along that will compute your future, and you are an atheist, then that being doesn’t actually exist for you and you have nothing to worry about. On the other hand, if you believe in a supernatural omniscient being, then by definition you already accept that God is all-knowing and could know the future if God chose to do so. This leads to some delightful philosophical thought experiments and paradoxes in its own right, such as: can God create a rock so big he cannot move it? Either way, it’s not going to change how you live your life, nor will it affect the choices you make. And that is why you have free will–no one is telling you what to do.
Even if some God-like being could predict the future, they aren’t telling you what to do (and if you are hearing voices, you might want to consider seeing a doctor).

References:

1. Stephenson, Neal. “Metaphysics in the Royal Society 1715-2010 (2010).” Some Remarks. New York: HarperCollins, 2012. 38-57. Print.

Feature image in public domain, courtesy Wikipedia.
DFT for configuration of atoms

1. Oct 28, 2009 #1

I have a locked configuration of atoms, that is, their positions are fixed. I would like to calculate the energy of this configuration and also the electron density. I've looked around on the internet and in some books and found that maybe Density Functional Theory (DFT) is the answer. If, for example, each atom has 2 electrons and there are 5 atoms, can I just solve the Kohn–Sham equations (where the external potential is the one generated by the 5 atoms) in a self-consistent way and get all these things? I mean, will this give me an electron density where some of the atoms are bonding and some are not, that is, will the electron density be directional between bonding atoms? If this is a way to solve my problem, can anyone give me some literature that describes how to make these calculations numerically?

3. Oct 28, 2009 #2

Science Advisor

Well, there are two ways to get your energy: either by solving the Schrödinger equation (wavefunction methods) or by using DFT. Either way will give you the electron density, since in the former case it's just the sum over the squares of the orbitals and occupancies. The difficulty of solving the Kohn–Sham equations depends on what functional you're using, i.e. how you approximate [tex]E_{xc}[/tex] (which is not known exactly).

Well, yes. But what's the problem you're trying to solve? To figure out whether two atoms are bonding (in addition to the energy)? In practice that wouldn't typically be done by looking at the density itself, but at the occupancies of the orbitals. Anyway, there's any number of books on how to do the calculations (pretty much anything with 'quantum chemistry' in the title or subject), but it's not an entirely trivial task. Besides the difficulty, it'd also be drastic overkill to implement the whole thing just to calculate a single system or two!
There's plenty of wavefunction and DFT software out there, some of which is free. A particularly popular (and free) program is http://www.msg.ameslab.gov/GAMESS/

4. Oct 29, 2009 #3 What I need is: given a configuration of atoms (in my case carbon atoms) that I decide, what is the energy of the configuration, which atoms bond, and is each bond sp2 or sp3? For example, if I place my atoms in a graphite-like structure it should give me a low energy and the bonds should be sp2; if I place them in a diamond structure the energy should be higher and the bonds should be sp3. In my case I need to place the atoms in all kinds of configurations and then see what the energy is and, if possible to define it, what bonds they make. Hope it makes sense.
"The Strangest Finding Since the Scientific Revolution" The philosopher of physics explains the “unsettling” manner in which movement, at its most essential level, breaks down cognitive categories and exhausts the capacities of human logic. Question: How does quantum mechanics contradict common sense? David Albert: Here's the deal: quantum mechanics allows physical systems -- and the easiest systems in which to observe phenomena like this are very tiny systems like subatomic particles, electrons or neutrons or protons -- quantum mechanics apparently allows for the existence of physical conditions of material objects like electrons in which questions about where the electron is located in space seem to fail to make sense. Let me back up a little bit and explain this a little more slowly. There are experiments we can do where an electron passes through a certain apparatus, is fed into one end of an apparatus, comes out the other side of the apparatus. And the apparatus has several routes inside of it which the electron could potentially have taken from the input to the output. And there are experiments we can do with pieces of apparatus like this which, taken together, make a compelling case that although the electron went from here to there, it didn't go by route A, it also didn't go by route B, and it didn't go in any intelligible sense by both routes; that is, it didn't split in half, with one half taking one route and one half taking the other route; and it also didn't take neither route, okay? And what's puzzling about that is that that would seem to be all the logical possibilities that there are. These experiments, you know, are now very routine experiments to do in physics laboratories. We've been good at doing experiments like this for something on the order of 70 years now. We're very good at doing them now. The results are very, very compelling.
And after enormous soul-searching and puzzlement and confusion and so on and so forth, the sort of standard consensus understanding that evolved in physics of situations like this is that electrons could apparently be in situations where asking a question of the form "which route did the electron take?" was something like asking a question of the form "what is the marital status of the number five?" Or "what are the political affiliations of this tuna sandwich?" or something like that. These are questions that philosophers often refer to as category mistakes, okay? The very raising of a question about the political affiliations of a tuna sandwich or the marital status of the number five indicates that there's something basic that you're misunderstanding of what it is that you're asking a question about. And the strikingly strange thing about quantum mechanics -- and indeed it seems to me a case could be made that this is the strangest and most unsettling result to come out of the natural sciences since the scientific revolution of the Renaissance -- is that even things like particles can be in conditions where it simply radically fails to make sense even to ask where the thing is located in space. What's particularly strange about this is that of course there are other circumstances where it does make sense to ask those questions about where it's located in space. The electron determinately goes into the box over here and comes out over there, okay? But we can give good arguments from these experiments that while it's inside, it's not merely that we don't know where it is, it's something much more radical and much more unsettling than that: that the very act of raising a question about where it is represents some kind of misunderstanding of what mode of being that electron is participating in while it's going through this device. 
David Albert: Anyway, here's a further fact about electrons: if we go -- if we do one of these experiments that I just described and stop it in the middle, rip the box open, okay, and go look for the electron, as a matter of fact we always find it in some determinate position, okay? We have equations, we have basic laws of motion: the Schrödinger equation in the case of nonrelativistic quantum mechanics; in the case of relativistic quantum mechanics the fundamental equations are the Dirac equation or the Klein-Gordon equation. Anyway, we have these fundamental laws of motion for things like electrons; indeed, for all material things. And these laws are very successful at predicting when these strange -- let me back up and say these conditions in which it fails to make sense to ask whether the electron is here and here -- in a long, distinguished tradition of facing a mystery that one doesn't understand by at least making up a name for it, a name has been made up for this condition. People speak of electrons in such circumstances as being in a superposition of going along route A and going along route B. And although it's very difficult for us to get our heads around what this word means, we are very adept at treating these situations mathematically. We have very reliable equations that tell us when and under what circumstances these superpositions are going to arise and when they're going to go away, and blah blah blah blah blah. Good. Further empirical fact: when we rip open these boxes and look for these electrons, we always find them in one position or another, okay? So that somehow the act of looking at them makes these superpositions go away, okay? Good. 
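The "neither route A, nor B, nor both, nor neither" logic can be made concrete with a few lines of arithmetic on amplitudes. The following is an illustrative Mach-Zehnder-style toy sketch (my own, not Albert's formalism): amplitudes for the two routes are added and then squared, and the resulting detector statistics depend on the relative phase between the routes in a way that no story of the form "the electron really took one definite route" can reproduce.

```python
import numpy as np

# Toy interferometer: BS is a 50/50 beam splitter, phi a phase on route B.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def detection_probs(phi):
    state_in = np.array([1.0, 0.0])            # electron enters one input port
    routes = BS @ state_in                     # superposition of routes A and B
    phase = np.diag([1.0, np.exp(1j * phi)])   # relative phase picked up on B
    out = BS @ phase @ routes                  # routes recombined
    return np.abs(out) ** 2                    # detector probabilities

p_zero = detection_probs(0.0)       # (0, 1): every electron at detector 2
p_pi = detection_probs(np.pi)       # (1, 0): every electron at detector 1
p_classical = np.array([0.5, 0.5])  # "definitely took A or B" prediction
```

A 50/50 classical mixture of "took route A" and "took route B" predicts probabilities (0.5, 0.5) at the detectors for every phase, whereas the superposition sweeps from (0, 1) to (1, 0) as the phase varies.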
On the other hand, we could perform the following exercise: take these fundamental equations that we have discovered and which we have very good reason to believe are reliable at predicting when superpositions are going to arise and when they're going to go away and so on and so forth; use those equations to predict what ought to occur when we rip the box open and look inside. That may sound like a very difficult calculation to do. It involves this macroscopic human being and his brain and so on and so forth. Actually, there's a mathematical trick for getting this calculation done, as miraculous as that sounds. And it's very easy to show that what these equations predict ought to occur when we rip this box open is that we ourselves go into a superposition of seeing the electron on route A and seeing the electron on route B, okay? That is, that we ourselves go into some condition in which not only does it fail to make sense to ask where the electron is; it fails to make sense even to ask about our beliefs about where the electron is, okay? Or it fails to make sense to ask whether we're in the brain state corresponding to believing that the electron is on route A or in the brain state corresponding to believing that the electron is on route B. Good. God knows what the hell that would feel like, okay? But the usual way of setting up this measurement problem is merely to observe that whatever it is that would feel like, that's not what happens to us when we rip open these boxes. When we rip open these boxes, there is always a perfectly determinate matter of fact about where we take the electron to be. Sometimes we see it on route A; sometimes we see it on route B; it's never the case that anything else is going on. It's never the case that it looks fuzzy, or we get nauseous, or we become disoriented, or in any sense that one can put one's finger on there fails to be a fact about where we see the electron. Good. So we have a flat-out contradiction. 
And this is the more explicit version of the way this story about the glass breaks down. We have a flat-out contradiction between, on the one hand, the predictions of the fundamental quantum mechanical equations of motion about what ought to happen when we rip open these boxes, and our everyday introspective experience of what's going on when we rip open these boxes, which is that in each of those occasions we either see an electron there, or we see an electron there. These two claims flatly contradict one another. Of course, you know, the empirical claim is the one that's true; that's the one that we see from our observations. There's something wrong with these equations. On the other hand, we also know that there's an enormous amount that's right about these equations. These equations are where, you know, indescribably vast swathes of 20th century science and technology come from. These equations get an enormous amount right. On the other hand, it couldn't be more obvious from our everyday experience of the world and of ourselves that something is wrong with them. There's a problem about how to put these two facts together. There's a problem more specifically about how to modify the theory in such a way that this contradiction goes away without ruining the rest of the good predictions of the theory. This problem, once again, is called the measurement problem.
Degeneracy of Hydrogen atomic orbitals with different l-values but same n-value

1. Mar 17, 2012 #1 I am terribly confused. I have always been hearing that in the hydrogen atom, 2s and 2p orbitals have the same energy. Similarly, the 3s, 3p and 3d orbitals have the same energies. This is also suggested by the hydrogen spectrum, my professor also believes the same, and I am unable to find anything against this on the internet. But what is the basis for this degeneracy? Upon solving the radial equations for 2s and 2p orbitals, we get the same eigenvalue for energy, which depends only on the principal quantum number n. However, the wave functions also have an angular part, and upon solving the angular equations for 2s and 2p we get a zero value for the 2s (angular momentum = 0) and a finite value for 2p (angular momentum = sqrt(2)*hbar). This angular momentum will contribute an extra value of (sqrt(2)*hbar)^2/(2*I) to the energy. This will immediately give 2s and 2p different energy values, so they cannot be degenerate. Have I gone wrong somewhere?

3. Mar 17, 2012 #2 The l-degeneracy, i.e. the fact that E(n,l) = E(n) is l-independent, is due to a hidden dynamical symmetry of the 1/r potential which results in an additional conserved quantity, the so-called Laplace-Runge-Lenz vector. The 1/r potential has not only the obvious SO(3) symmetry for spatial rotations but a larger SO(4) symmetry. The existence of the Laplace-Runge-Lenz vector and the l-degeneracy allows one to solve the energy eigenvalue problem algebraically w/o solving the Schrödinger equation (W. Pauli). Just google for hydrogen atom SO(4) and you will find numerous articles, scripts and presentations. I am pretty sure that we had this discussion here a couple of times.
Your reasoning regarding the additional l-term in Veff(r) giving the Ylm(Ω) functions a different energy is not correct, because different l-values also affect the Rnl(r) functions. I think you can't understand the l-degeneracy by just solving the Schrödinger equation (you can derive it, but you don't see the deeper reason).

4. Mar 19, 2012 #3 Thanks! :) I get it, it has got something to do with the symmetry, I'll go and look that up. However, I still find my reasoning contradictory to this, and I am unable to see any flaw in it. I agree that different l-values affect the R(r) functions; however, they don't affect the eigenvalues obtained when the separable Hamiltonian acts on R, which depend only on 'n'.

5. Mar 19, 2012 #4 To be more specific about the difference of radial functions for orbitals with different l: the 2s orbital has 1 radial node, 2p zero nodes. In general ns has n-1 nodes, np n-2, nd n-3, etc. These additional nodes make up for the lower centrifugal potential of states with lower l as compared to states with higher l.

6. Mar 21, 2012 #5 Thanks! :) I suppose the Hamiltonian need not be separable, after all.

7. Mar 21, 2012 #6 Vanadium 50: Tom is correct. The symmetry is more explicit in parabolic coordinates. However, working in parabolic coordinates is not simple.
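The degeneracy under discussion is easy to verify numerically. Below is a sketch in atomic units using a plain finite-difference discretization of the radial equation (the grid parameters are arbitrary choices of mine): the 2s energy (second eigenvalue of the l = 0 problem) and the 2p energy (lowest eigenvalue of the l = 1 problem) both land near E_2 = -1/8 hartree, even though the effective potentials and radial functions differ.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Radial hydrogen problem in atomic units:
#   -u''/2 + [l(l+1)/(2 r^2) - 1/r] u = E u,   u(0) = u(r_max) = 0
r_max, n_pts = 80.0, 6000
r = np.linspace(r_max / n_pts, r_max, n_pts)
h = r[1] - r[0]

def levels(l, k):
    """Lowest k eigenvalues for angular momentum quantum number l."""
    v_eff = -1.0 / r + l * (l + 1) / (2.0 * r ** 2)
    diag = 1.0 / h ** 2 + v_eff                 # finite-difference -u''/2
    off = np.full(n_pts - 1, -0.5 / h ** 2)
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='i', select_range=(0, k - 1))

E_1s, E_2s = levels(0, 2)      # l = 0: 1s and 2s
E_2p = levels(1, 1)[0]         # l = 1: 2p
# E_2s and E_2p agree to discretization error, both near -0.125 hartree
```

Putting any non-Coulomb term into the potential (screening, for instance) splits E_2s from E_2p, which is the numerical counterpart of the statement that the degeneracy hangs on the SO(4) symmetry peculiar to the 1/r potential.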
Durham e-Theses

Modern approaches to the exchange-correlation problem Peach, Michael Joseph George (2009) Modern approaches to the exchange-correlation problem. Doctoral thesis, Durham University. Kohn-Sham density functional theory (DFT) is the most prevalent electronic structure method in chemistry. Whilst formally exact, in practice it affords reasonable accuracy with reasonable computational cost and is the method of choice when considering molecules of non-trivial size. The key quantity is the exchange-correlation energy functional, the exact form of which is unknown. Approximate exchange-correlation functionals, particularly B3LYP and PBE, are routinely applied to chemical problems. However, it is not possible to guarantee a given accuracy in advance, nor is there a systematic means of obtaining a more accurate answer. Existing functionals are applied to ever more challenging problems and the accuracy required of them is continually increasing; the need for more accurate functionals is one of the major challenges in electronic structure theory. This thesis focuses on several approaches that attempt to address this issue. In chapter 1 the electronic structure problem is outlined and discussed in terms of the Schrödinger equation and solutions involving wavefunctions. In chapter 2, the formal foundations of DFT are presented and methods of approximating the exchange-correlation functional are introduced. A promising new direction for developing exchange-correlation functionals, through attenuation of the exchange term, is introduced and discussed in detail in chapter 3. The accuracy of such functionals is investigated and compared to that obtained from conventional approaches, with a particular emphasis on the dependence on the attenuation parameters. It is then demonstrated that attenuated functionals offer the prospect of significantly improved descriptions of excitation energies, particularly for those of charge-transfer character.
Application of attenuated functionals to excitation energies that are problematic for conventional functionals is undertaken in chapter 4. Insight into the conflicting performance of conventional methods for different charge-transfer excitations is provided through a consideration of the overlap between the orbitals involved in an excitation. Through this overlap quantity, a diagnostic test is proposed that enables a user to judge in advance the reliability of excitation energies from conventional functionals. Attenuated functionals are then applied to other difficult properties in chapter 5. Firstly they are used to study the bond length alternation and band gap in polyacetylene and polyyne oligomers and infinite chains. Then they are used to calculate nuclear magnetic resonance parameters in both main-group and first-row transition metal systems, through the theoretically rigorous optimised effective potential method. An entirely different approach to functional development is considered in chapter 6, where the adiabatic connection formalism is introduced as an alternative method of obtaining the exchange-correlation functional. For a series of two-electron systems, exact input data is used to determine the applicability of a number of simple mathematical forms in modelling the exact adiabatic connection. The conclusions from these simple systems are then used to provide insight into the possibility of using this approach in functional development.

Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Thesis Date: 2009
Copyright: Copyright of this thesis is held by the author
Deposited On: 08 Sep 2011 18:24
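The overlap idea behind the chapter-4 diagnostic can be caricatured in a few lines. This is a hypothetical 1D toy of mine, not the thesis's molecular-orbital machinery: model an occupied and a virtual orbital as normalized Gaussians and compute O = ∫|φ_i||φ_a| dx. When the two orbitals occupy the same region the overlap is near 1 (a local excitation); when they are spatially separated, as in a charge-transfer excitation, it collapses toward 0, flagging the regime where conventional functionals are unreliable.

```python
import numpy as np

# 1D grid and two toy "orbitals" (hypothetical Gaussians of unit width)
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def gaussian(center, width=1.0):
    g = np.exp(-((x - center) ** 2) / (2.0 * width ** 2))
    return g / np.sqrt(np.sum(g ** 2) * dx)     # L2-normalize on the grid

def overlap(phi_i, phi_a):
    # O = integral of |phi_i||phi_a|; lies in [0, 1] for normalized orbitals
    return np.sum(np.abs(phi_i) * np.abs(phi_a)) * dx

local_excitation = overlap(gaussian(0.0), gaussian(0.5))  # orbitals coincide
charge_transfer = overlap(gaussian(0.0), gaussian(8.0))   # orbitals far apart
```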
Thursday, 22 May 2014 Mr Clay and a Meaningless Navier-Stokes Prize Problem Turbulent flow around a landing gear as non-smooth solution of the 3d incompressible Navier-Stokes equations, by CTLab KTH. Watch also turbulent flow around an airplane in landing configuration. To argue that these flows are smooth would be a meaningless abuse of mathematical language. The Clay Mathematics Institute (CMI) founded by Landon T. Clay celebrated the new Millennium by setting up 7 Prize Problems, each worth $1 million, presented in beautiful words:
• The Clay Mathematics Institute (CMI) grew out of the longstanding belief of its founder, Mr. Landon T. Clay, in the value of mathematical knowledge and its centrality to human progress, culture, and intellectual life....
• further the beauty, power and universality of mathematical thinking...deepest, most difficult problems... achievement in mathematics of historical dimension
• elevate in the consciousness of the general public the fact that, in mathematics, the frontier is still open and abounds in important unsolved problems...
• Problems have long been regarded as the life of mathematics. A good problem is one that defies existing methods...whose solution promises a real advance in our knowledge.
I have long argued that since the Navier-Stokes Prize Problem is formulated without including the fundamental aspects of wellposedness and turbulence, it misses these values and thus is not a good Prize Problem. Here is my argument again: Consider the incompressible Navier-Stokes equations with viscosity $\nu >0$ in the case of (very) large Reynolds number $Re =\frac{UL}{\nu}$ with $U$ global flow speed and $L$ global length scale. Assume $U=L=1$ and thus $\nu$ (very) small. Such flows are observed physically and computationally to be turbulent, with substantial velocity fluctuations $u\sim \nu^\frac{1}{4}$ on a smallest spatial scale $\epsilon\sim\nu^\frac{3}{4}$ and corresponding substantial viscous dissipation $\sim 1$.
For the jumbo jet in the above simulation $Re\approx 10^8$ and the smallest scale is a fraction of a millimeter. The heuristic argument to this effect goes as follows: A: Breakdown to smaller scales only takes place for sufficiently large local Reynolds number (of size 100 or more), which gives the following relation for the fluctuations $u$ on the smallest scale $\epsilon$:
• $\frac{u\epsilon}{\nu}\sim 1$.
B: Substantial dissipation on the smallest scale $\epsilon$ means
• $\nu (\frac{u}{\epsilon})^2\sim 1$.
Combination of A and B gives $u\sim \nu^\frac{1}{4}$ and $\epsilon\sim\nu^\frac{3}{4}$ as stated. This can be viewed to express Lipschitz-Hölder continuity with exponent $\frac{1}{3}$, and thus that turbulent solutions for (very) small $\nu$ are non-smooth, because they are $Lip^{\frac{1}{3}}$ on (very) small scales. The existence of such turbulent solutions can mathematically be proved by standard methods by regularization on scales much smaller than $\epsilon$, which changes the NS equation but not the solution. For smooth data such solutions to regularized NS could formally be proved to be smooth in the sense of the formulation of the NS Prize Problem by Fefferman, but this would be in conflict with the observation that solutions are non-smooth ($Lip^{\frac{1}{3}}$) on (very) small scales $\sim\nu^\frac{3}{4}$. The only mathematically and physically reasonable way to resolve this conflict of definitions would be to view turbulent solutions as non-smooth ($Lip^{\frac{1}{3}}$ on very small scales), and thus as weak solutions, with weakly small but strongly large Euler residuals; the aspect of wellposedness would then be of focal interest. Computational sensitivity (stability) analysis shows that turbulent weak solutions are weakly wellposed in the sense that solution mean-values are not highly sensitive to perturbations of data (while point-values are).
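The algebra in A and B can be checked mechanically. A sketch with sympy (the jumbo-jet length of roughly 70 m below is my own illustrative figure, not a number from the post):

```python
import sympy as sp

# The two heuristic relations A and B, solved for the fluctuation u and
# the smallest scale epsilon in terms of the viscosity nu.
nu = sp.symbols('nu', positive=True)
u, eps = sp.symbols('u epsilon', positive=True)

rel_A = sp.Eq(u * eps / nu, 1)           # local Reynolds number of order one
rel_B = sp.Eq(nu * (u / eps) ** 2, 1)    # substantial dissipation on scale eps
sol = sp.solve([rel_A, rel_B], [u, eps], dict=True)[0]
# sol[u] = nu**(1/4), sol[eps] = nu**(3/4)

# Hoelder exponent 1/3: u scales like eps**(1/3)
holder_ok = sp.simplify(sol[u] - sol[eps] ** sp.Rational(1, 3)) == 0

# Smallest scale for Re ~ 1e8 (nu ~ 1e-8 with U = L = 1), rescaled by an
# assumed jumbo-jet length of ~70 m:
eps_metres = float(sol[eps].subs(nu, sp.Rational(1, 10 ** 8))) * 70.0
```

With $\nu\sim 10^{-8}$ this reproduces the "fraction of a millimeter" quoted for the jumbo jet above.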
Stability analysis further shows that globally smooth solutions with derivatives of unit size for smooth data of unit size are unstable and thus are not physical solutions. The net result is that the present formulation of the NS Prize Problem is meaningless from both mathematical and physical points of view. A meaningful formulation must include wellposedness and turbulence as key issues, with existence settled by standard techniques, and a meaningful resolution would have to offer mathematical evidence of weak wellposedness and features of turbulence. I have asked Terence Tao, as a world-leading mathematician working on the Prize Problem, about his views on the aspects I have brought up, and will report his response. I have earlier many times asked Fefferman the same thing, but the only response I get is "To me my formulation is meaningful". What would then Mr Clay say if he understood that the NS Prize Problem is not meaningful outside a small group of mathematicians (which may contain just one person), when comparing to the mission to which he donated his Prize: PS It is remarkable (or deplorable) that my repeated request to start a discussion about the formulation of the Prize problem is met with complete silence from those in charge of the problem. If my view-points are silly, that could be said by those who know better. If they are not silly, maybe even relevant, then it would be silly (or deplorable) to not say anything. In either case, silence is not reasonable and it is tiresome to keep silent under increasing pressure from the outside world to say something...

Wednesday, 21 May 2014 Tao on Clay Navier-Stokes and Turbulence? Terence Tao is working on the Clay Navier-Stokes Prize Problem and in a recent post considers Kolmogorov's power law for turbulence.
A heuristic derivation goes as follows: The smallest spatial scale $\epsilon$ of a fluctuation $u$ of turbulent incompressible flow of small viscosity $\nu >0$ is determined by a local Reynolds number condition
• $\frac{u\epsilon}{\nu}\sim 1$.
Assuming the smallest scale carries a substantial part of the total dissipation gives
• $\nu (\frac{u}{\epsilon})^2\sim 1$.
Combination gives
• $u\sim \nu^{\frac{1}{4}}$
• $\epsilon\sim\nu^{\frac{3}{4}}$
suggesting that the turbulent solution is Lipschitz continuous with exponent $\frac{1}{3}$. My question to Tao, posed as a post comment, is whether according to the Clay problem formulation such a $Lip^\frac{1}{3}$ turbulent solution with smallest scale $\nu^\frac{3}{4}$ is to be viewed as a smooth solution for any small $\nu >0$?

Tuesday, 20 May 2014 Answer to My Question about Formulation of Clay Navier-Stokes Prize Problem Here is the response from the Clay Mathematics Institute to my message that the formulation of the Navier-Stokes Prize Problem does not include the fundamental aspect of wellposedness required for a mathematical model of a physics phenomenon to be meaningful:

Dear Dr Johnson, Thank you for your interest in the Millennium Prize Problems. Complete details can be found at As a matter of policy, the Clay Mathematics Institute does not join in discussion of the formulation of the Millennium Prize Problems, nor does it comment on potential solutions. I am afraid that we have nothing to add to what is said on the CMI's website. Best wishes, Anne Pearsall (Mrs) Administrative Assistant Office of the President, Clay Mathematics Institute Andrew Wiles Building Radcliffe Observatory Quarter Woodstock Road Oxford OX2 6GG, UK

OK, so we learn that the Administrative Assistant of the President of the Clay Mathematics Institute, not the President himself, "is afraid that we have nothing to add" and that the Institute "does not join in discussion of the formulation of the Millennium Prize Problems".
Yes, this is indeed something to be afraid of, in particular if Mr Clay himself understands that the formulation of the NS problem is unfortunate in the sense of lacking meaning to physics, and as a meaningless problem cannot have a meaningful solution. The fact that my question about the meaningfulness of the NS Problem in its present formulation is met by compact silence may be interpreted as a silent acknowledgement that the formulation indeed is meaningless, and that it is purposely so in order to reserve the problem to meaningless mathematics and guarantee that, in Newton's words, "little smatterers" are kept out.

Wellposedness vs the Clay Navier-Stokes Problem? In a sequence of posts I have argued that the omission of wellposedness in the Official Description of the Clay Navier-Stokes Prize Problem by Charles Fefferman makes the problem meaningless. To support this I quote from Wellposedness and Physical Possibility by B. Gyenis: Well posedness is widely held to be an essential feature of physical theories. Consider the following remarks of Mikhail M. Lavrentiev, Alan Rendall, and Robert M. Wald – leading experts in their respective fields of physics – intended as motivations for the continuous dependence condition:
• One should remember that the main goal of solving mathematical problems is to describe certain physical processes in mathematical terms. In this case the initial data are obtained experimentally; and since measurements cannot be absolutely precise, the data contain measurement errors. For a mathematical model to describe a real physical process, the problem should be supplemented with some additional requirements reflecting, in a physical sense, the fact that the solution should have only small variations under slight changes of initial data or, to put it conventionally, the stability of the solution under small perturbations in the data. (Lavrentiev et al.; 2003, p. 6)
• The condition of continuity is sometimes called Cauchy stability.
The reason for including it is as follows. If PDE are to be applied to model phenomena in the natural world it must be remembered that measurements are never exact but always associated with some error. As a consequence it is impossible to know initial data for a problem exactly and so if solutions depend on the initial data in an uncontrollable way the model cannot make useful predictions. Cauchy stability guarantees that this does not happen and thus represents a necessary condition for the application of PDE to the real world. (Rendall; 2008, p. 134)
• If a theory can be formulated so that "appropriate initial data" may be specified (possibly subject to constraints) such that the subsequent dynamical evolution of the system is uniquely determined, we say that the theory possesses an initial value formulation. However, even if such a formulation exists, there remain further properties that a physically viable theory should satisfy. First, in an appropriate sense, "small changes" in initial data should produce only correspondingly "small changes" in the solution over any fixed compact region of spacetime. If this property were not satisfied, the theory would lose essentially all predictive power, since initial conditions can be measured only to a finite accuracy. It is generally assumed that the pathological behavior which would result from the failure of this property does not occur in physics. [...] (Wald; 1984, p. 224)
These remarks express a sentiment widely shared among physicists: wellposedness is a necessary condition for models to describe real physical processes. Lack of wellposedness would be pathological and it "does not occur in physics," at least not in describing forward time propagation of physical processes. OK, so leading experts of physics consider wellposedness to be a necessary requirement for a mathematical model of some physical phenomenon to be meaningful.
The Navier-Stokes equations are the basic model of fluid mechanics, and as such require some form of wellposedness to be meaningful. The leading mathematical expert Charles Fefferman formulates the Clay Navier-Stokes problem without reference to wellposedness and thus apparently considers wellposedness not to be a central aspect. But in doing so Fefferman separates the mathematics of the Navier-Stokes equations from physics, which goes against the reason for formulating a Prize Problem about a mathematical model of fundamental importance in physics. When I ask the Clay Institute and Fefferman to comment on these facts, I get zero response. I think my viewpoints are reasonable and essential and thus worthy of some form of answer.

Monday, 19 May 2014 Wellposedness and Turbulence Not Part of Clay Navier-Stokes Problem! Dear Colleagues: Sincerely, Claes Johnson

Sunday, 18 May 2014 Crisis in Mathematics Education in France like in Sweden Mathematics education is falling freely also in France, as reported on Images des Maths (in my translation):
• The many problems present in mathematics education today are of concern to everybody. Results have been falling since 1990.
• Many people speak thereof but few do anything about it.
• The debate is troublesome in the community of mathematicians, and even more so for the general public.
• The phenomenon has several causes:
• One is the training of mathematics teachers. It was better before.
This is the same analysis as in Sweden, based on the following postulates:
1. The training of math teachers was good before and math education was then working.
2. Today math education does not work anymore and the reason can only be that the training of math teachers is not as good as before.
3. Hence what is needed is re-training of math teachers to the old standard.
Billions of tax-payer money is now spent in Sweden on re-training in collegial form, where teachers without "good" education "lift" each other into the old level of training. Of course, the result is small and much money and effort is lost. What is forgotten, both in France and Sweden, is that in our computer time, math education has a new role to play and the old role is outdated and cannot be resurrected. Very few people in the math community are willing to face this reality, and the result is that the fall of math education continues to new low levels each year, in France and Sweden alike.

Saturday, 17 May 2014 BodySoul Mathematical Simulation Technology Translated to Chinese Dear Professor Johnson, We look forward to your favorable response, Zhimin.

Almost Dictatorial Consensus in Germany An internal memo On the situation in the field of meteorology-climatology of the German Meteorological Society reveals a growing and widespread worry over the suppression of scientific views under almost dictatorial consensus:
• ….how certain developments are becoming cemented into their scientific fields (foremost climatology) which from a scientific point of view simply cannot be accepted and do not comply with their professional ethics.
• In meteorology-climatology everyone includes a highly visible army of organized, little-known persons; in Germany this is almost the entire public!
• The changes that have taken place in science as a result have in our opinion (and that of others) led to very negative impacts on the quality standards of science.
• For example, expressed and disseminated meteorological flaws can hardly be contained and cannot be corrected publicly at all. Yet our meteorological scientists do not speak up.
• And it is hardly perceived that behind these developments – admittedly – there is also a political objective for the transformation of society, whether one wants it or not. Currently global sustainable change is the same thing.
• Meteorology-climatology is playing a decisive role in this political action. The – alleged – CO2 consensus here serves as a lever within the group that consists of known colleagues who deal with climate, but also of a large number of climate bureaucrats coming from every imaginable social field. Together both groups have consensually introduced a binding dogma into this science (something that is totally alien to the notion of science). • This is not the first time such a thing has happened in the history of science. Although this dogma came about through democratic paths (through consensus vote?), in the end it is almost dictatorial. • Doubting the dogma is de facto forbidden and is punished. In climatology the doubt is about datasets or results taken over from hardly verifiable model simulations from other parties. Until recently this kind of science was considered conquered – thanks to our much celebrated liberal/democratic foundation! • The constant claim of consensus among so-called climatologists, who relentlessly claim that man-made climate change has been established, attempts to impose by authority an end to the debate on fundamental questions. • Thus a large number of scientific colleagues end up being ostracized, which could lead to the prompting of actions that would place considerable burdens on a well-intended society. Such a regulation, and the incalculable consequences it would have for all people, would in our view – and that of many meteorological specialists we know – be irresponsible with respect to our real level of knowledge in this field. • We must desire in general, and also in our scientific field, a return to an international scientific practice that is free of pre-conceptions and cemented biased opinions. • This must include the freedom to present (naturally well-founded) scientific results, even when these do not correspond to the mainstream (e.g. the IPCC requirements). 
The bullying of Lennart Bengtsson is a recent example of violation of scientific/democratic principles in the name of "almost dictatorial consensus". Another is KTH-gate. Where is Western society heading? fredag 16 maj 2014 Towards Computational Solution of Clay Navier-Stokes Problem 3 The formulation of the Clay Navier-Stokes Prize problem is unfortunate, or more precisely both mathematically and physically meaningless, because the following two completely fundamental aspects are not included: 1. wellposedness 2. turbulence. To see the effect, consider exterior flow with a slip boundary condition, which allows a unique stationary smooth near-solution in the form of potential flow with a Navier-Stokes residual which scales with the viscosity $\epsilon$. Smooth potential flow thus offers a solution to the NS equations with a vanishingly small residual under vanishingly small viscosity. But potential flow is not stable, since under small perturbation it develops into a completely different turbulent solution. In other words, potential flow is not wellposed in any sense and thus not a physical solution. The present problem formulation without 1 and 2 does not allow unphysical smooth potential flow to be distinguished from physical turbulent flow. The result is that the Clay NS problem has no meaningful solution and does not serve the purpose of a Prize problem. Note that the Clay NS problem is introduced with the following description of the essence of the problem and its importance to humanity: But turbulence is not an issue in the official formulation. The secret to unlock is turbulence, but that is not part of the problem formulation. Something is weird here. I have pointed that out to the President of the Clay Mathematics Institute and will report the reaction. 
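The potential-flow claim above can be checked symbolically. The following sketch (my illustration, not a computation from the post) uses the classical 2D potential flow past a unit cylinder and verifies that it satisfies the stationary incompressible Navier-Stokes equations exactly in the interior, for every viscosity, so that any residual can only come from the boundary conditions:

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

# Velocity potential for 2D flow past a unit cylinder (classical example).
phi = x + x / (x**2 + y**2)
u, v = sp.diff(phi, x), sp.diff(phi, y)

# Incompressibility: div u = Laplace(phi) = 0 away from the cylinder.
div_u = sp.simplify(sp.diff(u, x) + sp.diff(v, y))

# Bernoulli pressure for irrotational flow.
p = -(u**2 + v**2) / 2

# Stationary Navier-Stokes residual u.grad(u) + grad(p) - eps*Laplace(u),
# component by component; both reduce to zero for every eps.
res_x = sp.simplify(u*sp.diff(u, x) + v*sp.diff(u, y) + sp.diff(p, x)
                    - eps*(sp.diff(u, x, 2) + sp.diff(u, y, 2)))
res_y = sp.simplify(u*sp.diff(v, x) + v*sp.diff(v, y) + sp.diff(p, y)
                    - eps*(sp.diff(v, x, 2) + sp.diff(v, y, 2)))
```

The viscous term vanishes identically because the velocity components of a potential flow are themselves harmonic, which is why the smoothness of potential flow by itself cannot distinguish it from a physical solution.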
Here is the letter: Clay Mathematics Institute I want to convey the information that the formulation of the Clay Navier-Stokes problem is incorrect both mathematically and physically, because the fundamental aspects of (i) wellposedness and (ii) turbulence are not included, as exposed in detail in the following sequence of blog posts: The result is that the problem cannot be given a meaningful solution and thus does not serve well as a Prize problem. Evidence is given by the fact that no progress towards a solution has been made. I have tried to engage Charles Fefferman, who has formulated the problem, Peter Constantin, who acts as a referee, and Terence Tao, who is working on the problem, in a discussion, but I get no response. I hope in this way to stimulate a discussion, which I think would be more constructive than no discussion. Sincerely, Claes Johnson Towards Computational Solution of Clay Navier-Stokes Problem 2 This is a continuation of a previous post: The basic energy estimate, which is easily proved analytically by multiplying the momentum equation by the velocity $u_\epsilon$ and integrating, reads for $T>0$ with $Q =\Omega\times (0,T)$: • $\int_\Omega\vert u_\epsilon (x,T)\vert^2\, dx +\int_{Q}\epsilon\vert\nabla u_\epsilon (x,t)\vert^2\, dxdt =\int_\Omega\vert u^0(x)\vert^2\, dx$ or in short notation with obvious meaning: • $U(T) + D_\epsilon (U) = U(0)$, which expresses a balance of kinetic energy $U(T)$ at time $T$ and dissipation $D_\epsilon (U)$ over the time interval $(0,T)$ summing up to the initial kinetic energy $U(0)$. Computations with small $\epsilon$ (compared to data such as $\Omega$ and $U(0)$) produce turbulent solutions characterized by • $D_\epsilon (U) =\alpha U(0)$ where $\alpha$ is not small, that is, solutions with substantial (turbulent) dissipation. 
For turbulent solutions $\vert \nabla u\vert$ is large, typically scaling like $\epsilon^{-\frac{1}{2}}$, even if initial data are smooth, which can be viewed as an expression of non-smoothness. The basic energy estimate can thus be used to signify non-smoothness by substantial turbulent dissipation. The Clay problem can thus be reduced to the question of proving that the dissipation term in the basic energy estimate is substantial. Evidence to this effect is given by computation. Analytical evidence can be given by the following argument: Smooth laminar solutions have small dissipation, but smooth laminar solutions are all unstable. If the dissipation remained small it would mean that an unstable solution would remain smooth and unstable, which is not possible under perturbation. The dissipation therefore must be substantial in the basic energy estimate and only a non-smooth solution can exist (and does exist by computation). An answer to the Clay problem may thus be possible along the following lines, assuming the viscosity is small and data are smooth: 1. Solutions exist for all time and do not cease to exist by blow-up. 2. Solutions become non-smooth (turbulent) in finite time. 3. Solutions cannot stay smooth for all time, because any smooth solution is unstable. 4. Solutions are weakly well-posed in the sense that solution mean-values are stable to perturbations, because of a cancellation effect in turbulent solutions which is not present for smooth solutions. The group of mathematicians in charge of the problem (Fefferman, Constantin and Tao) does not answer my repeated requests to open a discussion about the formulation of the problem and possible approaches to a solution. This is not helpful to progress. Mathematicians apparently want to have a heaven of their own, where they can explain phenomena which have no scientific relevance, but this is a dangerous strategy in the long run, because without connection to science, funding may cease. 
torsdag 15 maj 2014 Lennart Bengtsson vs Royal Swedish Academy on Swedish Climate Science and Politics Lennart Bengtsson indicates that the statement from 2009 by the Royal Swedish Academy of Sciences on the Scientific Basis of Climate Change, authored mainly by himself as the leading Swedish climate scientist and expressing (cautious) support of the CO2 alarmism propagated by IPCC, is due for a revision. Since 2009 LB has turned from supporter to skeptic of IPCC CO2 alarmism, which he has made very clear in media outside Sweden. The question now is whether LB will participate in forming the revision or not. If the standpoint of LB as skeptic dominates the revision, which is reasonable since he is the leading climate scientist in the Academy, then the new statement will express skepticism of CO2 alarmism and there will be no scientific foundation for current Swedish climate politics. If the standpoint of LB turns out to be incompatible with that of the Academy, then the revision will be formed without the participation of the leading climate scientist in Sweden and will then carry no weight, and the result will be the same. It seems that interesting times await the Academy and Swedish climate science and politics. For an account of the related GWPF story see Climate Depot. LB's recent article pointing to small climate sensitivity has been rejected for publication on political grounds, since it questions the dogma of climate alarmism. See the article in The Times and Roy Spencer. Towards Computational Solution of Clay Navier-Stokes Problem 1 The Clay Navier-Stokes problem as formulated by Fefferman asks for a mathematical proof of (i) existence, for smooth initial data, of smooth solutions for all time to the incompressible Navier-Stokes equations, or (ii) blow-up of a solution in finite time. No progress towards an answer has been made since the problem was announced in 2000. 
It appears that the available tools of mathematical analysis by pen and paper are too crude to give an answer. Let me here sketch (see also earlier posts) an approach based on digital computation which may give an answer. We then consider the incompressible Navier-Stokes equations in velocity $u=u_\epsilon (t,x)$ and pressure $p=p_\epsilon (t,x)$: • $\frac{\partial u}{\partial t}+u\cdot\nabla u +\nabla p =\epsilon\Delta u$  • $\nabla\cdot u =0$ for time $t > 0$ and $x\in\Omega$ with $\Omega$ a three-dimensional domain, subject to smooth initial data $u_\epsilon (0,x)=u^0(x)$ and slip or no-slip boundary conditions. Here $\epsilon > 0$ is a constant viscosity, which we assume to be small compared to data ($\Omega$ and $u^0$). Computed solutions show the following dependence on $\epsilon$ under constant data: 1. $\Vert\epsilon^{\frac{1}{2}}\nabla u_\epsilon\Vert_{L_2(L_2)} \sim 1$ 2. $\Vert\epsilon\Delta u_\epsilon\Vert_{L_2(H^{-1})}\sim \epsilon^{\frac{1}{2}}$ 3. $\Vert\epsilon\Delta u_\epsilon\Vert_{L_2(L_2)}\sim \epsilon^{-\frac{1}{2}}$. Here 1 reaches the upper bound of the standard energy estimate, which can be proved analytically, and shows that $\nabla u_\epsilon$ becomes large with decreasing $\epsilon$ as a quantitative expression of non-smoothness, with 2 a variant thereof. Also 3 expresses non-smoothness in quantitative form, with $\epsilon\Delta u_\epsilon$ being small in a weak norm but large in a strong norm. Computation is thus observed to produce solutions to the Navier-Stokes equations with an increasing degree of non-smoothness as $\epsilon$ tends to zero, which can be seen as an answer to the Clay question in the direction of (ii), but not quite, since the solution does not cease to exist by "blow-up" and continues as a non-smooth weak solution. Computed solutions satisfying 3 are turbulent. Mean-value outputs of turbulent solutions show small variation as the viscosity becomes small, in particular with slip. 
This can be seen to express weak well-posedness under variation of small viscosity, which may allow the conclusion to be carried from computationally resolvable small viscosity to vanishingly small viscosity beyond computation. We may compare with the attempt by Terence Tao to construct a non-smooth solution by pen and paper in a thought experiment, where the computation is left to the reader of a dense 70-page "computer program" expressed in analytical mathematics. We let instead the computer compute the solution following a standard (freely accessible) computer program, which allows the reader to do the same, inspect the solution, verify 3, and thus get an answer to the Clay problem. onsdag 14 maj 2014 Shocking Message from Lennart Bengtsson Muted by Climate Alarmists Die Klimazwiebel publishes the following shocking letter from Lennart Bengtsson, forced to resign from the advisory board of GWPF under group pressure from politically correct CO2 alarmists: I have recently communicated with LB and expressed my great admiration for his courageous questioning of CO2 alarmism in media, because in his view the scientific reason for it is lacking. He then said that he would continue to fight for scientific truth, following his responsibility as a leading scientist. But LB has now been muted by naked power and the order of climate alarmism is re-established. What a terribly sad story this is! For Sweden, Science and the World! See also Climate Depot and Tallbloke and Bishophill and JoNova and Climate Audit and the reaction from David Henderson, Chairman, GWPF’s Academic Advisory Council: • With great regret, and all good wishes for the future. No wonder that the reaction is so strong: Big values are at stake. 
The whole alarmist ship is sinking and desperation is spreading… One day not too far away LB will be glorified as a scientist ready to follow his conviction, now only temporarily overpowered…, no matter what the cost may be… PS1 As noted by Lubos, the event may drive LB into a true skeptic position, rather than back to alarmism, a position now taken by many scientists and thus, if not maximally comfortable, probably livable. PS2 More on the Swedish Klimatupplysningen and Antropocene and MSM outside Sweden: Towards a Solution of the Clay Navier-Stokes Problem 2 The Clay Millennium Navier-Stokes problem concerns properties of solutions of the incompressible Navier-Stokes equations as the basic model of fluid mechanics, of fundamental importance in both science and mathematics. The official problem description by Charles Fefferman poses the following alternatives: 1. Existence of smooth solutions for all time from smooth initial data? 2. Cessation of existence ("break-down" or "blow-up") of a solution from smooth initial data? No progress towards a solution has been made since the formulation in 2000. Existence of smooth solutions for all time seems impossible since the viscosity term is not strong enough. All efforts to construct a solution with blow-up have failed because the viscosity term is too strong. No answer thus seems to be possible and a scientific deadlock is reached. Over the years I have, without success, tried to convey the message that the reason for the deadlock is that Fefferman's problem formulation is both mathematically and physically meaningless, because the fundamental aspect of (Hadamard) wellposedness, or stability of solutions to perturbations, is not included. Including well-posedness leads to the following possible answer, which is neither 1 nor 2 and which deals with the case of small viscosity (compared to initial data): • Turbulent solutions always develop in finite time from smooth initial data. 
• A turbulent (non-smooth) solution is characterized by having a Navier-Stokes residual which is small in a weak $H^{-1}$-norm and large in a strong $H^1$-norm. • Turbulent solutions are weakly wellposed by having stable mean-value outputs. I have tried to get some comment from Terence Tao, Charles Fefferman and Peter Constantin, who are in charge of the problem formulation and serve as referees to evaluate proposed solutions. The response I get is that the problem formulation without wellposedness by Fefferman is fine as a mathematical problem, even if it does not make sense from a physics point of view, and that it may well be that a solution will never be reached, but if so, let it be. But why not include wellposedness and make the Clay Navier-Stokes problem meaningful from a physics point of view, and thereby meaningful as a challenge to the development of mathematics? Why not open to possibility instead of impossibility? Why spend major efforts on a meaningless question without an answer? I pose this question to Fefferman, Constantin and Tao, with the hope of getting some response, to be reported. PS1 We may compare with the lack of global warming since 2000: No progress of the temperature whatsoever. With this evidence one may ask if there may be some fundamental flaw in the idea of global warming. PS2 Terence Tao sets out to "construct" a self-replicating solution of the Navier-Stokes equations which "blows up" in a 70-page paper-and-pen exercise, which turns out to be impossible. We let instead the computer construct solutions, which turns out to be possible, and we observe that the constructed solutions become turbulent and thus show a form of blow-up. PS3 It does not seem that Fefferman et al are interested in communicating outside their own group, and so they respond with silence, whatever that means. Is this a sign of the healthy, strong science which Mr. Clay presumably would prefer to support? 
The consequences are far reaching: If the Clay problem formulation is wrong, then something bigger is wrong. tisdag 13 maj 2014 Parameter-Free Fluid Models: How to Make Einstein Happy Towards Solution of the Clay Navier-Stokes Problem? More on the Clay problem here and here. PS2 Quanta reports: måndag 12 maj 2014 Modern Physics as a Mess Alexander Unzicker presents in The Higgs Fake a relentless criticism of modern physics, also presented on Youtube here and here. Take a look, and think for yourself! Correspondence with Lennart Bengtsson (Who Promises to Fight for Science) Letter from me to Lennart Bengtsson, May 11: Hello Lennart, As you have probably noticed, I have in repeated comments on your contributions in the media urged you to work for a rewriting of KVA's climate statement, from politically correct support of IPCC into a correct scientific analysis of the dogma of CO2 alarm. You have not answered my comments, but I hope you will answer this direct mail and say how you view KVA's statement and whether you consider that it must now be rewritten. KVA's statement forms the basis of Swedish climate policy, and as its author and a leading scientist you carry a great responsibility. Kind regards, Claes Reply from LB: I can only inform you that I have just been seriously criticized by an Academy colleague on the grounds that KVA's climate statement was watered down and that I was to blame for this. At the same time you accuse me of having written an alarmist statement. These can hardly both be true? I suggest that you contact professor Olle Häggström, so that the two of you can arrive at a new formulation that you both can support. KVA will in any case rewrite the statement, but this will hardly be with my participation. My reply to LB: Thank you for your answer, Lennart. Why will you not take part in writing KVA's new climate statement, which you say is under way? Have you been cut out, or do you voluntarily choose to hand over the responsibility to people who know less than you do? 
How, in that case, do you carry your responsibility as a scientist? Reply from LB: Together with several members of the Academy's 5th class I was responsible for the statement that was completed in September 2009 and then approved by KVA with two reservations, as far as I can recall. Olle Häggström was not one of them as far as I know; he was on the whole positive. He was consulted at some point during the work. If the Academy now chooses to write a new statement, I am perhaps not the right person to do this after all the personal attacks I have been subjected to. KVA may want a less controversial person than me to lead this, one who is also more in line with the view preferred by the political side. I can understand this, but I do not share such a view. My responsibility as a scientist is a personal matter and I will of course keep it. That I am therefore now subjected, and surely will continue to be subjected, to all kinds of criticism from both "left" and "right" is of course something I have to live with. Whether the criticism is justified or not is hardly for me to decide. Here it is my impression that you share the critical view with Olle Häggström. My reply to LB: I consider that you have a responsibility wider than the merely personal for ensuring that KVA's new statement will be based on science and not politics. You have courageously voiced your conviction as a scientist in the media, and that is admirable. I hope that you will not now give in to malicious attacks, but do what you can so that KVA's new statement becomes a statement worthy of a scientific academy and not another soup of political correctness. Can Sweden count on this? PS Please do not lump me together with OH, who likes me as little as he likes you. I essentially share your view as being a scientifically based position, as far as science has now reached. I only hope that you continue to assert your insights. My only criticism would arise if you were to refrain from doing so. 
Reply from LB: Thank you for your encouraging words. I promise to fight back... Why Feynman Said: Nobody Understands Quantum Mechanics The (trivial) commutator relation • $xp - px = ih$, where $x$ is the position (operator) and $p=\frac{h}{i}\frac{\partial}{\partial x}$ is the momentum (operator), is supposed to play a fundamental role in quantum mechanics, in particular as the origin of Heisenberg's Uncertainty Principle: • $\sigma_x\sigma_p\ge \frac{h}{2}$, where $\sigma_x$ is the standard deviation in measurements of position $x$, and $\sigma_p$ that of momentum. We see that both the commutator relation and Heisenberg's Uncertainty Principle concern the product of position and momentum. But such a product lacks physical meaning. Momentum $p$ has physical meaning and so has position $x$, but their product has no physical meaning. Momentum multiplied by velocity has a physical meaning as (twice the) kinetic energy, but momentum multiplied by position does not. Force multiplied by velocity has a meaning as work per unit time. Quantum mechanics is however obsessed with the product of momentum and position, with the message that because of the commutator relation they cannot both be determined at the same time and spot. The message is that this makes quantum mechanics fundamentally different from classical mechanics, where supposedly momentum and position can both be determined. There are two approaches to physics: 1. Make it as simple and understandable as possible. 2. Make it as complicated and mysterious as possible. Quantum mechanics has developed according to 2, as evidenced by Richard Feynman: One reason is that the product of momentum and position is given a fundamental role, in contradiction to the fact that it has no physical meaning. söndag 11 maj 2014 How to Win Any Debate: Claim You Understand Entropy! 
John von Neumann (1903-1957) was a very clever mathematician who offered the following advice: • No one really knows what entropy really is, so in a debate you will always have the advantage (by pretending that you know). This is still true, and causes a lot of confusion. If you want to improve your understanding, you could consult Computational Thermodynamics, which presents the 2nd Law of Thermodynamics resulting from the Euler equations for a compressible gas subject to finite precision computation in the following integrated form, with the dot signifying time differentiation (see the previous post): • $\dot K+\dot P = W-D$ • $\dot E = -W + D$, where $K$ is kinetic energy, $P$ potential energy, $W$ work, $E$ heat energy and $D\ge 0$ is turbulent dissipation, with $W > 0$ under expansion and $W < 0$ under compression. The sign of $D$ sets the direction of time, with energy always transferred from $K+P$ to $E$ by turbulent dissipation. Here turbulent dissipation is the same as entropy production, or the other way around: • Entropy production is the same as turbulent dissipation. This removes the mystery from entropy and you can now win any debate, by really knowing what entropy is! lördag 10 maj 2014 Lennart Bengtsson on the Burning of Books Lennart Bengtsson comments on an opinion piece in today's DN: • With the large number of academics among the signatories, one might perhaps have expected a little more critical and open thinking, and not just this fabulous mumbo-jumbo. That the world today depends on fossil energy for more than 80% of its needs, with 1.4 billion people lacking access to electricity and half of the Earth's population undersupplied with energy, hardly seems to worry this guard of knights of the light in the least. 
• The next step will presumably be to ban the incorrect thinking, or to ban or even burn unsuitable books, such as the newly published book by the prominent Belgian energy expert Samuele Furfari: "Vive les énergies fossiles!", with the subtitle "La contre-révolution énergétique". The only hopeful thing is that these signatories, or rather their climate-fighting students, do not normally read books in French. In the final stage we must expect that various unsuitable persons will also be banned in this New-Swedish reversed age of enlightenment. Against this stands the fact that LB took part in the public burning at KTH on December 4, 2010 (post with 4051 page views) of my math book, because the mathematics of simple climate models questioned the then (and still) prevailing dogma of CO2 alarmism. Can we read LB's comment as an expression that LB would not do the same thing today? Could one say that book burning is not good because it leads to increased CO2 emissions? Strange Laws by Strangest Man: Dirac                Paul Dirac, The Strangest Man, who conjured (strange) laws of nature from pure thought. Paul Dirac introduced in 1926 the distinction between elementary particles with antisymmetric wave-functions $\psi (x_1,…,x_N)$, as functions of $N$ three-dimensional space variables $x_1,…,x_N$ (later named fermions after Enrico Fermi), and particles with symmetric wave-functions (later named bosons after Satyendra Nath Bose). Dirac conjectured that Nature is so constructed that only wave-functions which are either anti-symmetric or symmetric can occur, but could give no reason other than mathematical beauty. Dirac was encouraged by the property of an antisymmetric wave-function to change sign under permutation of two particles, which forbids two particles (assuming the same spin) to be at the same spot, which he happily recognized as Pauli's exclusion principle. 
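The property just stated — that a sign change under permutation forbids two same-spin particles from occupying the same spot — can be seen in a minimal two-particle sketch; the orbitals below are illustrative choices of mine, not anything specific from the post:

```python
import numpy as np

# Two illustrative one-particle orbitals (hypothetical choices).
def phi_a(x):
    return np.exp(-x**2)

def phi_b(x):
    return x * np.exp(-x**2)

def psi(x1, x2):
    """Antisymmetrized two-particle wave-function (a 2x2 Slater
    determinant, normalization omitted). It changes sign under
    exchange of the particles, so psi(x, x) = 0 identically:
    two same-spin particles cannot sit at the same spot."""
    return phi_a(x1) * phi_b(x2) - phi_b(x1) * phi_a(x2)
```

Evaluating `psi(x, x)` gives exactly zero for any `x`, and `psi(x1, x2) == -psi(x2, x1)`, which is the content of Dirac's observation.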
Since then it has become an incontrovertible fact, impossible to question, that Nature only accepts either anti-symmetric or symmetric wave-functions, but no underlying reason has ever been presented, other than mathematical beauty (for people who rightly can admire such a thing). But if it has no physical reason, Dirac's conjecture may be wrong. The first evidence to this effect is that the wave-function for Helium appears to be neither symmetric nor anti-symmetric, as representing a configuration with the two electrons separated into two opposite half-spheres. If Dirac's conjecture is wrong for $N=2$, it may well be wrong also for $N>2$, and then standard quantum mechanics collapses… Basic Atmospheric Thermodynamics as 2nd Law The debate on the temperature distribution in the atmosphere is going around in never-ending circulation, just like the air in the atmosphere. Let us here recall the basic statements of my chapter Climate Thermodynamics in a famous book, which is condensed as the 2nd law of thermodynamics expressed in the following form, with the dot signifying time differentiation: • $\dot E = -W + D$, There are two basic temperature distributions with linear decrease with height as lapse rate (assuming zero heat conductivity): • Isothermal atmosphere with zero lapse rate: $D$ maximal with $W=D$. • Maximal (dry adiabatic) lapse rate $=9.8\, C/km$ with $D=0$ minimal. The observed lapse rate (of about 6.5 C/km) is somewhere between maximal and minimal. We note: 1. Lapse rate may increase by slow laminar vertical circulation, with ascending air cooling and descending air warming with $D=0$. 2. Lapse rate may decrease by turbulent dissipation $D>0$ heating upper layers. 3. A (partially) transparent atmosphere (like on Earth) heated from below may naturally develop a positive lapse rate by 1. 4. An opaque atmosphere (like on Venus) heated from above may become isothermal by heat conduction and may then develop a positive lapse rate by 1. 
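The dry adiabatic value of $9.8\, C/km$ quoted above follows from the standard textbook relation $\Gamma = g/c_p$; a quick check with standard constants (my illustration, not a computation from the post):

```python
g = 9.81        # gravitational acceleration, m/s^2
cp = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)

# Dry adiabatic lapse rate Gamma = g/cp, converted to K per km
# (a temperature difference of 1 K equals 1 C):
gamma = g / cp * 1000.0   # about 9.8 K/km
```

The observed average value of about 6.5 C/km lies between this maximal dry adiabatic rate and the isothermal rate of zero, as the post states.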
The lapse rate is basic to planetary climate since it determines the surface temperature from the temperature at the top of the troposphere, and its dependence on the radiative properties of the atmosphere is a key question in global climate science. Compare with the previous post Lapse Rate by Gravitation: Loschmidt or Boltzmann/Maxwell? fredag 9 maj 2014 Why Insist on Quantum Mechanics Based on Magic and Contradiction? The ground state of Helium is postulated to be $1s^2$, with two overlaying electrons with opposite spin and identical spherically symmetric spatial wave-functions in the first shell, which however is not the ground state because its energy is too large. This is the starting point for the Schrödinger equation for many-electron atoms. Here is a further motivation why it may be of interest to consider wave-functions for an atom with $N$ electrons as a sum of $N$ functions $\psi_1(x)$,…,$\psi_N(x)$, all depending on a common three-dimensional space coordinate $x$ (plus time), as suggested in a previous post: • $\psi (x)=\psi_1(x)+\psi_2(x)+…+\psi_N(x)$. We recall that Schrödinger's equation for the Hydrogen atom, as the basis of quantum mechanics, takes the form: • $ih\frac{\partial\psi}{\partial t}=-\frac{h^2}{2m}\Delta\psi +V\psi$ for all $x$ and $t>0$, with kernel potential $V(x)=-\frac{1}{\vert x\vert}$, $x$ a three-dimensional space coordinate, $t>0$ time, $h$ Planck's constant, $m$ the mass of an electron, and the corresponding one-electron wave-function $\psi (x,t)$ as solution. This equation is magically pulled out of a hat from the relation expressing conservation of energy $E$ of a body of mass $m$ with position $x(t)$ moving in a potential $V(x)$ with momentum $p=m\frac{dx}{dt}$, by the following formal substitutions: • $E\rightarrow ih\frac{\partial}{\partial t}$, • $p\rightarrow\frac{h}{i}\nabla$, followed by formal multiplication by $\psi$. 
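As a sanity check on this setup, one can verify symbolically that the classical hydrogen ground state solves the stationary equation; a sketch in atomic units ($h=m=1$ in the post's notation, $\psi$ unnormalized):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
psi = sp.exp(-r)   # hydrogen ground state (unnormalized), atomic units

# Radial part of the Laplacian for a spherically symmetric function:
lap_psi = sp.diff(psi, r, 2) + 2/r * sp.diff(psi, r)

# Stationary Schrödinger equation: -1/2 Laplace(psi) - psi/|x| = E psi,
# so the energy is the ratio H psi / psi:
H_psi = -lap_psi / 2 - psi / r
E = sp.simplify(H_psi / psi)   # -1/2, the ground state energy in Hartree
```

The ratio comes out as the constant $-\frac{1}{2}$, confirming that $e^{-r}$ is an eigenfunction with the well-known ground state energy.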
Energy conservation for the Hydrogen atom then takes the form: • $E=K(t)+P(t)$ for all $t>0$, where • $K(t) =\frac{h^2}{2m}\int\vert\nabla\psi (x,t)\vert^2\, dx$ is the kinetic energy,  • $P(t)=-\int \frac{\vert\psi (x, t)\vert^2}{\vert x\vert}\, dx$ is the potential energy of the electron, under the normalization • $\int\vert\psi (x,t)\vert^2\, dx=1$. So far so good: The different energy levels $E$ of time-periodic solutions to Schrödinger's equation give the observed spectrum of the Hydrogen atom, with the corresponding wave-functions describing the distribution of the electron around the kernel. We see that the Laplace term gives rise to the kinetic energy as an effect of gradient regularization. But consider now the accepted standard text-book generalization of Schrödinger's equation to an atom with $N$ electrons: • $ih\frac{\partial\psi}{\partial t}=\sum_{j=1}^N(-\frac{h^2}{2m}\Delta_j -\frac{N}{\vert x_j\vert})\psi + \sum_{k < j}\frac{1}{\vert x_j-x_k\vert}\psi$, where $\psi (x_1,…,x_N,t)$ depends on $N$ three-dimensional space coordinates $x_1,…, x_N$ and time $t$, and $\Delta_j$ is the Laplace operator with respect to the coordinate $x_j$, under the normalization • $\int\vert\psi\vert^2\, dx_1\cdots dx_N=1$. We see the appearance of the one-electron operators with corresponding one-electron kinetic energies: • $K_j(t) =\frac{h^2}{2m}\int\vert\nabla_j\psi\vert^2\, dx_1\cdots dx_N$, and electron-electron repulsion expressed by the coupling potential • $\sum_{k < j}\frac{1}{\vert x_j-x_k\vert}$. We see that in this model each electron $j$ is equipped with its own three-dimensional space with coordinate $x_j$ and its own kinetic energy $K_j$, with interaction between the electrons only through the coupling potential. 
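The Helium energies cited below ($-2.75$ for the product state versus the observed $-2.903$) can be reproduced from the standard textbook variational integrals for the product trial state $e^{-Zr_1}e^{-Zr_2}$; a sketch of that classical computation (not code from the post):

```python
def helium_product_energy(Z):
    """Variational energy (in Hartree) of Helium for the trial state
    exp(-Z r1) exp(-Z r2): kinetic energy Z^2 (both electrons),
    nuclear attraction -4Z, electron-electron repulsion 5Z/8 --
    the standard textbook integrals."""
    return Z**2 - 4*Z + 5*Z/8

E_1s2 = helium_product_energy(2.0)       # unscreened 1s^2 state: -2.75
Z_opt = 27 / 16                          # minimizer of E(Z) = Z^2 - 27Z/8
E_best = helium_product_energy(Z_opt)    # about -2.848, still above -2.903
```

Even the optimally screened product state stays above the observed $-2.903$, which is the gap the post attributes to electron separation.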
The electron individuality and high dimensionality of the wave-function $\psi (x_1,…,x_N)$ is reduced by restriction to wave-functions as products $\psi_1(x_1)\cdots\psi_N(x_N)$ built from three-dimensional wave-functions $\psi_1,…,\psi_N$, combined with symmetry or antisymmetry under permutations of the coordinates $x_1,…,x_N$, which eliminates all individuality of the electrons. Extreme electron individuality is thus countered by permutations removing all individuality, but the individual one-electron kinetic energies $K_j$ are kept, as if each electron keeps its individuality. This is strange. To see the result, recall that the ground state of minimal energy of Helium with two electrons is supposed to be given by a symmetric wave-function $\psi (x_1,x_2)$ • $\psi (x_1,x_2)=\phi (x_1)\phi (x_2)$, where $\phi (x_1)\sim \exp(-2\vert x_1\vert )$ is spherically symmetric, the same for both electrons. The two electrons of the ground state of Helium are thus supposed to have identical spherically symmetric distributions, denoted $1s^2$; see the periodic table above. The trouble now is that this configuration has energy (in Hartree units) $-2.75$, while the observed energy is $-2.903$. The true ground state is thus different from $1s^2$, and to handle this situation, while insisting that the ground state is still $1s^2$ as in the table above, a so-called corrective perturbation is made, introducing a dependence of $\psi (x_1,x_2)$ on $\vert x_1-x_2\vert$ in a Rayleigh-Ritz minimization procedure. This way a better correspondence with observation is reached, because separation of the electrons is now possible: If one electron is on one side of the kernel, then the other electron is on the other side. But the standard message is contradictory: • The ground state configuration for Helium is $1s^2$, which however is not the ground state because its energy is too large ($-2.75$ instead of $-2.903$). 
• Smaller energy can be obtained by a perturbation computation, but the corresponding electron configuration is hidden to readers of physics books, because the ground state is still postulated to be $1s^2$.

If we minimize energy over wave functions of product form

• $\psi (x_1,x_2)=\psi_1(x_1)\psi_2(x_2)$,

without asking for symmetry, we find that the minimum is achieved with spherically symmetric $\psi_1=\psi_2$, with too large energy as just noted. However, if we instead compute the kinetic energy based on the sum with common space coordinate $x$

• $\psi_1(x) +\psi_2(x)$,

as suggested in the previous post, then separation of the electrons is advantageous, allowing discontinuous electron distributions (joining smoothly) without cost of kinetic energy, and better correspondence with observation is achieved.

• The standard attribution of individual kinetic energy appears to make the individual electron distributions "too stiff" and thus favors overlaying electrons rather than separated electrons, requiring Pauli's exclusion principle to prevent overlaying of more than two electrons.
• If kinetic energy is instead computed from the sum of individual electron distributions, electron "stiffness" is reduced and separation is favored.
• Since the standard individual one-electron attribution of kinetic energy is ad hoc, there is little reason to insist that kinetic energy must be computed this way, in particular when it leads to an incorrect ground state already for Helium.
• Attributing kinetic energy to a sum of electron wave functions allows discontinuous electron distributions joining smoothly without cost of kinetic energy. Electron individuality is here kept as individual distribution in space, while kinetic energy is collectively computed from the assembly. This would be the way to handle individuality in a collective macroscopic setting, and there is no reason why it would not be operational also for microscopics.
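The claim that computing kinetic energy from the sum $\psi_1+\psi_2$ removes the cost of individually discontinuous distributions can be illustrated in one dimension. The toy construction below is my own (not from the post): $\psi_1$ and $\psi_2$ are the left and right halves of the smooth profile $\cos x$ on $[-\pi /2,\pi /2]$, so each has a jump at $x=0$ while their sum is smooth:

```python
import numpy as np

# Toy 1D illustration (my construction, not the author's computation):
# psi1 and psi2 are individually discontinuous at x = 0, but their sum
# is the smooth profile cos(x) on [-pi/2, pi/2].
h = 1e-4
x = np.arange(-np.pi / 2, np.pi / 2, h)
phi = np.cos(x)
psi1 = np.where(x < 0, phi, 0.0)    # left half, jumps from ~1 to 0 at x = 0
psi2 = np.where(x >= 0, phi, 0.0)   # right half

def kinetic(f):
    # discrete 1/2 * integral of |f'|^2 by finite differences
    return 0.5 * np.sum(np.diff(f)**2 / h)

K_sum = kinetic(psi1 + psi2)              # finite: about pi/4 ≈ 0.785
K_indiv = kinetic(psi1) + kinetic(psi2)   # the jumps cost about 1/h ≈ 1e4
print(K_sum, K_indiv)
```

As the mesh size $h$ decreases, K_indiv grows like $1/h$ while K_sum stays bounded, which is the sense in which separated, individually discontinuous distributions cost no kinetic energy when the energy is attributed to the sum.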
• Since the stated ground state $1s^2$ for Helium is incorrect, there is no reason to believe that any of the other ground states listed in the standard periodic table is correct.
• If so, then the claim that the standard Schrödinger equation explains the periodic table has little support.

PS1 The standard argument is that the standard multi-d Schrödinger equation must be correct, since there is no case known for which the multi-d wave-function solution does not agree exactly with what is observed! But this is not a correct argument, because (i) the multi-d Schrödinger equation cannot be solved, and (ii) even if the wave function could be determined, its physical meaning is unclear, and so comparison with reality is impossible. The standard argument thus turns (i) and (ii) from scientific disaster into monumental success by claiming that, since the wave function is impossible to determine, there is no way to prove that it is not correct. Realizing that arguing this way does not follow basic scientific principle may open the search for different forms of Schrödinger's equation, as non-linear systems of equations in three space dimensions instead of linear multi-d scalar equations, which are computable and have physical meaning, as suggested.

PS2 The standard way to handle the fact that the standard linear multi-d Schrödinger equation is uncomputable is to use Density Functional Theory (DFT), awarded the 1998 Nobel Prize in Chemistry, as a non-linear 3d scalar system in the electron density. DFT results from averaging in the standard linear multi-d Schrödinger equation, producing exchange-correlation potentials which are impossible to determine. If the standard multi-d linear Schrödinger equation is questionable, then so is DFT.

Thursday, 8 May 2014

Quantum Statistics as Salvation from Catastrophe?
Planck awarding the Planck Medal to Einstein in 1929, for his elaboration of Planck's idea of discrete quanta of energy into quanta of light, an idea which Planck viewed as a "hypothetical attempt" resulting from an "act of desperation".

To understand a theory of physics it is helpful to seek the reason the theory was developed. In The Conceptual Development of Quantum Mechanics by Max Jammer we read:

• Quantum theory had its origin in the inability of classical physics to account for the experimentally observed distribution in the continuous spectrum of black-body radiation.
• It is convenient to define the first phase in the development of quantum theory as the period in which all quantum conceptions and principles proposed referred exclusively to black-body radiation or harmonic vibrations.
• …the study of the single physical phenomenon of black-body radiation led to the conceptions of quanta and to quantum statistics of the harmonic oscillator, and thus to results which defied the principles of classical mechanics and, in particular, the equipartition theorem.
• It was generally agreed that classical physics was incapable of accounting for atomic and molecular processes.
• Planck obviously regarded the use of the law of chance… merely as a provisional device… in his own opinion his new theory was but a "hypothetical attempt" to reconcile the law of radiation with the foundations of Maxwell's doctrine, and not a final solution to the problem.

Quantum mechanics thus developed from Planck's hypothetical attempt to save the classical (Rayleigh-Jeans) radiation law, with radiance of frequency $\nu$ scaling like $T\nu^2$ with $T$ temperature, from an ultraviolet catastrophe with the radiance apparently tending to infinity without any bound on the frequency $\nu$. To save the world from this catastrophe, Planck, against his basic convictions as a scientist but seeing no other way out, gave up causality as the essence of science by corrupting his deterministic harmonic oscillators with statistics.
And on this shaky ground quantum mechanics was formed. No wonder that quantum mechanics in its present form is a catastrophe (with uncomputable wave functions without physical meaning), although depicted as an imposing intellectual structure of great beauty. But can statistics really save us from catastrophe? Catastrophe may be the result of an unfortunate throw of dice by fate, but you don't avoid a catastrophe by letting a throw of dice decide how to steer your car.

Computational Blackbody Radiation describes a different way of avoiding the ultraviolet catastrophe, with statistics replaced by a constructive version of classical mechanics based on finite precision computation. From this starting point a quantum mechanics without statistics may be possible to formulate. If so, the present catastrophe of quantum mechanics can (perhaps) be avoided.

Wednesday, 7 May 2014

Is Blackbody Radiation Universal?

Sunday, 4 May 2014

A Three-dimensional Multi-Electron Wave Function

Consider a multi-electron wave function $\psi (x)=\sum_j\psi_j(x)$ with associated energy as the sum of kinetic energy, attractive kernel potential energy and repulsive interelectron energy:

• $E(\psi )= \frac{1}{2}\int\vert\nabla\psi\vert^2dx - \int\frac{N\psi^2}{\vert x\vert}dx+\sum_{j\neq k}\int\int\frac{\psi_j^2(x)\psi_k^2(y)}{2\vert x-y\vert}dxdy$,

under the normalization

• $\int\psi_j^2dx =1$ for $j=1,\dots,N$,

where $\psi_j(x)$ represents the distribution of electron $j$. The ground state is the state of minimal energy, determined as the solution of a non-linear system of equations in three space dimensions expressing minimality. We see that minimization favors atomistic wave functions $\psi (x)=\sum_j\psi_j(x)$ built from electronic wave functions $\psi_j$ with disjoint supports, which makes the interelectronic repulsion energy small without cost of kinetic energy.
The ground state of Helium will thus have its two electrons separated into two half-spheres, with corresponding wave functions $\psi_1(x)$ and $\psi_2(x)$ meeting smoothly at a common separation surface. It is possible that this is the origin of the Zweideutigkeit or two-valuedness expressed in Pauli's exclusion principle, which Pauli did not like because it was ad hoc without rationale. The sequence of posts on Quantum Contradictions explores atomic ground states based on the above wave function, with surprisingly good correspondence with observations; see also Many-Minds Quantum Mechanics.

We compare with standard quantum mechanics with multi-dimensional wave functions $\psi (x_1,\dots,x_N)$ depending on $N$ three-dimensional space coordinates $x_1,\dots,x_N$, typically in the form of a Slater determinant as a linear combination of products of $N$ functions $\psi_1,\dots,\psi_N$, each function separately depending on three space coordinates, thus based on wave functions depending on altogether $3N$ space coordinates. Such multi-dimensional wave functions defy direct physical interpretation and are also impossible to compute for atoms with several electrons, and thus do not belong to science. Yet they are supposed to be fundamental to atomistic physics.

The standard view is that macroscopic and microscopic (atomistic) physics are fundamentally different, because microscopic physics demands a multi-dimensional wave function, while macroscopic physics is described by systems of three-dimensional functions. If also microscopic physics can be described by systems of three-dimensional functions, as indicated above, then there is no fundamental difference between macroscopic and microscopic physics, and a major obstacle to progress can be eliminated. Computations based on wave functions of the above form are under way and will be presented when available. For simple hand calculations see here and here.
PS1 For Helium, with two electrons at distance $\frac{1}{2}$ from the kernel and mutual distance $1$ as an approximate ground state configuration in the above model, we get $E = -3$, to be compared with the observed $-2.903$. For Lithium, with two electrons at distance $\frac{1}{3}$ from the kernel and mutual distance $\frac{2}{3}$, together with a third electron at distance $1$ from an effective kernel of charge $+1$, we get $E = -8$, to be compared with the observed $-7.5$. For three electrons at distance $\frac{1}{3}$ from the kernel and mutual distance $\frac{1}{2}$, we instead get $E = -7.5$, indicating that the configuration with two electrons in an inner shell and one in an outer shell has smaller energy and thus is the actual ground state configuration for Lithium, obtained without reference to Pauli's exclusion principle.

PS2 Recall that standard quantum mechanics is formulated in terms of a multi-dimensional wave function $\psi (x_1,x_2,\dots,x_N)$ depending on $N$ three-dimensional space coordinates $x_1,\dots,x_N$, altogether $3N$ space coordinates, which is devastating because both physical interpretation and computational determination are impossible. To reduce the dimensionality, typically an Ansatz is made as Slater determinants of three-dimensional wave functions $\psi_i$, as linear combinations of products of the form (subject to permutations of the coordinates):

• $\psi (x_1,\dots,x_N)=\psi_1(x_1)\psi_2(x_2)\cdots\psi_N(x_N)$,

leading to a set of one-electron wave equations coupled by complex exchange-correlation terms which are very difficult to determine. The above Ansatz with a sum instead of products of three-dimensional wave functions may offer more computationally manageable and thus more useful models.
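The hand calculations in PS1 can be reproduced with a simple shell model, on my reading of the numbers (an assumption, not spelled out in the post): each electron in a shell of radius $1/Z$ around an (effective) kernel charge $Z$ contributes the hydrogenic energy $-Z^2/2$ in Hartree units, and each electron pair adds the repulsion $1/d$ at the stated mutual distance $d$:

```python
# Shell-model arithmetic behind PS1, on my reading (an assumption):
# each electron in a shell of radius 1/Z around (effective) charge Z
# contributes the hydrogenic energy -Z^2/2 (Hartree units), and each
# electron pair adds the repulsion 1/d at the stated mutual distance d.

def shell_energy(n_electrons, Z, pair_distances):
    E = n_electrons * (-0.5 * Z**2)
    E += sum(1.0 / d for d in pair_distances)
    return E

# Helium: two electrons at distance 1/2 from the kernel, mutual distance 1.
E_He = shell_energy(2, 2, [1.0])
print(E_He)                # -3.0 (observed: -2.903)

# Lithium: inner shell as for Z = 3 with mutual distance 2/3, plus one
# outer electron at distance 1 from an effective kernel of charge +1.
E_Li = shell_energy(2, 3, [2 / 3]) + shell_energy(1, 1, [])
print(E_Li)                # ≈ -8.0 (observed: -7.5)

# Lithium with all three electrons in one shell (mutual distance 1/2):
E_Li_one = shell_energy(3, 3, [0.5, 0.5, 0.5])
print(E_Li_one)            # -7.5 > -8.0, so the two-shell state wins
```

The two-shell Lithium configuration comes out below the one-shell configuration, reproducing the shell structure claimed in PS1 without invoking Pauli's exclusion principle.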
PS3 For Beryllium with 4 electrons, we get $E=-14$ from 2 electrons at distance $\frac{1}{4}$ from the kernel with mutual distance $\frac{1}{2}$, together with $E = -\frac{2}{3}$ from 2 electrons of width $\frac{1}{2}$ at distance $\frac{1}{4} + \frac{1}{2}$ from an effective charge of $+2$, which gives altogether $E = -14.667$, which is exactly what is observed!!

PS4 For $N$ electrons distributed over one shell at distance $\frac{1}{N}$ from the kernel, assuming the average distance between any pair of electrons is $\frac{1}{N}$, we get $E = -\frac{N^2}{2}$, which is much larger than the observed $E \approx - N^2$ and thus does not give the ground state configuration. A multi-shell distribution in the model gives better agreement with observations, and so the model may capture the real shell structure (without resort to any Pauli exclusion principle).

PS5 Note that the above model allows discontinuous electron distributions (joining smoothly) without cost of kinetic energy, which favors electron separation. We compare with Hartree models as systems of one-electron models with continuous electron distributions, for which separation requires a kinetic energy cost, and a resort to Pauli's exclusion principle is necessary to prevent more than two electron distributions from overlaying.

PS6 To find the ground state, we can use time-stepping of the parabolic system

• $\frac{\partial\psi_j(x,t)}{\partial t} = \Delta\psi (x,t) + \frac{N\psi (x,t)}{\vert x\vert}-\sum_{k\neq j}\int\frac{\psi_k^2(y,t)}{2\vert x-y\vert}dy\,\psi_j(x,t)$ for $t > 0$, $j=1,\dots,N$,

with successive normalization to $\int\psi_j^2(x,t)\, dx=1$ after each time step and $\psi =\sum_{k=1}^N\psi_k$. Further,

• $V_k\equiv\int\frac{\psi_k^2(y,t)}{2\vert x-y\vert}dy$

can be computed by solving $-\Delta V_k = 2\pi\psi_k^2$.
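The PS6 procedure, time-stepping with renormalization after each step, can be illustrated on a one-dimensional analog. The sketch below is my own toy (not the 3d electron system): it relaxes a single wave function under a harmonic potential $V=x^2/2$, whose ground-state energy $1/2$ is known exactly, using the same gradient-flow-plus-normalization loop:

```python
import numpy as np

# 1D analog of PS6 (my toy, not the 3D electron system): explicit
# time-stepping of d(psi)/dt = (1/2) psi'' - V psi with renormalization
# after each step relaxes psi to the ground state of V(x) = x^2/2,
# whose exact ground-state energy is 1/2.
h = 0.05
dt = 0.5 * h**2                       # small enough for explicit stability
x = np.arange(-8.0, 8.0, h)
V = 0.5 * x**2
psi = np.exp(-np.abs(x))              # crude initial guess
psi /= np.sqrt(np.sum(psi**2) * h)

for _ in range(20000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / h**2
    psi = psi + dt * (0.5 * lap - V * psi)    # gradient-flow step
    psi /= np.sqrt(np.sum(psi**2) * h)        # renormalize, as in PS6

dpsi = np.gradient(psi, h)
E = np.sum(0.5 * dpsi**2 + V * psi**2) * h
print(E)                              # ≈ 0.5, the exact ground-state energy
```

The renormalization after each step is what turns plain parabolic decay into convergence toward the minimizer on the unit sphere, which is the role it plays in PS6.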
The Info List - Determinism

Determinism is the philosophical position that for every event there exist conditions that could cause no other event. "There are many determinisms, depending on what pre-conditions are considered to be determinative of an event or action." Deterministic theories throughout the history of philosophy have sprung from diverse and sometimes overlapping motives and considerations. Some forms of determinism can be empirically tested with ideas from physics and the philosophy of physics. The opposite of determinism is some kind of indeterminism (otherwise called nondeterminism). Determinism is often contrasted with free will; determinism rarely requires that perfect prediction be practically possible.

Contents

1 Varieties
2 Philosophical connections
2.1 With nature/nurture controversy
2.2 With particular factors
2.3 With free will
2.4 With the soul
2.5 With ethics and morality
3 History
3.1 Eastern tradition
3.2 Western tradition
4 Modern scientific perspective
4.1 Generative processes
4.2 Compatibility with the existence of science
4.3 Mathematical models
4.4 Quantum mechanics and classical physics
4.4.1 Day-to-day physics
4.4.2 Quantum realm
4.4.3 Other matters of quantum determinism
5 See also
5.1 Types of determinism
6 References
6.1 Notes
6.2 Bibliography
7 Further reading
8 External links

Varieties

"Determinism" may commonly refer to any of the following viewpoints. Many philosophical theories of determinism frame themselves with the idea that reality follows a sort of predetermined path.

* Causal determinism is "the idea that every event is necessitated by antecedent events and conditions together with the laws of nature".
However, causal determinism is a broad enough term to consider that "one's deliberations, choices, and actions will often be necessary links in the causal chain that brings something about. In other words, even though our deliberations, choices, and actions are themselves determined like everything else, it is still the case, according to causal determinism, that the occurrence or existence of yet other things depends upon our deliberating, choosing and acting in a certain way". Causal determinism proposes that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. The relation between events may not be specified, nor the origin of that universe. Causal determinists believe that there is nothing in the universe that is uncaused or self-caused. Historical determinism (a sort of path dependence) can also be synonymous with causal determinism. Causal determinism has also been considered more generally as the idea that everything that happens or exists is caused by antecedent conditions. In the case of nomological determinism, these conditions are considered events also, implying that the future is determined completely by preceding events, a combination of prior states of the universe and the laws of nature. Yet they can also be considered of metaphysical origin (such as in the case of theological determinism). Quantum mechanics and various interpretations thereof pose a serious challenge to this view. Nomological determinism is sometimes illustrated by the thought experiment of Laplace's demon. Nomological determinism is sometimes called 'scientific' determinism, although that is a misnomer. Physical determinism is generally used synonymously with nomological determinism (its opposite being physical indeterminism).

* Necessitarianism is closely related to the causal determinism described above. It is a metaphysical principle that denies all mere possibility; there is exactly one way for the world to be.
Leucippus claimed there were no uncaused events, and that everything occurs for a reason and by necessity.

* Predeterminism is the idea that all events are determined in advance. The concept of predeterminism is often argued by invoking causal determinism, implying that there is an unbroken chain of prior occurrences stretching back to the origin of the universe. In the case of predeterminism, this chain of events has been pre-established, and human actions cannot interfere with the outcomes of this pre-established chain. Predeterminism can be used to mean such pre-established causal determinism, in which case it is categorised as a specific type of determinism. It can also be used interchangeably with causal determinism, in the context of its capacity to determine future events. Despite this, predeterminism is often considered as independent of causal determinism. The term predeterminism is also frequently used in the context of biology and heredity, in which case it represents a form of biological determinism.

* Fatalism is normally distinguished from determinism, as a form of teleological determinism. Fatalism is the idea that everything is fated to happen, so that humans have no control over their future. Fate has arbitrary power, and need not follow any causal or otherwise deterministic laws. Types of fatalism include hard theological determinism and the idea of predestination, where there is a God who determines all that humans will do. This may be accomplished either by knowing their actions in advance, via some form of omniscience, or by decreeing their actions in advance.

* Theological determinism is a form of determinism which states that all events that happen are pre-ordained, or predestined to happen, by a monotheistic deity, or that they are destined to occur given its omniscience. Two forms of theological determinism exist, here referenced as strong and weak theological determinism.
The first one, strong theological determinism, is based on the concept of a creator deity dictating all events in history: "everything that happens has been predestined to happen by an omniscient, omnipotent divinity". The second form, weak theological determinism, is based on the concept of divine foreknowledge: "because God's omniscience is perfect, what God knows about the future will inevitably happen, which means, consequently, that the future is already fixed". There exist slight variations on the above categorisation. Some claim that theological determinism requires predestination of all events and outcomes by the divinity (i.e. they do not classify the weaker version as 'theological determinism' unless libertarian free will is assumed to be denied as a consequence), or that the weaker version does not constitute 'theological determinism' at all. With respect to free will, "theological determinism is the thesis that God exists and has infallible knowledge of all true propositions including propositions about our future actions", a more minimal criterion designed to encapsulate all forms of theological determinism. Theological determinism can also be seen as a form of causal determinism, in which the antecedent conditions are the nature and will of God.

* Often synonymous with logical determinism are the ideas behind spatio-temporal determinism or eternalism: the view of special relativity, associated with writers such as J. J. C. Smart and Albert Einstein.

* Adequate determinism is the idea that quantum indeterminacy can be ignored for most macroscopic events. This is because of quantum decoherence. Random quantum events "average out" in the limit of large numbers of particles (where the laws of quantum mechanics asymptotically approach the laws of classical mechanics). Stephen Hawking explains a similar idea: he says that the microscopic world of quantum mechanics is one of determined probabilities.
That is, quantum effects rarely alter the predictions of classical mechanics, which are quite accurate (albeit still not perfectly certain) at larger scales. Something as large as an animal cell, then, would be "adequately determined" (even in light of quantum indeterminacy).

* The many-worlds interpretation accepts the linear causal sets of sequential events with adequate consistency, yet also suggests constant forking of causal chains creating "multiple universes" to account for multiple outcomes from single events. This means that the causal set of events leading to the present are all valid, yet appear as a singular linear time stream within a much broader unseen conic probability field of other outcomes that "split off" from the locally observed timeline. Under this model causal sets are still "consistent", yet not exclusive to singular iterated outcomes. The interpretation sidesteps the exclusive retrospective causal chain problem of "could not have done otherwise" by suggesting that "the other outcome does exist" in a set of parallel universe time streams that split off when the action occurred. This theory is sometimes described with the example of agent-based choices, but more involved models argue that recursive causal splitting occurs with all particle wave functions at play. This model is highly contested with multiple objections from the scientific community.

Environmental determinism, also known as climatic or geographical determinism, proposes that the physical environment, rather than social conditions, determines culture. Supporters of environmental determinism often also support behavioral determinism. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated.
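The "averaging out" of random quantum events invoked by adequate determinism above is, mathematically, the law of large numbers: the relative fluctuation of a sum of $n$ independent random events shrinks like $1/\sqrt{n}$. A toy simulation (my illustration, not a physical model):

```python
import numpy as np

# Toy law-of-large-numbers check of the "averaging out" behind adequate
# determinism: the relative fluctuation of n random +/-1 events shrinks
# like 1/sqrt(n).  An illustration only, not a physical simulation.
rng = np.random.default_rng(0)

def relative_fluctuation(n, trials=200):
    # average of |sum of n random +/-1 events| / n over many trials
    steps = rng.choice(np.array([-1, 1], np.int8), size=(trials, n))
    sums = steps.sum(axis=1, dtype=np.int64)
    return np.mean(np.abs(sums)) / n

for n in (100, 10_000, 100_000):
    print(n, relative_fluctuation(n))   # shrinks roughly like 1/sqrt(n)
```

At the particle counts of macroscopic bodies ($n\sim 10^{23}$), the relative fluctuation is immeasurably small, which is the sense in which a cell can be "adequately determined".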
Other commonly discussed varieties include economic determinism, associated with Karl Marx, and technological determinism.

With free will

Main article: Free will

[Table showing the different positions related to free will and determinism]

Some research (funded by the John Templeton Foundation) suggested that reducing a person's belief in free will is dangerous, making them less helpful and more aggressive. This could occur because the individual's sense of self-efficacy suffers.

With the soul

A number of positions can be delineated:

* Immaterial souls are all that exist (idealism).
* Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free will, interactionist dualism).
* Immaterial souls exist, but are part of a deterministic framework.
* Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism).
* Immaterial souls do not exist: there is no mind-body dichotomy, and there is a materialistic explanation for intuitions to the contrary.

With ethics and morality

Another topic of debate is the implication that determinism has for morality. Philosopher and incompatibilist Peter van Inwagen introduces this thesis as such:

Argument that free will is required for moral judgments:

* The moral judgment that you shouldn't have done X implies that you should have done something else instead
* That you should have done something else instead implies that there was something else for you to do
* That there was something else for you to do implies that you could have done something else
* That you could have done something else implies that you have free will
* If you don't have free will to have done other than X we cannot make the moral judgment that you shouldn't have done X.
* The moral judgment that you should not have done X implies that you can do something else instead
* That you can do something else instead implies that there is something else for you to do
* That there is something else for you to do implies that you can do something else
* That you can do something else implies that you have free will for planning future recourse
* If you have free will to do other than X we can make the moral judgment that you should do other than X, and punishing you as a responsible party for having done X that you know you should not have done can help you remember to not do X in the future.

History

Philosophers who have dealt with this issue include Marcus Aurelius, Omar Khayyám, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d'Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche, Albert Einstein, Niels Bohr, Ralph Waldo Emerson and, more recently, John Searle, Sam Harris, Ted Honderich, Daniel Dennett and B. F. Skinner.

Eastern tradition

Deterministic themes appear in the Eastern tradition, for example in the I Ching and in Hinduism.

Western tradition

In the West, early deterministic ideas are associated with atomists such as Leucippus. The first full-fledged notion of determinism appears to originate with the Stoics, as part of their theory of universal causal determinism. The resulting philosophical debates, which involved the confluence of elements of Aristotelian ethics with Stoic psychology, led in the works of Alexander of Aphrodisias to the first recorded Western debate over determinism and freedom, an issue that is known in theology as the paradox of free will. The writings of Epictetus as well as Middle Platonist and early Christian thought were instrumental in this development. The Jewish philosopher Moses Maimonides said of the deterministic implications of an omniscient god: "Does God know or does He not know that a certain individual will be good or bad?
If thou sayest 'He knows', then it necessarily follows that man is compelled to act as God knew beforehand he would act, otherwise God's knowledge would be imperfect."

Modern scientific perspective

Generative processes

Main article: Emergence

Although it was once thought by scientists that any indeterminism in quantum mechanics occurred at too small a scale to influence biological or neurological systems, there is indication that nervous systems are influenced by quantum indeterminism due to chaos theory. It is unclear what implications this has for the problem of free will, given various possible reactions to the problem in the first place. Many biologists do not grant determinism: Christof Koch argues against it, and in favour of libertarian free will, by making arguments based on generative processes (emergence). Other proponents of emergentist or generative philosophy, cognitive sciences and evolutionary psychology argue that a certain form of determinism (not necessarily causal) is true. They suggest instead that an illusion of free will is experienced due to the generation of infinite behaviour from the interaction of a finite-deterministic set of rules and parameters. Thus the unpredictability of the emerging behaviour from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist. Certain experiments looking at the neuroscience of free will can be said to support this possibility. In John Horton Conway's playable Game of Life, for example, the interaction of just four simple rules creates patterns that seem somehow "alive". Nassim Taleb is wary of such models, and coined the term "ludic fallacy".

Mathematical models

Day-to-day physics

Further information: Macroscopic quantum phenomena, Chaos theory

Quantum realm

Quantum physics works differently in many ways from Newtonian physics. Physicist Aaron D. O'Connell explains that understanding our universe, at such small scales as atoms, requires a different logic than day-to-day life does.
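The Game of Life mentioned above is a convenient concrete example of a fully deterministic rule system producing "alive"-looking behaviour. A minimal sketch (live cells survive with 2 or 3 live neighbours, dead cells are born with exactly 3), applied to the standard period-2 "blinker":

```python
from collections import Counter

# Conway's Game of Life in minimal form: a live cell survives with 2 or 3
# live neighbours, a dead cell is born with exactly 3; all else is dead.
def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, -1), (0, 0), (0, 1)}   # vertical bar of three live cells
after_one = step(blinker)             # flips to a horizontal bar
after_two = step(after_one)           # and back: a period-2 oscillator
print(sorted(after_one))              # [(-1, 0), (0, 0), (1, 0)]
print(after_two == blinker)           # True
```

The evolution is perfectly deterministic, yet the long-run behaviour of large patterns is effectively unpredictable without simulating it, which is the point the text makes about deterministic generative processes.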
O'Connell does not deny that it is all interconnected: the scale of human existence ultimately does emerge from the quantum scale. O'Connell argues that we must simply use different models and constructs when dealing with the quantum world. In Stephen Hawking's phrase, quantum mechanics describes the microscopic world in terms of determined probabilities, constrained by the uncertainty principle. Some (including Albert Einstein) argued that our inability to predict any more than probabilities is simply due to ignorance; this was the subject of the famous Bohr–Einstein debates between Einstein and Niels Bohr, and there is still no consensus. Stephen Hawking calls libertarian free will "just an illusion"; see Free will for further discussion of this topic.

Other matters of quantum determinism

Chaotic radioactivity is the next explanatory challenge for physicists supporting determinism. The time-dependent Schrödinger equation gives the first time derivative of the quantum state. That is, it explicitly and uniquely predicts the development of the wave function with time:

$i\hbar\frac{\partial\psi (x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\psi (x,t)}{\partial x^2}+V(x)\psi (x,t)$

So if the wave function itself is reality (rather than probability of classical coordinates), then the unitary evolution of the wave function in quantum mechanics can be said to be deterministic. But the unitary evolution of the wave function is not the entirety of quantum mechanics.
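The deterministic, unitary character of this evolution can be checked numerically. The sketch below is my illustration (with $\hbar =m=1$, $V=0$, on a finite grid): it advances a Gaussian packet with the Crank-Nicolson scheme, whose propagator $(I+\frac{i\,dt}{2}H)^{-1}(I-\frac{i\,dt}{2}H)$ is unitary for Hermitian $H$, so the norm of $\psi$ is conserved:

```python
import numpy as np

# Unitary (norm-conserving) evolution of the free Schrodinger equation
# via Crank-Nicolson (hbar = m = 1, V = 0); an illustration on a grid.
h, dt, n = 0.1, 0.01, 400
x = (np.arange(n) - n / 2) * h
psi = np.exp(-x**2 + 1j * 5 * x)                 # moving Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)

# H = -(1/2) d^2/dx^2 as a real symmetric tridiagonal matrix
H = (np.diag(np.full(n, 1.0 / h**2))
     - np.diag(np.full(n - 1, 0.5 / h**2), 1)
     - np.diag(np.full(n - 1, 0.5 / h**2), -1))
A = np.eye(n) + 0.5j * dt * H
B = np.eye(n) - 0.5j * dt * H
U = np.linalg.solve(A, B)                        # one-step propagator, unitary

for _ in range(200):
    psi = U @ psi                                # deterministic time step

norm = np.sum(np.abs(psi)**2) * h
print(norm)                                      # still 1.0 up to rounding
```

Given the initial wave function, every later state is fixed exactly; the indeterminism of quantum mechanics enters only with measurement, which is the sense in which unitary evolution "is not the entirety of quantum mechanics".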
However, neither the posited "reality" nor the proven and extraordinary accuracy of the wave function …

See also

* Amor fati
* Block time
* Calvinism
* Causality
* Chaos theory
* Digital physics
* Emergence
* False necessity
* Fatalism
* Fractal
* Game theory
* Ilya Prigogine
* Interpretation of quantum mechanics
* Many-worlds interpretation
* Neuroscience of free will
* Notes from Underground
* Open theism
* Predestination
* Philosophical interpretation of classical physics
* Radical behaviorism
* Voluntarism
* Wheeler–Feynman absorber theory

Types of determinism

* Genetic determinism
* Biological determinism
* Psychological determinism
* Social determinism
* Cultural determinism
* Economic determinism
* Logical determinism
* Geographic determinism
* Historical determinism
* Technological determinism
* Environmental determinism
* Theological determinism
* Predeterminism

References

* ^ A list of a dozen varieties of determinism is provided in Bob Doyle (2011). Free Will: The Scandal in Philosophy. I-Phi Press. pp. 145–146 ff. ISBN 0983580200.
* ^ For example, see Richard Langdon Franklin (1968). Freewill and determinism: a study of rival conceptions of man. Routledge & K. Paul.
* ^ Hoefer, Carl (Apr 1, 2008). "Causal Determinism". In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2009 edition).
* ^ Eshleman, Andrew (Nov 18, 2009). "Moral Responsibility". In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2009 ed.).
* ^ "Arguments for Incompatibilism" (Stanford Encyclopedia of Philosophy).
* ^ Laplace posited that an omniscient observer knowing with infinite precision all the positions and velocities of every particle in the universe could predict the future entirely. For a discussion, see Robert C. Solomon; Kathleen M. Higgins (2009).
" Free will Free will of Scientific Explanation (2nd ed.). Hackett. pp. 285–292. ISBN 0915144719 . a theory is deterministic if, and only if, given its state variables for some initial period, the theory logically determines a unique set of values for those variables for any other period. * ^ Leucippus, Fragment 569 - from Fr. 2 Actius I, 25, 4 * ^ A B C McKewan, Jaclyn (2009). "Evolution, Chemical". In H. James Birx". Predeterminism. Encyclopedia of Time: Science, Philosophy, Theology, & Culture. SAGE Publications, Inc. pp. 1035–1036. ISBN 9781412941648 . doi :10.4135/9781412963961.n191 . * ^ "Predeterminism". Oxford Dictionaries. Oxford Dictionaries. April 2010. Retrieved 20 December 2012. . See also "Predeterminism". Collins English Dictionary. Collins. Retrieved 20 December 2012. * ^ "Some Varieties of Free Will Free Will and Determinism". Philosophy 302: Ethics. philosophy.lander.edu. Retrieved 19 December 2012. Predeterminism: the philosophical and theological view that combines God with determinism. On this doctrine events throughout eternity have been foreordained by some supernatural power in a causal sequence. * ^ See for example Hooft, G. (2001). "How does god play dice? (Pre-)determinism at the Planck scale". arXiv :hep-th/0104219  . Predeterminism is here defined by the assumption that the experimenter's 'free will' in deciding what to measure (such as his choice to measure the x- or the y-component of an electron's spin), is in fact limited by deterministic laws, hence not free at all , and Sukumar, CV (1996). "A new paradigm for science and architecture". City. Taylor & Francis. 1 (1–2): 181–183. doi :10.1080/13604819608900044 . Quantum Theory provided a beautiful description of the behaviour of isolated atoms and nuclei and small aggregates of elementary particles. Modern science recognized that predisposition rather than predeterminism is what is widely prevalent in nature. * ^ Borst, C. (1992). "Leibniz and the compatibilist account of free will". 
Studia leibnitiana: 49–58. JSTOR 40694201 . Leibniz presents a clear case of a philosopher who does not think that predeterminism requires universal causal determinism * ^ Far Western Philosophy of Education Society. Far Western Philosophy of Education Society. p. 12. Retrieved 20 December 2012. "Determinism" is, in essence, the position which holds that all behavior is caused by prior behavior. "Predeterminism" is the position which holds that all behavior is caused by conditions which predate behavior altogether (such impersonal boundaries as "the human conditions", instincts, the will of God, inherent knowledge, fate, and such). * ^ "Predeterminism". Merriam-Webster Dictionary. Merriam-Webster, Incorporated. Retrieved 20 December 2012. See for example Ormond, A.T. (1894). "Freedom and psycho-genesis". Psychological Review. Macmillan & Company. 1 (3): 217–229. doi :10.1037/h0065249 . The problem of predeterminism is one that involves the factors of heredity and environment, and the point to be debated here is the relation of the present self that chooses to these predetermining agencies , and Garris, M.D.; et al. (1992). "A Platform for Evolving Genetic Automata for Text Segmentation (GNATS)". Science of Artificial Neural Networks. Science of Artificial Neural Networks. Citeseer. 1710: 714–724. doi :10.1117/12.140132 . However, predeterminism is not completely avoided. If the codes within the genotype are not designed properly, then the organisms being evolved will be fundamentally handicapped. * ^ SEP, Causal Determinism * ^ Fischer, John Martin (1989) God, Foreknowledge and Freedom. Stanford, California: Stanford University Press. ISBN 1-55786-857-3 * ^ Watt, Montgomery (1948) Free-Will and Predestination in Early Islam. London:Luzac Anne Lockyer Jordan Neil Lockyer Edwin Tate; Neil Lockyer; Edwin Tate (25 June 2004). Philosophy permits it to happen in order to make room for the free will of humans. * ^ Wentzel Van Huyssteen (2003). "theological determinism". 
Encyclopedia of science and religion. 1. Macmillan Reference. p. 217. ISBN 978-0-02-865705-9 . Retrieved 22 December 2012. Theological determinism constitutes a fifth kind of determinism. There are two types of theological determinism, both compatible with scientific and metaphysical determinism. In the first, God has perfect knowledge of everything in the universe because God knows about the future will inevitably happen, which means, consequently, that the future is already fixed. * ^ Raymond J. VanArragon (21 October 2010). Key Terms in Philosophy of Religion. Continuum International Publishing Group. p. 21. ISBN 978-1-4411-3867-5 . Retrieved 22 December 2012. Theological determinism, on the other hand, claims that all events are determined by God. On this view, God must not only know but must also cause those events to occur in order for their occurrence to be determined. * ^ Vihvelin, Kadri (2011). "Arguments for Incompatibilism". In Edward N. Zalta. The Stanford Encyclopedia of Philosophy (Spring 2011 ed.). * ^ The Information Philosopher website, "Adequate Determinism", from the site: "We are happy to agree with scientists and philosophers who feel that quantum effects are for the most part negligible in the macroscopic world. We particularly agree that they are negligible when considering the causally determined will and the causally determined actions set in motion by decisions of that will." * ^ Grand Design (2010), page 32: "the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets.", and page 72: "Quantum physics might seem to undermine the idea that nature is governed by laws, but that is not the case. Instead it leads us to accept a new form of determinism: Given the state of a system at some time, the laws of nature determine the probabilities of various futures and pasts rather than determining the future and past with certainty." 
(emphasis in original, discussing a Many worlds interpretation ) * ^ Kent, Adrian. "One world versus many: the inadequacy of Everettian accounts of evolution, probability, and scientific confirmation." Many worlds (2010): 307–354. * ^ Vaidman, Lev. " Many-worlds interpretation Many-worlds interpretation of quantum mechanics." (2002). * ^ de Melo-Martín I (2005). "Firing up the nature/nurture controversy: bioethics and genetic determinism" . J Med Ethics. 31 (9): 526–30. PMC 1734214  . PMID 16131554 . doi :10.1136/jme.2004.008417 . * ^ Andrew, Sluyter. "Neo-Environmental Determinism, Intellectual Damage Control, and Nature/Society Science". Antipode. 4 (35). * ^ J. J. C. Smart, "Free-Will, Praise and Blame,"Mind, July 1961, p.293-4. * ^ Sam Harris, The Moral Landscape (2010), pg.216, note102 * ^ Sam Harris, The Moral Landscape (2010), pg.217, note109 * ^ Baumeister, RF; Masicampo, EJ; Dewall, CN (2009). "Prosocial benefits of feeling free: disbelief in free will increases aggression and reduces helpfulness". Pers Soc Psychol Bull. 35 (2): 260–8. PMID 19141628 . doi :10.1177/0146167208327217 . * ^ By 'soul' in the context of (1) is meant an autonomous immaterial agent that has the power to control the body but not to be controlled by the body (this theory of determinism thus conceives of conscious agents in dualistic terms). Therefore the soul stands to the activities of the individual agent's body as does the creator of the universe to the universe. The creator of the universe put in motion a deterministic system of material entities that would, if left to themselves, carry out the chain of events determined by ordinary causation. But the creator also provided for souls that could exert a causal force analogous to the primordial causal force and alter outcomes in the physical universe via the acts of their bodies. Thus, it emerges that no events in the physical universe are uncaused. 
Some are caused entirely by the original creative act and the way it plays itself out through time, and some are caused by the acts of created souls. But those created souls were not created by means of physical processes involving ordinary causation. They are another order of being entirely, gifted with the power to modify the original creation. However, determinism is not necessarily limited to matter; it can encompass energy as well. The question of how these immaterial entities can act upon material entities is deeply involved in what is generally known as the mind-body problem . It is a significant problem which philosophers have not reached agreement about * ^ Free Will Free Will (Stanford Encyclopedia of Philosophy) * ^ van Inwagen, Peter (2009). The Powers of Rational Beings: Freedom of the Will. Oxford. * ^ Chiesa, Mecca (2004) Radical Behaviorism: The Philosophy & The Science. * ^ Ringen, J. D. (1993). "Adaptation, teleology, and selection by consequences" . Journal of Applied Behavior Analysis. 60 (1): 3–15. PMC 1322142  . PMID 16812698 . doi :10.1901/jeab.1993.60-3 . * ^ Stobaeus Eclogae I 5 ( Heraclitus ) * ^ Stobaeus Eclogae I 4 ( Leucippus ) * ^ Susanne Bobzien Determinism and Freedom in Stoic Philosophy (Oxford 1998) chapter 1. * ^ Susanne Bobzien The Inadvertent Conception and Late Birth of the Free-Will Problem (Phronesis 43, 1998). * ^ Michael Frede A Free Will: Origins of the Notion in Ancient Thought (Berkeley 2011). * ^ Though Moses Maimonides was not arguing against the existence of God, but rather for the incompatibility between the full EXERCISE by God Free Will and specifically Section 6: The Modal Fallacy * ^ The Eight Chapters of Maimonides on Ethics (Semonah Perakhim), edited, annotated, and translated with an Introduction by Joseph I. Gorfinkle, pp. 99–100. (New York: AMS Press), 1966. 
* ^ Swartz, Norman (2003) The Concept of Physical Law / Chapter 10: Free Will Free Will and Determinism ( http://www.sfu.ca/philosophy/physical-law/) * ^ Lewis, E.R.; MacGregor, R.J. (2006). "On Indeterminism, Chaos, and Small Number Particle Systems in the Brain" (PDF). Journal of Integrative Neuroscience . 5 (2): 223–247. doi :10.1142/S0219635206001112 . * ^ Koch, Christof (September 2009). "Free Will, Physics, Biology and the Brain". In Murphy, Nancy; Ellis, George; O'Connor, Timothy. Downward Causation and the Neurobiology of Free Will. New York, USA: Springer . ISBN 978-3-642-03204-2 . * ^ A B C Kenrick, D. T.; Li, N. P.; Butner, J. (2003). "Dynamical evolutionary psychology: Individual decision rules and emergent social norms". Psychological Review. 110 (1): 3–28. PMID 12529056 . doi :10.1037/0033-295x.110.1.3 . * ^ A B C Nowak A., Vallacher R.R., Tesser A., Borkowski W., (2000) "Society of Self: The emergence of collective properties in self-structure", Psychological Review 107. * ^ A B C Epstein J.M. and Axtell R. (1996) Growing Artificial Societies - Social Science from the Bottom. Cambridge MA, MIT Press. * ^ A B C Epstein J.M. (1999) Agent Based Models and Generative Social Science. Complexity, IV (5) * ^ John Conway\'s Game of Life * ^ Karl Popper: Conjectures and refutations * ^ Werndl, Charlotte (2009). "Are Deterministic Descriptions and Indeterministic Descriptions Observationally Equivalent?". Studies in History and Philosophy of Modern Physics. 40 (3): 232–242. Bibcode :2009SHPMP..40..232W. doi :10.1016/j.shpsb.2009.06.004 . * ^ Werndl, Charlotte (2009). Deterministic Versus Indeterministic Descriptions: Not That Different After All?. In: A. Hieke and H. Leitgeb (eds), Reduction, Abstraction, Analysis, Proceedings of the 31st International Ludwig Wittgenstein-Symposium. Ontos, 63-78. * ^ J. Glimm, D. Sharp, Stochastic Differential Equations: Selected Applications in Continuum Physics, in: R.A. Carmona, B. Rozovskii (ed.) 
Stochastic Partial Differential Equations: Six Perspectives, American Mathematical Society (October 1998) (ISBN 0-8218-0806-0 ). * ^ "Struggling with quantum logic: Q&A with Aaron O\'Connell * ^ Heisenberg, Werner (1949). Physikalische Prinzipien der Quantentheorie . Leipzig: Hirzel/University of Chicago Press. p. 4. ISBN 9780486601137 . * ^ A B Grand Design (2010), page 32: "the molecular basis of biology shows that biological processes are governed by the laws of physics and chemistry and therefore are as determined as the orbits of the planets...so it seems that we are no more than biological machines and that free will is just an illusion", and page 72: "Quantum physics might seem to undermine the idea that nature is governed by laws, but that is not the case. Instead it leads us to accept a new form of determinism: Given the state of a system at some time, the laws of nature determine the probabilities of various futures and pasts rather than determining the future and past with certainty." (discussing a Many worlds interpretation ) * ^ Scientific American, "What is Quantum Mechanics Good For?" * ^ Albert Einstein Albert Einstein insisted that, "I am convinced God does not play dice" in a private letter to Max Born Max Born , 4 December 1926, Albert Einstein Archives reel 8, item 180 * ^ Jabs, Arthur (2016). "A conjecture concerning determinism, reduction, and measurement in quantum mechanics". Quantum Studies: Mathematics and Foundations. 3 (4): 279–292. doi :10.1007/s40509-016-0077-7 . * ^ Bishop, Robert C. (2011). "Chaos, Indeterminism, and Free Will". In Kane, Robert. The Oxford Handbook of Free Will Free Will * Daniel Dennett Daniel Dennett (2003) Freedom Evolves. Viking Penguin. * John Earman (2007) "Aspects of Determinism of Physics, Part B. North Holland: 1369-1434. * George Ellis (2005) "Physics and the Real World", Physics Today. * Epstein, J.M. (1999). "Agent Based Models and Generative Social Science". Complexity. IV (5): 5. 
doi :10.1002/(sici)1099-0526(199905/06)4:53.3.co;2-6 . * -------- and Axtell R. (1996) Growing Artificial Societies — Social Science from the Bottom. MIT Press. * Kenrick, D. T.; Li, N. P.; Butner, J. (2003). "Dynamical evolutionary psychology: Individual decision rules and emergent social norms". Psychological Review. 110 (1): 3–28. PMID 12529056 . doi :10.1037/0033-295x.110.1.3 . * Albert Messiah , Quantum Mechanics, English translation by G. M. Temmer of Mécanique Quantique, 1966, John Wiley and Sons, vol. I, chapter IV, section III. * Ernest Nagel (March 3, 1960). " Determinism in history". Philosophy and Phenomenological Research, number 8. International Phenomenological Society. 20 (3): 291–317. JSTOR 2105051 . doi :10.2307/2105051 . (Online version found here) * John T Roberts (2006). "Determinism". In Sahotra Sarkar; Jessica Pfeifer. The Philosophy of Science: A-M. Taylor & Francis. pp. 197 ff. ISBN 0415977096 . * Nowak A., Vallacher R.R., Tesser A., Borkowski W., (2000) "Society of Self: The emergence of collective properties in self-structure", Psychological Review 107. * George Mu
51d4c21a81ff7afe
Orchestrated Objective Reduction ("ORCH OR") Orchestrated Objective Reduction ("ORCH OR") Postby d023n » Sun Mar 25, 2018 11:32 pm Here are a couple of excerpts from the following paper by Sir Roger Penrose and Stuart Hameroff regarding this theory. The first explains the motivations behind the OR part, and the second explains the motivations behind the ORCH part, but I highly recommend reading the entire paper. At the end is a link to a 43 minute Youtube video of Stuart Hameroff summarizing the theory along with a few related ideas, including his complementary "Conscious Pilot" model, which is also quite intriguing. Stuart R. Hameroff, and Roger Penrose 14.4.3. The measurement problem and OR The issue of why we don't directly perceive quantum superpositions is a manifestation of the measurement problem mentioned above. Put more precisely, the measurement problem is the conflict between the two fundamental procedures of quantum mechanics. One of these procedures, referred to as unitary evolution, denoted here by U, is the continuous deterministic evolution of the quantum state (i.e. of the wavefunction of the entire system) according to the fundamental Schrödinger equation. The other is the procedure that is adopted whenever a measurement of the system—or observation—is deemed to have taken place, where the quantum state is discontinuously and probabilistically replaced by another quantum state (referred to, technically, as an eigenstate of a mathematical operator that is taken to describe the measurement). This discontinuous jumping of the state is referred to as the reduction of the state (or the "collapse of the wavefunction"), and will be denoted here by the letter R. 
This conflict between U and R is what is encapsulated by the term "measurement problem" (but perhaps more accurately it may be referred to as "the measurement paradox") and its problematic nature is made manifest when we consider the measuring apparatus itself as a quantum entity, which is part of the entire quantum system consisting of the original system under observation together with this measuring apparatus. The apparatus is, after all, constructed out of the same type of quantum ingredients (electrons, photons, protons, neutrons etc.—or quarks and gluons etc.) as is the system under observation, so it ought to be subject also to the same quantum laws, these being described in terms of the continuous and deterministic U. How, then, can the discontinuous and probabilistic R come about as a result of the interaction (measurement) between two parts of the quantum system? This is the paradox faced by the measurement problem. There are many ways that quantum physicists have attempted to come to terms with this conflict (Bell, 1966; Bohm, 1983; Rae, 2002; Polkinghorne, 2002; Penrose, 2004). In the early 20th century, the Danish physicist Niels Bohr, together with Werner Heisenberg, proposed the pragmatic "Copenhagen interpretation," according to which the wavefunction of a quantum system, evolving according to U, is not assigned any actual physical "reality," but is taken as basically providing the needed "book-keeping" so that eventually probability values can be assigned to the various possible outcomes of a quantum measurement. The measuring device itself is explicitly taken to behave classically and no account is taken of the fact that the device is ultimately built from quantum-level constituents. The probabilities are calculated, once the nature of the measuring device is known, from the state that the wavefunction has U-evolved to at the time of the measurement. 
The discontinuous "jump" that the wavefunction makes upon measurement, according to R, is attributed to the change in "knowledge" that the result of the measurement has on the observer. Since the wavefunction is not assigned physical reality, but is considered to refer merely to the observer's knowledge of the quantum system, the jumping is considered simply to reflect the jump in the observer's knowledge state, rather than in the quantum system under consideration. Many physicists remain unhappy with such a point of view, however, and regard it largely as a "stop-gap," in order that progress can be made in applying the quantum formalism, without this progress being held up by a lack of a serious quantum ontology, which might provide a more complete picture of what is actually going on. One may ask, in particular, what it is about a measuring device that allows one to ignore the fact that it is itself made from quantum constituents and is permitted to be treated entirely classically. A good many proponents of the Copenhagen standpoint would take the view that while the physical measuring apparatus ought actually to be treated as a quantum system, and therefore part of an over-riding wavefunction evolving according to U, it would be the conscious observer, examining the readings on that device, who actually reduces the state, according to R, thereby assigning a physical reality to the particular observed alternative resulting from the measurement. Accordingly, before the intervention of the observer's consciousness, the various alternatives of the result of the measurement including the different states of the measuring apparatus would, in effect, still have to be treated as coexisting in superposition, in accordance with what would be the usual evolution according to U. 
In this way, the Copenhagen viewpoint puts consciousness outside science, and does not seriously address the ontological nature or physical role of superposition itself nor the question of how large quantum superpositions like Schrödinger's superposed alive and dead cat (see below) might actually become one thing or another. A more extreme variant of this approach is the "multiple worlds hypothesis" of Everett (1957) in which each possibility in a superposition evolves to form its own universe, resulting in an infinite multitude of coexisting "parallel" worlds. The stream of consciousness of the observer is supposed somehow to "split," so that there is one in each of the worlds—at least in those worlds for which the observer remains alive and conscious. Each instance of the observer's consciousness experiences a separate independent world, and is not directly aware of any of the other worlds. A more "down-to-earth" viewpoint is that of environmental decoherence, in which interaction of a superposition with its environment "erodes" quantum states, so that instead of a single wavefunction being used to describe the state, a more complicated entity is used, referred to as a density matrix. However, decoherence does not provide a consistent ontology for the reality of the world, in relation to the density matrix (see, for example, Penrose (1994), Secs. 29.3–29.6), and provides merely a pragmatic procedure. Moreover, it does not address the issue of how R might arise in isolated systems, nor the nature of isolation, in which an external "environment" would not be involved, nor does it tell us which part of a system is to be regarded as the 'environment' part, and it provides no limit to the size of that part which can remain subject to quantum superposition. Still other approaches include various types of OR in which a specific objective threshold is proposed to cause quantum state reduction (Percival, 1994; Moroz et al., 1998; Ghirardi et al., 1986). 
The specific OR scheme that is used in Orch OR will be described below. The quantum pioneer Erwin Schrödinger took pains to point out the difficulties that confront the U-evolution of a quantum system with his still-famous thought experiment called "Schrödinger's cat" (Schrödinger, 1935). Here, the fate of a cat in a box is determined by magnifying a quantum event (say the decay of a radioactive atom, within a specific time period that would provide a 50% probability of decay) to a macroscopic action which would kill the cat, so that according to Schrödinger's own U-evolution the cat would be in a quantum superposition of being both dead and alive at the same time. According to this perspective on the Copenhagen interpretation, if this U-evolution is maintained until the box is opened and the cat observed, then it would have to be the conscious human observing the cat that results in the cat becoming either dead or alive (unless, of course, the cat's own consciousness could be considered to have already served this purpose). Schrödinger intended to illustrate the absurdity of the direct applicability of the rules of quantum mechanics (including his own U-evolution) when applied at the level of a cat. Like Einstein, he regarded quantum mechanics as an incomplete theory, and his 'cat' provided an excellent example for emphasizing this incompleteness. There is a need for something to be done about quantum mechanics, irrespective of the issue of its relevance to consciousness. 14.5.1. Orch OR quantum computing in the brain Penrose (1989, 1994) suggested that consciousness depends in some way on processes of the general nature of quantum computations occurring in the brain, these being terminated by some form of OR. Here the term "quantum computation" is being used in a loose sense, in which information is encoded in some discrete (not necessarily binary) physical form, and where the evolution is determined according to the U process (Schrödinger's equation). 
In the standard picture of quantum computers (Benioff, 1982; Deutsch, 1985; Feynman, 1986), information is represented not just as bits of either 1 or 0, but during the U process, also as quantum superposition of both 1 and 0 together (quantum bits or "qubits") where, moreover, large-scale entanglements among many qubits would also be involved. These entangled qubits would compute, in accordance with the Schrödinger equation, in order to enable complex and highly efficient potential parallel processing. As originally conceived, quantum computers would indeed act strictly in accordance with U, but at some point a measurement is made causing a quantum state reduction R (with some randomness normally introduced). Accordingly, the output is in the form of a definite state in terms of classical bits. A proposal was made in Penrose (1989) that something analogous to quantum computing, proceeding by the Schrödinger equation without decoherence, could well be acting in the brain, but where, for conscious processes, this would have to terminate in accordance with some threshold for self-collapse by a form of non-computable OR. A quantum computation terminating by OR could thus be associated with consciousness. However, no plausible biological candidate for quantum computing in the brain had been available to him, as he was then unfamiliar with microtubules. Penrose and Hameroff teamed up in the early 1990s when, fortunately, the DP form of OR mechanism was then at hand to be applied in extending the microtubule-automata models for consciousness as had been developed by Hameroff and colleagues. As described in Sec. 2.3, the most logical strategic site for coherent microtubule Orch OR and consciousness is in post-synaptic dendrites and soma (in which microtubules are uniquely arrayed and stabilized) during integration phases in integrate-and-fire brain neurons. 
Synaptic inputs could "orchestrate" tubulin states governed by quantum dipoles, leading to tubulin superposition in vast numbers of microtubules all involved quantum-coherently together in a large-scale quantum state, where entanglement and quantum computation takes place during integration. The termination, by OR, of this orchestrated quantum computation at the end of integration phases would select microtubule states which could then influence and regulate axonal firings, thus controlling conscious behavior. Quantum states in dendrites and soma of a particular neuron could entangle with microtubules in the dendritic tree of that neuron, and also in neighboring neurons via dendritic–dendritic (or dendritic–interneuron–dendritic) gap junctions, enabling quantum entanglement of superposed microtubule tubulins among many neurons (Fig. 1). This allows unity and binding of conscious content, and a large EG which reaches threshold (by τ ≈ ℏ/EG) quickly, such as at end-integration in EEG-relevant periods of time, e.g., τ = 0.5 s to τ = 10−2 s. In the Orch OR "beat frequency" proposal, we envisage that τ could be far briefer, e.g., 10−7 s, a time interval already shown by Bandyopadhyay’s group to sustain apparent quantum coherence in microtubules. In either case, or mixture of both, Orch OR provides a possible way to account for frequent moments of conscious awareness and choices governing conscious behavior. Section 3 described microtubule automata, in which tubulins represent distinct information states interacting with neighbor states according to rules based on dipole couplings which can apply to either London force electric dipoles, or electron spin magnetic dipoles. These dipoles move atomic nuclei slightly (femtometers), and become quantum superpositioned (along with superpositioned atomic nuclei), entangled and perform quantum computation in a U process. 
In dendrites and soma of brain neurons, synaptic inputs could encode memory in alternating classical phases, thereby avoiding random environmental decoherence, to "orchestrate" U quantum processes, enabling them to reach threshold at time τ for orchestrated objective reduction "Orch OR" by τ ≈ ℏ/E_G. At that time, according to this proposal, a moment of conscious experience occurs, and tubulin states are selected which influence axonal firing, encode memory and regulate synaptic plasticity. An Orch OR moment is shown schematically in Fig. 10. The top panel shows microtubule automata with (gray) superposition E_G increasing over a period up to time τ, evolving deterministically and algorithmically by the Schrödinger equation (U) until threshold for OR by τ ≈ ℏ/E_G is reached, at which time Orch OR occurs, accompanied by a moment of conscious experience. In the "beat frequency" modification of this proposal, these Orch OR events could occur on a faster timescale, for example in megahertz. Their far slower beat frequencies might then constitute conscious moments. The particular selection of conscious perceptions and choices would, according to standard quantum theory, involve an entirely random process, but according to Orch OR, the (objective) reduction could act to select specific states in accordance with some non-computational new physics (in line with suggestions made in Penrose (1989, 1994)). Figure 10 (middle) depicts alternative superposed space–time curvatures (Figs. 8 and 9) corresponding to the superpositions portrayed in MTs in the top of the figure, reaching threshold at the moment of OR and selecting one space–time. Figure 10 (bottom) shows a schematic of the same process. The idea is that consciousness is associated with this (gravitational) OR process, but (see Sec. 
4.5) occurs significantly only when (1) the alternatives are part of some highly organized cognitive structure capable of information processing, so that OR occurs in an extremely orchestrated form, with vast numbers of microtubules acting coherently, in order that there is sufficient mass displacement overall for the τ ≈ ℏ/E_G criterion to be satisfied. (2) Interaction with environment must be avoided long enough during the U process evolution so strictly orchestrated components of the superposition reach OR threshold without too much randomness, and reflect a significant non-computable influence. Only then does a recognizably conscious Orch OR event take place. On the other hand, we may consider that any individual occurrence of OR without orchestration would be a moment of random proto-consciousness lacking cognition and meaningful content. We shall be seeing orchestrated OR in more detail shortly, together with its particular relevance to microtubules. In any case, we recognize that the experiential elements of proto-consciousness would be intimately tied in with the most primitive Planck-level ingredients of space–time geometry, these presumed "ingredients" being taken to be at the absurdly tiny level of 10⁻³⁵ m and 10⁻⁴³ s, a distance and a time some 20 orders of magnitude smaller than those of normal particle-physics scales and their most rapid processes, and they are smaller by far than biological scales and processes. These scales refer only to the normally extremely tiny differences in space–time geometry between different states in superposition, the separated states themselves being enormously larger. OR is deemed to take place when such tiny space–time differences reach the Planck level (roughly speaking). Owing to the extreme weakness of gravitational forces as compared with those of the chemical and electric forces of biology, the energy E_G is liable to be far smaller than any energy that arises directly from biological processes. 
OR acts effectively instantaneously as a choice between dynamical alternatives (a choice that is an integral part of the relevant quantum dynamics) and E_G is not to be thought of as being in direct competition with any of the usual biological energies, as it plays a completely different role, supplying a needed energy uncertainty that then allows a choice to be made between the separated space–time geometries, rather than providing an actual energy that enters into any considerations of energy balance that would be of direct relevance to chemical or normal physical processes. This energy uncertainty is the key ingredient of the computation of the reduction time τ, and it is appropriate that this energy uncertainty is indeed far smaller than the energies that are normally under consideration with regard to chemical energy balance, etc. If it were not so, then there would be a danger of conflict with normal considerations of energy balance. Nevertheless, the extreme weakness of gravity tells us there must be a considerable amount of material involved in the coherent mass displacement between superposed structures in order that τ can be small enough to be playing its necessary role in the relevant OR processes in the brain. These superposed structures should also process information and regulate neuronal physiology. According to Orch OR, microtubules are central to these structures, and some form of biological quantum computation in microtubules (perhaps in the more symmetrical A-lattice microtubules) would have to be involved to provide a subtle yet direct connection to Planck-scale geometry, leading eventually to discrete moments of actual conscious experience and choice. As described above, these are presumed to occur primarily in dendritic–somatic microtubules during integration phases in integrate-and-fire brain neurons, resulting in sequences of Orch OR conscious moments occurring within brain physiology, and able to regulate neuronal firings and behavior. Dr. 
Stuart Hameroff - Quantum Consciousness and its Nature ... In Microtubules? - Brief History.

Posts: 35
Joined: Sun Oct 01, 2017 10:16 pm

Re: Orchestrated Objective Reduction ("ORCH OR")

Postby d023n » Tue Mar 27, 2018 12:58 pm

If anyone is interested in this so far but also likes to mix philosophy in with their physics, Marcus Arvan has an extremely fascinating take on the idea that our universe is a simulated reality, which happens to fit quite well with Penrose's Objective Reduction theory. It is called the Peer-to-Peer Simulation Hypothesis.

The P2P hypothesis holds that we are living in a peer-to-peer networked computer simulation. Some computer simulations have a "dedicated" central server (a single computer running the simulation that all other computers access). However, peer-to-peer networked simulations have no central server. The "simulated reality" is simply a vast network of different computers (a "cloud") running the simulation in parallel.

The Peer-to-Peer Simulation Hypothesis explains features of our world that otherwise have no known explanation. Physicists, to this very date, do not have any deep theory of why our world is quantum mechanical or relativistic. The equations of quantum mechanics and relativity merely reflect the fact that our world has these strange features. The Peer-to-Peer Simulation Hypothesis provides the first unified explanation of why our world is quantum mechanical and relativistic. It shows that "quantum mechanics" and "relativity" emerge naturally and inevitably from the purely computational structure of a peer-to-peer simulation.

Here is an excerpt from one of Arvan's papers.

A Unified Explanation of Quantum Phenomena? The Case for the Peer-to-Peer Simulation Hypothesis as an Interdisciplinary Research Program

Online computer simulations are by now familiar parts of our world. 
Computer scientists and videogame companies have created sophisticated simulated environments in which “players” can navigate and interact with one another online. These simulated environments often have, within them, functional analogues of the kinds of ordinary objects we interact with in our world: they have simulated rocks, simulated cars, simulated guns, simulated bullets etc. There are, however, two distinct types of online simulations: (1) “dedicated-server” simulations, and (2) peer-to-peer (P2P) simulations. Allow me to explain the difference. A dedicated server online simulation is one in which one computer on the network (the “dedicated server”) represents where objects are in the simulated environment (see Figure 1). Every object in a dedicated server simulation thus has determinate properties within the simulation, including determinate positions and velocities. Moreover, provided the other computers hooked up to the simulation interact with the dedicated server properly, each computer on the network will take the same measurements, measuring objects in the simulated environment as having precisely the properties (e.g. location, velocity, etc.) represented on the server. A peer-to-peer (P2P) networked simulation, however, is very different. In a P2P network, no single computer anywhere on the network encodes where objects in the simulated environment “objectively” are. Rather, the simulated environment is comprised by the entire network of computers on the network, each of which takes independent measurements at every instant, measurements which, in turn, at every successive instant, alter the measurements that other computers on the network will make (see Figure 2). 
In other words, a P2P simulation simply is an array of computers networked together where (A) each computer simulates the environment in parallel to every other computer on the network, and (B) the totality of individual measurements of each machine on the network at any given instant represents "the simulated environment" that all computers on the network "experience in common." The following is something I wrote on reddit a couple of weeks ago about the possible connection between Penrose's Objective Reduction theory and Arvan's P2P Simulation Hypothesis. I have bolded the sentence that describes an effect that looks similar to Sir Roger Penrose's Objective Reduction idea, but from the outside, so to speak. The idea is that the sum of the instances of an object, such as an electron, that are spread around the peer-to-peer network running the simulation is what we on the inside would describe as a superposition. When the degree of divergence among these instances reaches a critical threshold, a single instance is selected to update all of the peers, an event that we on the inside would describe as a collapse, reduction, or measurement. Furthermore, individual instances of an object might interact differently with other objects before reaching the threshold, something that we on the inside would describe as entanglement, thereby accelerating the process of divergence and so hastening the moment of reduction. Objective Reduction explains that this threshold is achieved, and so reduction occurs, when the product of the age of the superposition and its "self-energy" (a quantity that Penrose explains in his paper linked above) reaches Planck's constant; or, the time until reduction is proportional to Planck's constant divided by the "self-energy" (t ~ h / E_G).
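The divergence-then-sync mechanism described above can be caricatured in a few lines of Python. This is purely a toy sketch of the poster's description, not anything from Arvan's papers: each "peer" holds its own drifting instance of a single simulated quantity, and the `DIVERGENCE_THRESHOLD` and drift size are invented for illustration.

```python
import random

random.seed(7)  # deterministic toy run

DIVERGENCE_THRESHOLD = 1.0  # arbitrary illustrative value
peers = [0.0] * 5           # each peer's instance of one simulated quantity

for step in range(1000):
    # Peers simulate in parallel and slowly diverge ("superposition" spreads).
    peers = [p + random.gauss(0, 0.05) for p in peers]
    if max(peers) - min(peers) > DIVERGENCE_THRESHOLD:
        # Divergence hit the threshold: one instance updates every peer,
        # which the inside view would describe as a collapse/reduction.
        chosen = random.choice(peers)
        peers = [chosen] * len(peers)

# Because the check runs every step, the spread never ends a step above the threshold.
print(max(peers) - min(peers))
```

Entanglement, in this caricature, would correspond to coupling two such quantities so that their instances diverge faster, shortening the time to the next forced sync.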
For example, immediately after the measurement of an electron, it once again begins to smear out into a superposition, and, because an electron has such a small mass, its superposition can spread extremely far across spacetime before its "self-energy" becomes great enough to reach the reduction threshold. However, if the electron superposition were to become entangled with another superposition, its time until reduction would then depend upon the new superposition of both together. In fact, the new "self-energy" of the larger superposition might be enough to satisfy the threshold, causing everything involved to instantly reduce, and so selecting a definite state for the original electron from which it would begin the process all over again. Of course, why the threshold behaves this way is almost certainly related to the fundamental programming and computational limitations of the peer-to-peer network itself. It seems sensible to say that the more information that is being processed, the slower the processing occurs, manifesting on the inside as time dilation effects, serving as an interesting way to explain why time for massive objects runs more slowly and implying that black holes are areas of the simulation that have frozen entirely. If superpositions were allowed to grow without limit, the network would quickly become overwhelmed with keeping track of all the increasingly divergent paths, and everything would inevitably freeze. The only way around this problem would be if there were simply more and more computational resources available for the peers, although this would seem to have problematic consequences for the appearance of time dilation effects.
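The quoted relation t ~ h/E_G can be turned into a one-line toy calculation. The reduced Planck constant below is real; the self-energy values are purely illustrative, chosen only to show the inverse scaling between gravitational self-energy and superposition lifetime:

```python
hbar = 1.055e-34  # reduced Planck constant, J*s

def reduction_time(e_g):
    """Penrose-style estimate of superposition lifetime (s) for
    gravitational self-energy e_g in joules: tau ~ hbar / e_g."""
    return hbar / e_g

# Tiny self-energy (well-isolated, electron-scale superposition): enormous lifetime.
# Larger self-energy (many coherently displaced nucleons): near-instant reduction.
for e_g in (1e-45, 1e-34, 1e-20):
    print(f"E_G = {e_g:.0e} J  ->  tau ~ {reduction_time(e_g):.1e} s")
```

The inverse relationship is the point of the electron example above: a small E_G means the superposition can persist and spread for a long time before reducing.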
Effectively unlimited resources would actually be the situation that the "many worlds" interpretation of quantum mechanics describes and would not be a peer-to-peer type simulation; it would just be a huge collection of the "dedicated server" type simulations spawning more and more "dedicated server" type simulations for every interaction. This could still work, but it would be insanely more complicated than the peer-to-peer setup. All of this being said, positive experimental evidence for Objective Reduction still would not be "proof" that our universe is a simulation. To be quite honest, I think that the whole simulation idea isn't a matter of proof at all and is instead a matter of axiomatic construction. However, it does really look like the P2P idea could be a way for us to more efficiently simulate our own physics for whatever reasons we might want, and, if we did start to think about physics in the context of the simulation idea, we might be able to more effectively figure out things we otherwise wouldn't necessarily think to look for, things like how to manipulate the computational architecture in unintended ways along the same lines as the recently discovered Spectre and Meltdown vulnerabilities. Who knows, maybe seeing the idea of "existence" as necessarily computational or "simulated" in nature is the first step in actually understanding presently pointless philosophical topics like where everything comes from, what death might really mean, and things like that. Heck, we might even figure out a way to get out of this universe.
Re: Orchestrated Objective Reduction ("ORCH OR") Postby d023n » Fri Nov 30, 2018 11:49 pm
If the brain starts acting before the perception of conscious choice, does that mean that consciousness is just an illusion, or is retrocausality somehow a thing? Sir Roger Penrose has a fascinating and surprisingly simple idea about how it works.
The TL;DR is at the bottom down there, but first, here is the relevant context. So, the last 2 posts were back in March, but in April, the University of Arizona Center for Consciousness Studies held its biennial Science of Consciousness Conference in Tucson (YouTube channel link), which "is an interdisciplinary conference aimed at rigorous and leading edge approaches to all aspects of the study of conscious experience. These include neuroscience, psychology, philosophy, cognitive science, artificial intelligence, molecular biology, medicine, quantum physics, and cosmology, as well as art, technology, and experiential and contemplative approaches. The conference is the largest and longest-running interdisciplinary gathering probing fundamental questions related to conscious experience." Yada yada yada... I should have posted this here at the end of July when the YouTube channel fiiinally added the Plenary session where Sir Roger Penrose spoke about Orchestrated Objective Reduction, but here it is now: Why Algorithmic Systems Possess No Understanding (~35 minute talk, starting at the already queued up 01:04:38 mark). However, the part about apparent retrocausality starts a little after the 1 hour 35 minute mark, and he pulls out his handy transparencies (he loves his transparencies) after about 60 seconds and then explains his idea until just after the 1 hour 41 minute mark. Anyway, the TL;DR is this: (1) a wavefunction in the brain begins to spread out in an orchestrated manner by way of the isolated environment within neuronal microtubules, but still taking many possible, superposed paths; (2) the wavefunction reaches its threshold of objective reduction and reduces, as a moment of meaningful, human-level conscious experience (e.g. a choice); (3) and now, because only the single path which led to that choice remains in the actual history of the universe, it appears as if the universe made the choice way back at moment (1) instead of at moment (2).
In other words, wavefunctions collapse at their end according to all of the paths they represent, but leave a single path that appears in retrospect to have been selected from the start. I highly suggest listening to how Penrose explains it though, if you haven't already.
Davisson–Germer experiment From Wikipedia, the free encyclopedia The Davisson–Germer experiment was a physics experiment conducted in 1927 which confirmed the de Broglie hypothesis, which says that particles of matter (such as electrons) have wave properties. This demonstration of wave-particle duality was important historically in the establishment of quantum mechanics and of the Schrödinger equation. In 1924 Louis de Broglie presented his thesis on wave-particle duality, proposing the idea that all matter displays the wave-particle duality of photons.[1] According to de Broglie, for all matter and for radiation alike, the energy E of a particle is related to the frequency ν of its associated wave by the Planck relation E = hν, and the momentum p of the particle is related to its wavelength λ by what is now known as the de Broglie relation λ = h/p, where h is Planck's constant. In 1926, upon learning of the preliminary results of Davisson and Germer, Walter Elsasser remarked that the wave-like nature of matter might be investigated by electron scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through X-ray scattering experiments on crystalline solids.[1][2] In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target.[3] The angular dependence of the reflected electron intensity was measured, and was determined to have the same diffraction pattern as that predicted by Bragg for X-rays.
This was also replicated by George Paget Thomson.[1] The experiment confirmed the de Broglie hypothesis: matter displays wave-like behaviour. This, in combination with Arthur Compton's experiment, established the wave-particle duality hypothesis, which was a fundamental step in quantum theory. The experiment consisted of firing an electron beam from an electron gun at a nickel crystal at normal incidence (i.e. perpendicular to the surface of the crystal). The electron gun consisted of a heated filament that released thermally excited electrons, which were then accelerated through a potential difference V, giving them a kinetic energy of eV, where e is the charge of an electron. An electron detector was placed at an angle θ = 50° and measured the number of electrons that were scattered at that particular angle.[1] According to the de Broglie relation, a beam of 54 eV electrons has a wavelength of 0.167 nm, in close agreement with the 0.165 nm obtained by applying Bragg's law, nλ = 2d sin(θ), to the observed diffraction maximum.
References
2. ^ H. Rubin. "Walter M. Elsasser". National Academies Press. Retrieved 2008-08-26.
3. ^ C. Davisson, L. H. Germer (1927). "Reflection of electrons by a crystal of nickel". Nature 119: 558–560. doi:10.1038/119558a0.
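The 54 eV figure can be checked directly from the de Broglie relation λ = h/p, with the momentum obtained from the kinetic energy eV. A minimal non-relativistic sketch (constants rounded to four significant figures):

```python
import math

h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron rest mass, kg
e = 1.602e-19    # elementary charge, C

def de_broglie_wavelength(volts):
    """Wavelength (m) of an electron accelerated through `volts` volts:
    lambda = h / sqrt(2 * m_e * e * V), since p = sqrt(2 * m_e * KE)."""
    return h / math.sqrt(2 * m_e * e * volts)

print(f"{de_broglie_wavelength(54) * 1e9:.3f} nm")  # prints 0.167 nm
```

At 54 V the relativistic correction is negligible, so the non-relativistic momentum is adequate here.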
Communications on Pure & Applied Analysis May 2016, Volume 15, Issue 3 Nonexistence of positive solutions for polyharmonic systems in $\mathbb{R}^N_+$ Yuxia Guo and Bo Li 2016, 15(3): 701-713 doi: 10.3934/cpaa.2016.15.701 +[Abstract](710) +[PDF](393.7KB) In this paper, we study the monotonicity and nonexistence of positive solutions for polyharmonic systems $\left\{\begin{array}{rlll} (-\Delta)^m u&=&f(u, v)\\ (-\Delta)^m v&=&g(u, v) \end{array}\right.\;\hbox{in}\;\mathbb{R}^N_+.$ By using the Alexandrov-Serrin method of moving planes combined with integral inequalities and Sobolev's inequality in a narrow domain, we prove the monotonicity of positive solutions for semilinear polyharmonic systems in $\mathbb{R}^N_+.$ As a result, the nonexistence of positive weak solutions to the system is obtained. On Compactness Conditions for the $p$-Laplacian Pavel Jirásek 2016, 15(3): 715-726 doi: 10.3934/cpaa.2016.15.715 +[Abstract](835) +[PDF](369.4KB) We investigate the geometry and validity of various compactness conditions (e.g. the Palais-Smale condition) for the energy functional \begin{eqnarray} J_{\lambda_1}(u)=\frac{1}{p}\int_\Omega |\nabla u|^p \ \mathrm{d}x- \frac{\lambda_1}{p}\int_\Omega|u|^p \ \mathrm{d}x - \int_\Omega fu \ \mathrm{d}x \nonumber \end{eqnarray} for $u \in W^{1,p}_0(\Omega)$, $1 < p < \infty$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$, $f \in L^\infty(\Omega)$ is a given function and $-\lambda_1<0$ is the first eigenvalue of the Dirichlet $p$-Laplacian $\Delta_p$ on $W_0^{1,p}(\Omega)$.
Well-posedness and ill-posedness results for the Novikov-Veselov equation Yannis Angelopoulos 2016, 15(3): 727-760 doi: 10.3934/cpaa.2016.15.727 +[Abstract](889) +[PDF](563.2KB) In this paper we study the Novikov-Veselov equation and the related modified Novikov-Veselov equation in certain Sobolev spaces. We prove local well-posedness in $H^s (\mathbb{R}^2)$ for $s > \frac{1}{2}$ for the Novikov-Veselov equation, and local well-posedness in $H^s (\mathbb{R}^2)$ for $s > 1$ for the modified Novikov-Veselov equation. Finally, we point out some ill-posedness issues for the Novikov-Veselov equation in the supercritical regime. A weighted $L_p$-theory for second-order parabolic and elliptic partial differential systems on a half space Kyeong-Hun Kim and Kijung Lee 2016, 15(3): 761-794 doi: 10.3934/cpaa.2016.15.761 +[Abstract](853) +[PDF](547.8KB) In this article we consider parabolic systems and the $L_p$ regularity of their solutions. With a zero boundary condition the solutions exhibit poor regularity near the boundary. This article addresses a way of describing this regularity behavior. Our space domain is a half space, and we incorporate an appropriate weight into our function spaces. In this weighted Sobolev space setting we develop a Fefferman-Stein theorem, a Hardy-Littlewood theorem and sharp function estimates. Using these, we prove uniqueness and existence results for second-order elliptic and parabolic partial differential systems in weighted Sobolev spaces. A class of virus dynamic model with inhibitory effect on the growth of uninfected T cells caused by infected T cells and its stability analysis Wenbo Cheng, Wanbiao Ma and Songbai Guo 2016, 15(3): 795-806 doi: 10.3934/cpaa.2016.15.795 +[Abstract](977) +[PDF](413.7KB) A class of virus dynamic models with an inhibitory effect on the growth of uninfected T cells caused by infected T cells is proposed.
It is shown that the infection-free equilibrium of the model is globally asymptotically stable if the reproduction number $R_0$ is less than one, and that the infected equilibrium of the model is locally asymptotically stable if the reproduction number $R_0$ is larger than one. Furthermore, it is also shown that the model is uniformly persistent, and some explicit formulae for the lower bounds of the solutions of the model are obtained. A Liouville-type theorem for higher order elliptic systems of Hénon-Lane-Emden type Frank Arthur and Xiaodong Yan 2016, 15(3): 807-830 doi: 10.3934/cpaa.2016.15.807 +[Abstract](1137) +[PDF](476.2KB) We prove there are no positive solutions with slow decay rates to the higher order elliptic system \begin{eqnarray} \left\{ \begin{array}{c} \left( -\Delta \right) ^{m}u=\left\vert x\right\vert ^{a}v^{p} \\ \left( -\Delta \right) ^{m}v=\left\vert x\right\vert ^{b}u^{q} \end{array} \text{ in }\mathbb{R}^{N}\right. \end{eqnarray} if $p\geq 1,$ $q\geq 1,$ $\left( p,q\right) \neq \left( 1,1\right) $ satisfy $\frac{1+\frac{a}{N}}{p+1}+\frac{1+\frac{b}{N}}{q+1}>1-\frac{2m}{N} $ and \begin{eqnarray} \max \left( \frac{2m\left( p+1\right) +a+bp}{pq-1},\frac{2m\left( q+1\right) +aq+b}{pq-1}\right) >N-2m-1. \end{eqnarray} Moreover, if $N=2m+1$ or $N=2m+2,$ this system admits no positive solutions with slow decay rates if $p\geq 1,$ $q\geq 1,$ $\left( p,q\right) \neq \left( 1,1\right) $ satisfy $\frac{1}{ p+1}+\frac{1}{q+1}>1-\frac{2m}{N}.$ Well-posedness and scattering for fourth order nonlinear Schrödinger type equations at the scaling critical regularity Hiroyuki Hirayama and Mamoru Okamoto 2016, 15(3): 831-851 doi: 10.3934/cpaa.2016.15.831 +[Abstract](1046) +[PDF](481.9KB) In the present paper, we consider the Cauchy problem of fourth order nonlinear Schrödinger type equations with derivative nonlinearity.
In the one-dimensional case, small-data global well-posedness and scattering for the fourth order nonlinear Schrödinger equation with the nonlinear term $\partial _x (\overline{u}^4)$ are shown in the scaling invariant space $\dot{H}^{-1/2}$. Furthermore, we show that the same result holds for $d \ge 2$ and derivative polynomial type nonlinearities, for example $|\nabla | (u^m)$ with $(m-1)d \ge 4$, where $d$ denotes the space dimension. A class of generalized quasilinear Schrödinger equations Yaotian Shen and Youjun Wang 2016, 15(3): 853-870 doi: 10.3934/cpaa.2016.15.853 +[Abstract](762) +[PDF](438.2KB) We establish the existence of nontrivial solutions for the following quasilinear Schrödinger equation with critical Sobolev exponent: \begin{eqnarray} -\Delta u+V(x) u-\Delta [l(u^2)]l'(u^2)u= \lambda u^{\alpha2^*-1}+h(u),\ \ x\in \mathbb{R}^N, \end{eqnarray} where $V(x):\mathbb{R}^N\rightarrow \mathbb{R}$ is a given potential and $l,h$ are real functions, $\lambda\geq 0$, $\alpha>1$, $2^*=2N/(N-2)$, $N\geq 3$. Our results cover two physical models, $l(s)=s^{\frac{\alpha}{2}}$ and $l(s) = (1+s)^{\frac{\alpha}{2}}$ with $\alpha\geq 3/2$. Traveling waves for a diffusive SEIR epidemic model Zhiting Xu 2016, 15(3): 871-892 doi: 10.3934/cpaa.2016.15.871 +[Abstract](889) +[PDF](461.7KB) In this paper, we propose a diffusive SEIR epidemic model with saturating incidence rate. We first study the well posedness of the model, and give the explicit formula of the basic reproduction number $\mathcal{R}_0$. And hence, we show that if $\mathcal{R}_0>1$, then there exists a positive constant $c^*>0$ such that for each $c>c^*$, the model admits a nontrivial traveling wave solution, and if $\mathcal{R}_0\leq1$ and $c\geq 0$ (or, $\mathcal{R}_0>1$ and $c\in[0,c^*)$), then the model has no nontrivial traveling wave solutions. Consequently, we confirm that the constant $c^*$ is indeed the minimal wave speed.
The proof of the main results is mainly based on the Schauder fixed point theorem and the Laplace transform. Qualitative properties of solutions to an integral system associated with the Bessel potential Lu Chen, Zhao Liu and Guozhen Lu 2016, 15(3): 893-906 doi: 10.3934/cpaa.2016.15.893 +[Abstract](843) +[PDF](403.5KB) In this paper, we study a differential system associated with the Bessel potential: \begin{eqnarray}\begin{cases} (I-\Delta)^{\frac{\alpha}{2}}u(x)=f_1(u(x),v(x)),\\ (I-\Delta)^{\frac{\alpha}{2}}v(x)=f_2(u(x),v(x)), \end{cases}\end{eqnarray} where $f_1(u(x),v(x))=\lambda_1u^{p_1}(x)+\mu_1v^{q_1}(x)+\gamma_1u^{\alpha_1}(x)v^{\beta_1}(x)$, $f_2(u(x),v(x))=\lambda_2u^{p_2}(x)+\mu_2v^{q_2}(x)+\gamma_2u^{\alpha_2}(x)v^{\beta_2}(x)$, $I$ is the identity operator and $\Delta=\sum_{j=1}^{n}\frac{\partial^2}{\partial x^2_j}$ is the Laplacian operator in $\mathbb{R}^n$. Under some appropriate conditions, this differential system is equivalent to an integral system of Bessel potential type. By the regularity lifting method developed in [4] and [18], we obtain the regularity of solutions to the integral system. We then apply the moving planes method to obtain radial symmetry and monotonicity of positive solutions. We also establish a uniqueness theorem for radially symmetric solutions. Our nonlinear terms $f_1(u(x), v(x))$ and $f_2(u(x), v(x))$ are quite general, and our results substantially extend the earlier ones, even in the case of a single equation. On the differentiability of the solutions of non-local Isaacs equations involving $\frac{1}{2}$-Laplacian Imran H. Biswas and Indranil Chowdhury 2016, 15(3): 907-927 doi: 10.3934/cpaa.2016.15.907 +[Abstract](738) +[PDF](476.4KB) We derive a $C^{1,\sigma}$-estimate for the solutions of a class of non-local elliptic Bellman-Isaacs equations. These equations are fully nonlinear and are associated with infinite horizon stochastic differential game problems involving jump-diffusions.
The non-locality is represented by the presence of a fractional order diffusion term, and we deal with the particular case of the $\frac 12$-Laplacian, where the order $\frac 12$ is known as the critical order in this context. More importantly, these equations are not translation invariant, and we prove that the viscosity solutions of such equations are $C^{1,\sigma}$, making the equations classically solvable. Oscillatory integrals related to Carleson's theorem: fractional monomials Shaoming Guo 2016, 15(3): 929-946 doi: 10.3934/cpaa.2016.15.929 +[Abstract](724) +[PDF](441.3KB) Stein and Wainger [21] proved the $L^p$ bounds of the polynomial Carleson operator for all integer-power polynomials without a linear term. In the present paper, we partially generalise this result to all fractional monomials in dimension one. Moreover, the connections with Carleson's theorem and the Hilbert transform along vector fields or (variable) curves are also discussed in detail. Layer solutions for an Allen-Cahn type system driven by the fractional Laplacian Yan Hu 2016, 15(3): 947-964 doi: 10.3934/cpaa.2016.15.947 +[Abstract](866) +[PDF](439.7KB) We study entire solutions in $R$ of the nonlocal system $(-\Delta)^{s}U+\nabla W(U)=(0,0)$ where $W:R^{2}\rightarrow R$ is a double well potential. We seek solutions $U$ which are heteroclinic in the sense that they connect at infinity a pair of global minima of $W$ and are also global minimizers. Under some symmetry assumptions on the potential $W$, we prove the existence of such solutions for $s>\frac{1}{2}$, and give the asymptotic behavior as $x\rightarrow\pm\infty$.
Infinitely many solutions for nonlinear Schrödinger system with non-symmetric potentials Weiwei Ao, Liping Wang and Wei Yao 2016, 15(3): 965-989 doi: 10.3934/cpaa.2016.15.965 +[Abstract](792) +[PDF](496.0KB) Without any symmetry conditions on the potentials, we prove that the following nonlinear Schrödinger system \begin{eqnarray} \left\{\begin{array}{ll} \Delta u-P(x)u+\mu_1u^3+\beta uv^2=0, \quad &\mbox{in} \quad R^2\\ \Delta v-Q(x)v+\mu_2v^3+\beta vu^2=0, \quad &\mbox{in} \quad R^2 \end{array} \right. \end{eqnarray} has infinitely many non-radial solutions, provided the potentials $P(x)$ and $Q(x)$ decay at suitable rates at infinity. This continues the work of [8]. In particular, when $P(x)$ and $Q(x)$ are symmetric, this result was proved in [18]. Ground state solutions for fractional Schrödinger equations with critical Sobolev exponent Kaimin Teng and Xiumei He 2016, 15(3): 991-1008 doi: 10.3934/cpaa.2016.15.991 +[Abstract](897) +[PDF](430.2KB) In this paper, we establish the existence of ground state solutions for fractional Schrödinger equations with a critical exponent. The methods used here are based on the $s$-harmonic extension technique of Caffarelli and Silvestre, the concentration-compactness principle of Lions, and methods of Brezis and Nirenberg. Global regular solutions to two-dimensional thermoviscoelasticity Jerzy Gawinecki and Wojciech M. Zajączkowski 2016, 15(3): 1009-1028 doi: 10.3934/cpaa.2016.15.1009 +[Abstract](719) +[PDF](416.5KB) A two-dimensional thermoviscoelastic system of Kelvin-Voigt type with strong dependence on temperature is considered. The existence and uniqueness of a global regular solution is proved without small data assumptions. The global existence is proved in two steps. First, a global a priori estimate is derived by applying the theory of anisotropic Sobolev spaces with a mixed norm. Then local existence, proved by the method of successive approximations for a sufficiently small time interval, is extended step by step in time.
By two-dimensional solution we mean that all its quantities depend on two space variables only. Inversion of the spherical Radon transform on spheres through the origin using the regular Radon transform Sunghwan Moon 2016, 15(3): 1029-1039 doi: 10.3934/cpaa.2016.15.1029 +[Abstract](892) +[PDF](4710.4KB) A spherical Radon transform whose integral domain is a sphere has many applications in partial differential equations as well as tomography. This paper is devoted to the spherical Radon transform which assigns to a given function its integrals over the set of spheres passing through the origin. We present a relation between this spherical Radon transform and the regular Radon transform, and we provide a new inversion formula for the spherical Radon transform using this relation. Numerical simulations were performed to demonstrate the suggested algorithm in dimension 2. Bogdanov-Takens bifurcation of codimension 3 in a predator-prey model with constant-yield predator harvesting Jicai Huang, Sanhong Liu, Shigui Ruan and Xinan Zhang 2016, 15(3): 1041-1055 doi: 10.3934/cpaa.2016.15.1041 +[Abstract](1037) +[PDF](2561.0KB) Recently, we (J. Huang, Y. Gong and S. Ruan, Discrete Contin. Dynam. Syst. B 18 (2013), 2101-2121) showed that a Leslie-Gower type predator-prey model with constant-yield predator harvesting has a Bogdanov-Takens singularity (cusp) of codimension 3 for some parameter values. In this paper, we prove analytically that the model undergoes Bogdanov-Takens bifurcation (cusp case) of codimension 3. To confirm the theoretical analysis and results, we also perform numerical simulations for various bifurcation scenarios, including the existence of two limit cycles, the coexistence of a stable homoclinic loop and an unstable limit cycle, supercritical and subcritical Hopf bifurcations, and homoclinic bifurcation of codimension 1. 
Traveling wave solutions in a nonlocal reaction-diffusion population model Bang-Sheng Han and Zhi-Cheng Wang 2016, 15(3): 1057-1076 doi: 10.3934/cpaa.2016.15.1057 +[Abstract](1270) +[PDF](1955.9KB) This paper is concerned with a nonlocal reaction-diffusion equation of the form \begin{eqnarray} \frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+u\left\{ 1+\alpha u-\beta u^{2}-(1+\alpha-\beta)(\phi\ast u) \right\}, \quad (t,x)\in (0,\infty) \times \mathbb{R}, \end{eqnarray} where $\alpha $ and $\beta$ are positive constants, $0<\beta<1+\alpha$. We prove that there exists a number $c^*\geq 2$ such that the equation admits a positive traveling wave solution connecting the zero equilibrium to an unknown positive steady state for each speed $c>c^*$. At the same time, we show that there are no such traveling wave solutions for speeds $c<2$. For sufficiently large speed $c>c^*$, we further show that the steady state is the unique positive equilibrium. Using the lower and upper solutions method, we also establish the existence of monotone traveling wave fronts connecting the zero equilibrium and the positive equilibrium. Finally, for a specific kernel function $\phi(x):=\frac{1}{2\sigma}e^{-\frac{|x|}{\sigma}}$ ($\sigma>0$), by numerical simulations we show that the traveling wave solutions may connect the zero equilibrium to a periodic steady state as $\sigma$ is increased. Furthermore, by the stability analysis we explain why and when a periodic steady state can appear. 2017 Impact Factor: 0.884
WikiPremed Module 4 in the Syllabus: Atomic Theory
Overview of Atomic Theory Atomic Theory is the branch of chemistry concerned with the smallest form of an element that can exist chemically, the atom. Classical physics is helpful for understanding some properties of atoms. However, the range of behaviors of atoms exceeds the descriptive powers of classical physics. To explain the line spectrum of hydrogen, for example, Niels Bohr developed his early form of atomic theory. A more complete picture of the electronic structure of the atom is provided by modern quantum electrodynamics. Atomic Theory on the MCAT Questions directly concerned with Atomic Theory, or, more generally, basic quantum mechanics, do appear with fair regularity on the MCAT, although they tend to be easier questions than they may seem at first glance. More important than the direct appearance of these concepts on the exam is that these initial chapters of Chemistry, dealing with the intrinsic structure of matter, i.e. Atomic Theory, Periodic Properties, and Chemical Bonding, are absolutely crucial for the scientific understanding of the physical and natural world. The rest of General Chemistry, Organic Chemistry, and Biology will make profoundly better sense, and be much more interesting besides, if you take special care to understand the structure of matter. Conceptual Vocabulary Atom: An atom is the smallest particle still characterizing a chemical element. Electron: The electron is a fundamental subatomic particle that carries a negative electric charge.
Proton: The proton is a subatomic particle with an electric charge of one positive fundamental unit, a diameter of about 1.5 fm (femtometers), and a mass about 1836 times that of an electron. Neutron: The neutron is a subatomic particle with no net electric charge and a mass slightly greater than that of a proton. Atomic orbital: An atomic orbital is a mathematical description of the region in which an electron may be found around a single atom. Ion: An ion is an atom or molecule which has lost or gained one or more electrons, making it negatively or positively charged. Isotope: Isotopes are any of the several different forms of an element with nuclei having the same number of protons but different numbers of neutrons. Hydrogen: Hydrogen is a chemical element represented by the symbol H, with atomic number 1. Valence electron: Valence electrons are the electrons contained in the outermost electron shell of an atom. Electron configuration: The electron configuration is the arrangement of electrons in an atom, molecule, or other physical structure such as a crystal. Ernest Rutherford: Ernest Rutherford was a nuclear physicist who pioneered the orbital theory of the atom through his discovery, in the gold foil experiment, of scattering off the nucleus. Alpha particle: Alpha particles consist of two protons and two neutrons bound together into a particle identical to a helium nucleus. Quantum leap: A quantum leap is a change of an electron from one energy state to another within an atom. Emission spectrum: An element's emission spectrum is the relative intensity of electromagnetic radiation of each frequency it emits when it is excited. Hund's rules: Hund's rules are a simple set of rules used to determine the term symbol that corresponds to the ground state of a multi-electron atom. Aufbau principle: The Aufbau principle is used to determine the electron configuration of an atom, molecule or ion, postulating a hypothetical process in which an atom is built up by progressively adding electrons.
Rutherford model: The Rutherford model showed that the plum pudding model of the atom of J. J. Thomson was incorrect, presenting the atom as containing a central charge concentrated in a very small volume in comparison to the rest of the atom.
Quantum mechanics: Quantum mechanics is the study of the relationship between energy quanta and matter, in particular between photons and valence shell electrons.
Uncertainty principle: The Heisenberg uncertainty principle gives a lower bound on the product of the standard deviations of position and momentum for a system, implying that it is impossible for a particle to have an arbitrarily well-defined position and momentum simultaneously.
Quantum state: The quantum state of a system corresponds to a set of numbers that fully describe a quantum system.
Pauli exclusion principle: The Pauli exclusion principle states that no two identical fermions may occupy the same quantum state simultaneously; it explains why matter occupies space exclusively for itself and does not allow other material objects to pass through it, while at the same time allowing light and radiation to pass.
Excited state: An excited state of a system is any quantum state of the system that has a higher energy than the ground state.
Rutherford scattering: Observation of the phenomenon of Rutherford scattering of alpha particles incident on gold foil led to the development of the orbital theory of the atom.
Principal quantum number: The principal quantum number has the greatest correlation to energy of the quantum numbers describing the unique quantum state of an electron in an atom.
Cathode ray: Cathode rays are streams of electrons observed in vacuum tubes.
Oil-drop experiment: The purpose of Robert Millikan and Harvey Fletcher's oil-drop experiment (1909) was to measure the electric charge of the electron.
J. J. Thomson: Sir Joseph John Thomson (1856–1940) was a British scientist credited with the discovery of the electron and of isotopes, and the invention of the mass spectrometer.
Spin quantum number: The spin quantum number is a quantum number that parametrizes the intrinsic angular momentum of a given particle.
Magnetic quantum number: The magnetic quantum number, along with the principal quantum number, the azimuthal quantum number, and the spin quantum number, describes the unique quantum state of an electron.
Schrödinger equation: The Schrödinger equation describes the space- and time-dependence of quantum mechanical systems.
Plum pudding model: The plum pudding model of the atom was proposed by J. J. Thomson, the discoverer of the electron, in 1897, before the discovery of the atomic nucleus.
Azimuthal quantum number: The azimuthal quantum number (or orbital angular momentum quantum number) is the quantum number for an atomic orbital which determines its orbital angular momentum.
Balmer series: The Balmer series describes a series of spectral line emissions of the hydrogen atom produced when electrons in excited states transition to the quantum level described by the principal quantum number n = 2.
John Dalton: John Dalton (1766–1844) was an English chemist, meteorologist, and physicist, best known for his pioneering work in the development of modern atomic theory.
Electron cloud: In the electron cloud analogy, the probability density of an electron, or wavefunction, is described as a region of space around the atomic or molecular nucleus representing the electron's likely location.
Stationary state: A stationary state is an eigenstate of a Hamiltonian, or in other words, a state of definite energy; the corresponding probability density has no time dependence.
Lyman series: The Lyman series is the series of transitions and resulting emission lines of the hydrogen atom as an electron goes from an electron shell of principal quantum number greater than or equal to 2 to the ground state.
Advanced terms that may appear in context in MCAT passages

Rydberg formula: The Rydberg formula is used in atomic physics for describing the wavelengths of spectral lines of many chemical elements.
Zeeman effect: The Zeeman effect is the splitting of a spectral line into several components in the presence of a static magnetic field.
Paschen series: The Paschen series is the series of transitions and resulting emission lines of the hydrogen atom as an electron goes from an electron shell with principal quantum number greater than or equal to 4 to n = 3.
Hyperfine structure: Hyperfine structure is a small perturbation in the energy levels, or spectra, of atoms or molecules due to the magnetic dipole-dipole interaction, arising from the interaction of the nuclear magnetic moment with the magnetic field of the electron.
Brackett series: In atomic physics, the Brackett series describes a series of spectral line emissions of the hydrogen atom that appear when hydrogen atoms' electrons descend to the fourth energy level from a higher level.
Franck-Hertz experiment: In 1914, the Franck-Hertz experiment elegantly supported Niels Bohr's model of the atom by demonstrating that atoms could indeed only absorb specific amounts of energy.
Moseley's law: Moseley's law is an empirical law concerning the characteristic x-rays that are emitted by atoms, which justified the conception of the nuclear model of the atom.
K-alpha: In X-ray spectroscopy, K-alpha emission lines result when an electron transitions to the innermost K shell from a 2p orbital of the second or L shell.
Spin-orbit interaction: The spin-orbit interaction is any interaction of a particle's spin with its motion.
Siegbahn notation: The Siegbahn notation is used in X-ray spectroscopy to name the spectral lines that are characteristic of elements; it was created by Manne Siegbahn.

Creative Commons License
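Several of the terms above (the Rydberg formula and the Lyman, Balmer, Paschen, and Brackett series) reduce to one short calculation. As an informal illustration (my own sketch, not part of the WikiPremed materials), here is the Rydberg formula predicting the H-alpha line of the Balmer series:

```python
# Rydberg formula: 1/lambda = R * (1/n1^2 - 1/n2^2) for a transition n2 -> n1.
R = 1.0973731568e7  # Rydberg constant (infinitely heavy nucleus), in 1/m

def wavelength(n1, n2):
    """Wavelength (meters) of the photon emitted in the n2 -> n1 transition."""
    inv_lam = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1.0 / inv_lam

# Balmer series (n1 = 2): H-alpha is the 3 -> 2 line, about 656 nm (red light).
print(wavelength(2, 3) * 1e9)
# Lyman series (n1 = 1): the 2 -> 1 line falls in the ultraviolet, ~122 nm.
print(wavelength(1, 2) * 1e9)
```

The same function reproduces the Paschen (n1 = 3) and Brackett (n1 = 4) series defined above by changing `n1`.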
Steven Weinberg's mutated density matrices

Loopholes like that are probably not too interesting.

Quantum Mechanics Without State Vectors

Anti-quantum noise as an introduction

Two unsatisfactory features of quantum mechanics have bothered physicists for decades. The first is the difficulty of dealing with measurement. Weinberg continues: The unitary deterministic evolution of the state vector in quantum mechanics cannot convert a definite initial state vector to an ensemble of eigenvectors of the measured quantity with various probabilities. It cannot and indeed, it doesn't and shouldn't. The unitary deterministic evolution is just a time-dependence of the state vector or the density matrix or the operators whose purpose is to calculate the probabilities of measurements. And one must know what the questions are before he demands that a quantum mechanical theory calculate the answers for him. Later, the paper discusses this point in detail and I am sure that as soon as Weinberg gets sober, he agrees with me. By the ensemble, he really means a particular decomposition of a density matrix as\[ \rho = \sum_i p_i \ket{\psi_i}\bra{\psi_i} \] into state vectors \(\ket{\psi_i}\) and probabilities \(p_i\). If we allow the vectors \(\ket{\psi_i}\) not to be orthogonal to one another, \(\rho\) may be expanded in infinitely many ways according to this template. But it's obvious that a particular choice to decompose the density matrix is unphysical. All predictions about the physical system are encoded in the density matrix itself. Weinberg himself admits this trivial fact – if the choice of the decomposition were real, its change could be used to send information superluminally. Weinberg sort of suggests that he discovered that only the density matrix matters (while the particular decomposition does not).
Sorry to say but at least in Prague (and I guess that almost everywhere else), those things were taught as the elementary stuff to the undergraduates and it was correctly claimed that these basic constructions and interpretations of the density matrix were pioneered by the fathers of the density matrix – Felix Bloch, Lev Landau, and John von Neumann. It's very clear why only the density matrix and not some decomposition matters: everything that quantum mechanics may predict is a set of probabilities, and all of them may be calculated via formulae that only depend on \(\rho\) and not on some "finer information" about \(\rho\) such as a decomposition. See my blog post on the density matrix and its classical counterpart. The same comments apply to the state vector. If a physical system is described by the state vector, all predictions – the probabilities – may be calculated from the state vector itself, so e.g. a particular decomposition of the form\[ \ket\psi = \sum_i c_i \ket{\phi_i} \] can't possibly be physical. The decomposition of a pure state to the eigenstates of a particular observable is more directly useful for those who are just planning to measure this observable. But that's it. A decomposition may be more useful than another one; but it cannot be "more right". Now, the density matrix is a more general object than the state vector. The state vector describes the state of a physical system of which we have the "maximum knowledge" allowed by the laws of quantum physics. It typically means that we have measured the eigenvalues of a complete set of commuting observables. Even with this "maximum knowledge", i.e. a state vector, almost all predictions are inevitably just probabilistic. This directly follows from the uncertainty principle. In the case of the maximum knowledge, all the predictions may still be calculated from the density matrix\[ \rho = \ket \psi \bra \psi \] using the same formulae we are using for the most general density matrix.
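A minimal numerical sketch of this elementary point (my own illustration, using numpy): two manifestly different ensembles produce the same density matrix and therefore identical predictions for every measurement.

```python
import numpy as np

# Two different "decompositions" (ensembles) that yield the SAME density matrix.
# Ensemble A: |0> and |1>, each with probability 1/2.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
rho_A = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

# Ensemble B: |+> and |->, each with probability 1/2.
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)
rho_B = 0.5 * np.outer(ketp, ketp.conj()) + 0.5 * np.outer(ketm, ketm.conj())

assert np.allclose(rho_A, rho_B)  # identical density matrices

# Therefore identical predictions: p(Yes) = Tr(rho P) for any projector P.
P_plus = np.outer(ketp, ketp.conj())          # projector onto |+>
p_A = np.trace(rho_A @ P_plus).real
p_B = np.trace(rho_B @ P_plus).real
print(p_A, p_B)   # both 0.5 – the choice of decomposition is unphysical
```

Any observable whatsoever gives the same expectation value for the two ensembles, which is the whole point: the decomposition carries no physical information beyond \(\rho\) itself.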
None of these claims is new in any way. All of them were fully understood in the late 1920s, undergraduate students of quantum mechanics should understand them in the first semester, and Weinberg or other contemporary physicists shouldn't try to take credit for these things. Here we seem to be faced with nothing but bad choices. The Copenhagen interpretation assumes a mysterious division between the microscopic world governed by quantum mechanics and a macroscopic world of apparatus and observers that obeys classical physics. As always in science, we are facing both good choices and bad choices. Quantum mechanics – as explained by the "Copenhagen interpretation" – precisely formulates what the laws of physics may do and may not do, what is physical and what is unphysical. Only results of measurements are real facts and the laws of physics may calculate the (conditional) probabilities that the observations of a certain property or quantity will be something or something else if the results of previous measurements were something or something else. This basic paradigm – that only observations are meaningful and only probabilities may be predicted – isn't open to "interpretations". These conceptual assumptions are the postulates of quantum mechanics – the physical theory, not a direction in philosophers' or artists' babbling – in the same sense as the equivalence of different inertial systems is a postulate of the special theory of relativity. And indeed, every good "modern" way of talking about quantum mechanics – consistent histories, quantum Bayesianism, and perhaps others – agrees with these basic pillars of modern physics. Every wrong way of talking about quantum physics – deBroglie-Bohm theories, many worlds, and Ghirardi-Rimini-Weber or Penrose-Hameroff kindergarten-like-real collapses, among others – tries to deny that physics has irreversibly switched from the classical foundations to new, quantum foundations.
Quantum mechanics – and by that, I mean what the heroic Copenhagen folks discovered and what is studied by those who respect the basic foundations of quantum mechanics, not the wrong, ideologically motivated pseudoscientific delusions about what quantum mechanics "should be" – doesn't introduce any mysterious division between the microscopic world and the macroscopic world. Instead, all objects in the world, whether they are microscopic or macroscopic, obey the laws of quantum mechanics. This fact has been known since the 1920s, too. In fact, people have developed the quantum theory of crystals, conductors, gases (including Fermi-Dirac and Bose-Einstein statistics), paramagnets, diamagnets, ferromagnets, and other macroscopic materials and objects in the late 1920s and early 1930s. It is absolutely ludicrous to suggest that quantum mechanics has any problem with macroscopic objects. What is true is that for large enough objects, classical physics works as well in the sense that it is a qualitatively good approximation of quantum mechanics. In the limit \(\hbar\to 0\), quantum mechanics may generally be approximated by a classical theory. One must still realize that quantum mechanics is always right and always exact while classical physics is only sometimes right and it is only approximately right: it is a limit of quantum mechanics. This limiting procedure has many aspects and implications. 
The existence of the classical limit is needed for nearly classical observers like us to be able to talk about the predicted observations and their probabilities using the same language that is used in classical physics (a fact – something has happened or not – means exactly the same thing in quantum physics as it does in classical physics; probabilities mean the same thing in quantum mechanics and classical physics, too; only the quantum mechanical rules that allow us to deduce that something will probably occur out of the knowledge of something else that has occurred in the past are different than they are in classical physics). Decoherence is a calculable process – caused by sufficiently strong interactions of the object of interest with sufficiently many degrees of freedom in the environment – by which the information about the relative phases encoded in the state vector or, more generally, about the off-diagonal elements of the density matrix in a basis (one that ultimately agrees with – and defines – the common-sense decomposition) is rapidly being lost. That process implies that the probabilities encoded in the density matrix may be approximated by classical ones because the potential for characteristic quantum phenomena in the future – in particular, re-interference of parts of the state vector – has been practically lost. But decoherence doesn't weaken quantum mechanics in any way; it is not an addition to quantum mechanics that has to be made to fix a "bug" in quantum mechanics. Quantum mechanics has no bugs. Decoherence is a consequence of the laws of quantum mechanics that justifies the classical reasoning as an approximation in certain situations. You may still insist on the exact quantum interpretation of all the probabilities, however! None of these insights weakens the fact that quantum mechanics is a perfectly and exactly valid theory of the microscopic world as well as the macroscopic world.
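The mechanism of decoherence may be sketched numerically, too. In this toy model (my own hypothetical setup, not anyone's published calculation), a qubit gets entangled with \(n\) environment qubits and the off-diagonal element of its reduced density matrix – obtained by tracing the environment out – shrinks exponentially with \(n\):

```python
import numpy as np
from functools import reduce

# Toy decoherence: a qubit in (|0>+|1>)/sqrt(2) entangles with n environment
# qubits, giving |0>|E0...E0> + |1>|E1...E1|, where <E1|E0> = cos(theta) < 1.
# Tracing out the environment suppresses the off-diagonal element as cos^n.
theta = 0.4
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
E0 = ket0
E1 = np.array([np.cos(theta), np.sin(theta)], dtype=complex)

def reduced_qubit(n_env):
    env0 = reduce(np.kron, [E0] * n_env, np.array([1.0 + 0j]))
    env1 = reduce(np.kron, [E1] * n_env, np.array([1.0 + 0j]))
    psi = (np.kron(ket0, env0) + np.kron(ket1, env1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    d_env = 2 ** n_env
    # Partial trace over the environment index:
    return np.einsum('aibi->ab', rho.reshape(2, d_env, 2, d_env))

for n in (0, 2, 10):
    print(n, abs(reduced_qubit(n)[0, 1]))   # 0.5 * cos(theta)**n, shrinking
```

The diagonal entries (the probabilities) stay 1/2 throughout; only the capacity for re-interference, encoded in the off-diagonal element, is being lost – exactly the situation described above.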
Niels Bohr and Werner Heisenberg not only knew it but they helped to lay the foundations of the actual modern quantum mechanical theories of many macroscopic objects and phenomena. It is a disrespectful untruth for someone to suggest that something fundamental was missing in the Copenhagen school's description of macroscopic objects. This untruth has almost become the consensus of popular books on quantum mechanics which doesn't make it any less outrageous. Such attacks on the universal validity of quantum mechanics are as outrageous as the attacks against heliocentrism voiced 90 years after the Galileo trial. If instead we take the wave function or state vector seriously as a description of reality, and suppose that it evolves unitarily according to the deterministic time-dependent Schrödinger equation, we are inevitably led to a many-worlds interpretation [2], in which all possible results of any measurement are realized. There exists no sequence of logical arguments with reasonable assumptions that would imply that "all possible results are realized". And there doesn't even exist any "theory" that would describe at least basic features of the world around us in agreement with the paradigm that "all possible results are realized". Every time someone "concludes" that all results are realized, the conclusion follows either from sloppy and circular thinking, cheating, a brain defect, or a combination of these three reasons. Weinberg derives many things as a real perfectionist so it's due to a non-Weinbergian trait when he suddenly claims that something clearly invalid and indefensible may be derived from something else (which is also wrong, but in a different way) – that we are "inevitably led" somewhere. We are surely not. If there were a glimpse of an argument that makes sense, Weinberg would show it instead of screaming words like "inevitably". 
These comments about the non-existence of the "logical derivation" are not too important because the assumption, the claim that "the state vector describes the objective reality", is demonstrably wrong. To avoid both the absurd dualism of the Copenhagen interpretation and the endless creation of inconceivably many branches of history of the many-worlds approach, some physicists adopt an instrumentalist position, giving up on any realistic interpretation of the wave function, and regarding it as only a source of predictions of probabilities, as in the decoherent histories approach [3]. The "instrumentalist" position is exactly what the founders of orthodox quantum mechanics (Copenhagen school: Bohr, Heisenberg, Jordan, Born, Pauli, Dirac, and so on) would defend. They may have called it a "philosophy", and a "positivist" one, not an "instrumentalist" one, and the wordings and sociology may have changed but the physical content is exactly the same. The content is also the same as the content of the slogan "shut up and calculate". You just shouldn't insist on talking about things that cannot be measured. You may talk about them but it is perfectly fine for a theory to declare some or all unmeasurable questions to be physically meaningless and a person criticizing a theory for its not talking about unmeasurable things is simply not acting as a scientist! In my opinion, this is the most correct interpretation of the positivist/instrumentalist/shut-up-and-calculate attitude and the general philosophy of this attitude was really uncovered by Einstein's thoughts about relativity, too. At least Heisenberg would always credit Einstein with bringing this general positivist philosophy to physics. Einstein realized that a theory doesn't have to define the objective meaning of the "simultaneity of two events" because there exists no objective instrumental test of whether or not two distant events occurred simultaneously. 
The only difference between relativity and quantum mechanics is that relativity only declared a small number of things "subjective" (well, observer-dependent) and people got used to it while many things remained Lorentz-invariant. Quantum mechanics makes every and any knowledge fundamentally subjective but the logic why it's fine is qualitatively the same as it is in the case of the simultaneity of events in relativity! The fundamental universal reason why it's fine for a theory to declare things subjective, i.e. observer-dependent, is that an observer is needed to make observations (sure!). So in general, every part of the observation may depend on the observer. There are things that the observers will ultimately agree upon (Will a fast broom be caught in a barn in relativity? Has the Schrödinger's Cat started a nuclear war a minute ago?) but the agreement may be a nontrivial derived fact while the intermediate steps in the derivations may be different for different observers. The agreement between different observers doesn't have to be and isn't due to the fundamentally and exactly objective character of almost everything in the world! The other problem with quantum mechanics arises from entanglement [4]. In an entangled state in ordinary quantum mechanics an intervention in the state vector affecting one part of a system can instantaneously affect the state vector describing a distant isolated part of the system. Entanglement isn't a problem. Entanglement is the generic as well as the most general quantum description of correlation(s) between two subsystems. Almost all states are entangled, i.e. they refuse to be tensor-factorized into independent states of the subsystems. Quantum mechanics would be reduced to "nothing" or would lose its "quantumness" if entanglement were "forbidden". The predictive interpretation of the entanglement is exactly the same as the predictive interpretation of correlations in classical physics.
But quantum mechanics and its entanglement may actually imply predictions that can't follow from any classical model – like simultaneously guaranteed correlations in many pairs of quantities, high correlations violating Bell's inequalities, and other things. But that's simply because it's a different theory. The class of classical theories may have looked large to many people but it's too small and constraining for the correct theories of Nature and the correct theories of Nature are quantum mechanical and refuse to belong to the classical class! It is true that in ordinary quantum mechanics no measurement in one subsystem can reveal what measurement was done in a different isolated subsystem, but the susceptibility of the state vector to instantaneous change from a distance casts doubts on its physical significance. It doesn't just cast doubts. It proves that the state vector – or the density matrix – can't be viewed as an objective feature of reality. Indeed, the wave function may rapidly change and if it were a piece of the objective reality, this "collapse" would be in conflict with relativity. But the actual, correctly interpreted quantum mechanics has no problem with relativity. It may be compatible with relativity and indeed, quantum field theory and string theory are guaranteed to be compatible with relativity. The state vectors or density matrices are data summarizing the subjective knowledge about the physical system. And the "collapse" is nothing else than the subjective process – taking place in the brain – that allows us to replace the original complicated probability amplitudes encoding distributions of all quantities by the conditional probability distributions in which the already known outcomes of measurements are taken into account as facts. I could go on for a while. 
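Incidentally, the statement that no measurement on one subsystem can reveal what was measured on the other is easy to check numerically. In this sketch (my own illustration), Bob's ensemble-averaged post-measurement density matrix is the same whether Alice measures her half of a Bell pair in the \(Z\) basis or the \(X\) basis:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2). Whatever basis Alice measures in,
# Bob's averaged post-measurement density matrix is unchanged, so the
# "instantaneous" update of the state vector cannot transmit any signal.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def bob_after_alice_measures(basis):
    """Average Bob's conditional states over Alice's outcomes (with weights)."""
    rho_B = np.zeros((2, 2), dtype=complex)
    for a in basis:
        P = np.kron(np.outer(a, a.conj()), np.eye(2))  # projector on Alice's side
        blk = (P @ rho @ P).reshape(2, 2, 2, 2)
        rho_B += np.einsum('aiaj->ij', blk)  # trace out Alice; weights included
    return rho_B

z_basis = [np.array([1, 0], complex), np.array([0, 1], complex)]
x_basis = [np.array([1, 1], complex) / np.sqrt(2),
           np.array([1, -1], complex) / np.sqrt(2)]
print(np.allclose(bob_after_alice_measures(z_basis),
                  bob_after_alice_measures(x_basis)))   # True: no signalling
```

Each individual conditional state of Bob does depend on Alice's outcome – that is just the subjective Bayesian updating described above – but the mixture that Bob can actually access without knowing her outcome is the maximally mixed state either way.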
Although Weinberg avoids writing "clearly and atrociously wrong" claims about the foundations of quantum mechanics that others like to produce, I don't really feel comfortable with a single sentence he is writing about the right interpretation of quantum mechanics, its applicability, or its history. Generalizing transformations of density matrices But these provocative comments about the foundations of quantum mechanics are not supposed to be the key content of the preprint, I guess. Instead, Weinberg wants to generalize symmetry transformations that may apply to density matrices. The basic "modest proposal" is that the density matrix is more fundamental than a state vector. Well, I agree with that, kind of. The pure density matrix\[ \rho = \ket\psi \bra\psi \] is a special case of the density matrix corresponding to the "maximum knowledge". Note that the overall phase of the state vector \(\ket\psi\) does not affect the predictions because the phases cancel in \(\rho\) and all predicted probabilities may be calculated using this \(\rho\). While the state vector (pure state) may be viewed as a special example of a density matrix, it is sort of sufficient, too, because the most general density matrix may be written as a mixture (a real linear combination) of projectors \(\ket{\psi_i}\bra{\psi_i}\) onto pure states, with probabilities as the coefficients (see the formula at the top and use it to diagonalize the density matrix; the vectors on the right hand side will be orthogonal to each other in this case). The probabilities predicted from a density matrix are therefore weighted averages of the probabilities predicted from pure states – the weighted averaging is no different than in the corresponding classical calculation. For this reason, the "quantum essence" of the predictions is hiding in the pure states and the density matrices may be – but don't have to be – considered an "engineering addition" added on top of the calculus of pure states.
The probabilistic ignorance from both sources (the unavoidable uncertainty hiding already in the pure states; and the probabilistic, classical-like mixing from the density matrices) gets mixed up and both types of ignorance may be treated together using the natural and simple formalism of density matrices which is why it's totally OK to think that the equations involving density matrices are "fundamental". What does it mean to have a symmetry in quantum mechanics? We mean a linear operator \(U\) that acts as\[ \ket\psi \to U \ket\psi \] Simple. The corresponding bra-vector \(\bra \psi\) is just the Hermitian conjugate so it transforms accordingly:\[ \bra\psi\to \bra\psi U^\dagger. \] Because the density matrix is a combination of \(\ket\psi\bra\psi\) objects, it transforms as\[ \rho \to U \rho U^\dagger. \] Great. If the density matrix as well as operators \(L\) transform by this conjugation and if \(UU^\dagger=U^\dagger U = 1\) i.e. if the transformation is unitary (linear and preserving probabilities), then the expectation values\[ {\rm Tr}(\rho L_1 L_2\dots ) \] are conserved because \(U^\dagger U\) cancel everywhere including the beginning and the end (due to the cyclic property of the trace). All predicted probabilities may be written in this form as well, with \(L_i=P_i\) chosen as some projection operators on the Yes subspace of the Yes/No questions, so the predicted probabilities may be invariant under the symmetry transformations, too. (Consistent histories work with a mild generalization of this formula for probabilities in which we consider probabilities of whole histories i.e. traces \[ {\rm Tr}(P_n\dots P_2 P_1\rho P_1 P_2 \dots P_n) \] involving traces of products of the density matrix and a multiplicative sequence of several projection operators encoding different properties at different times. 
For the different histories to be mutually exclusive in the classical sense, we demand a sort of "orthogonality" consistency condition for these pairs of histories. The consistent histories aren't really quite new; they're just the normal Copenhagen formalism adapted to composite questions about several properties of the system at different times.) The main technical question that Weinberg is addressing in the paper is whether there may be transformations of the density matrix\[ \rho \to g(\rho) \] that are not descendants of transformations of the pure state vectors\[ \ket\psi \to g(\ket\psi ). \] If we assume that the transformations are linear in the matrix entries of \(\rho\), the transformations on the \(N\)-dimensional Hilbert space are pretty much elements of the \(U(N)\) group. But the density matrix has \(N^2\) different real parameters (if we allow the trace to be anything) – it would be \(2N^2\) if the entries were complex but the Hermiticity reduces the number of parameters exactly to one-half. And Weinberg and others could think about transformations that may mix these \(N^2\) entries in more general ways than ways descended from the pure state vector transformations i.e. different from \(\rho\to U\rho U^\dagger\). In other words, he wants to talk to the matrix entries of the density matrix directly, i.e. via \(U(N^2)\) transformations of a sort (or some useful subgroup that acts on the entries differently than the action descended from the \(U(N)\) transformations of the pure states). This is a potentially interesting business of looking for loopholes and exceptional structures. For the time evolution (i.e. the transformation by time-translations), there is a well-known generalization of the "normal" transformation given by the Lindblad equation\[ \dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_m \left( L_m \rho L_m^\dagger - \frac{1}{2} L_m^\dagger L_m \rho - \frac{1}{2} \rho L_m^\dagger L_m \right). \] The commutator term describes the "normal continuous differential evolution" of the density matrix that is derived from the Schrödinger's evolution of the pure state vector.
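A crude Euler integration of a Lindblad equation for a single dephasing qubit (a toy sketch of mine, not taken from Weinberg's paper) shows its characteristic behavior: the trace is preserved while the off-diagonal elements decay irreversibly:

```python
import numpy as np

# Euler integration of a one-qubit Lindblad equation with a dephasing jump
# operator L = sqrt(gamma) * sigma_z (hbar = 1):
#   d(rho)/dt = -i[H, rho] + L rho L+ - (1/2){L+ L, rho}
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sz                     # an arbitrary toy Hamiltonian
L = np.sqrt(0.2) * sz            # dephasing jump operator (rate folded in)
Ld = L.conj().T

rho = 0.5 * np.ones((2, 2), dtype=complex)    # pure state |+><+|
dt = 0.001
for _ in range(5000):            # evolve to t = 5
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    rho = rho + dt * (comm + diss)

print(np.trace(rho).real)        # stays ~1: probabilities are conserved
print(abs(rho[0, 1]))            # decayed from 0.5: irreversible dephasing
```

The dissipative terms cannot be undone by running the equation backwards with another choice of the \(L_m\), which is exactly the semi-group (rather than group) structure discussed below.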
The terms involving the mutually orthogonal operators \(L_m\) are new. Such a form of the time evolution may be obtained for open systems, i.e. from tracing over some environmental degrees of freedom. When you do it in this way, the evolution is naturally irreversible, time-reversal-asymmetric, and that's why Weinberg talks about the semi-group structures (a semi-group is almost like a group but the inverse element isn't required; for example, the group of renormalization group "flows" is really a semi-group because the "integrating out of the degrees of freedom" is irreversible). At a fundamental level, one expects the "normal transformation" of the density matrix to be the only physically kosher one and there are partial theorems that Weinberg acknowledges. But he is looking for loopholes. The Lindblad equation is one such loophole. But he wants to focus on more exotic, exceptional, cheeky ways to access the individual matrix entries of the density matrix. His first provocative example is\[ \rho = \pmatrix {a_1 & b_3& b^*_2\\ b^*_3 & a_2 & b_1\\ b_2 & b^*_1 & a_3 } \] where \((b_1,b_2,b_3)\) are supposed to transform as a complex triplet under an \(SU(3)\) group while \(a_1,a_2,a_3\) are three real singlets, not transforming at all. Note that these \(SU(3)\) transformations are preserving the trace (well, even the individual diagonal entries) and the Hermiticity of the density matrix. This \(SU(3)\) action on the density matrix is different from the descendant of the usual \(U(3)\) action on the pure states in the Hilbert space. Why? Well, under the \(SU(3)\) subgroup of the latter, the density matrix transforms as the adjoint i.e.\[ {\bf 3}\otimes \overline{\bf 3} = {\bf 8} \oplus {\bf 1}. \] On the other hand, Weinberg's proposed mutated density matrix transforms as another 9-dimensional representation, namely\[ {\bf 3}\oplus \overline{\bf 3} \oplus {\bf 1} \oplus {\bf 1} \oplus {\bf 1}. \] There's no adjoint representation here at all. 
OK, in the group-theoretical terminology that Weinberg seems to avoid for unknown reasons, we may ask with him: Are there some interesting physical models in which the density matrix transforms differently than in the adjoint representation of the \(U(N)\) symmetry acting on the Hilbert space (into which all the symmetry transformations are normally embedded)? There are tons of reasons why the usual "adjoint representation option" is the only physically meaningful one, and Weinberg describes several of them. For example, the final portion of the preprint is dedicated to positivity – to the requirement that the general mutated transformations of the density matrix must preserve the non-negativity of its eigenvalues. Also, it's obvious that the "adjoint representation option" is the only possible one if we allow all operators, including all projection operators on arbitrary pure states, and we require the traces \({\rm Tr}(\rho L)\) to be conserved by the symmetry transformations. But it seems to me that Weinberg doesn't articulate the most obvious reason why we want to insist on the "adjoint representation option": the trace \({\rm Tr}(\rho L)\) is contracting the operator \(L\) with something else, so this something else should mathematically transform as an operator, too. Otherwise there are no nontrivial singlets in the bilinear product. The density matrix isn't really an observable – the probabilities can't be measured by a single measurement – but in some sense, it is the "operator of probabilities" (the eigenvalues are probabilities of the corresponding eigenstates, if we choose this basis) and it must transform in the same way as operators. Observables must transform in the adjoint because they are operators, stupid. We're supposed to know how to multiply them, in an associative way, so they're really matrices or generalized, infinite-dimensional matrices of a sort.
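To illustrate why the "adjoint representation option" is special (a numerical sketch of my own, using a deliberately simplified caricature of the entry-wise transformations): conjugation by a unitary is a similarity transformation, so it preserves the spectrum of \(\rho\) and hence positivity, while rotating the phase of the single entry \(b_1\) – keeping \(b_2, b_3\) and the diagonal fixed – is not a similarity transformation and generically shifts the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 density matrix (Hermitian, positive, unit trace)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Adjoint option: rho -> U rho U+ is a similarity, so the spectrum is intact.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
ev = np.sort(np.linalg.eigvalsh(rho))
ev_adj = np.sort(np.linalg.eigvalsh(Q @ rho @ Q.conj().T))
print(np.allclose(ev, ev_adj))      # True: positivity automatically survives

# Entry-wise move: rotate the phase of b1 = rho[1, 2] alone (Hermiticity kept,
# diagonal and the other off-diagonals untouched). Not a similarity.
mut = rho.copy()
mut[1, 2] *= np.exp(1j)
mut[2, 1] = mut[1, 2].conjugate()
ev_mut = np.sort(np.linalg.eigvalsh(mut))
print(np.allclose(ev, ev_mut))      # generically False: the spectrum moved
```

A spectrum that moves around under "symmetry" transformations is free to develop negative eigenvalues for suitable \(\rho\), which is why the positivity constraints at the end of the preprint bite so hard on the non-adjoint options.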
Observables have transformation properties "derived" from the pure states because they may be defined as something that depends on the pure states. Another interesting issue is whether the "algebra of observables" has a preferred representation. In the simple models like non-relativistic quantum mechanics and quantum field theory (and therefore string theory in well-known backgrounds which admit a quantum-field-theory-based description), we're used to the "Yes" answer, at least morally. In the quantum mechanics generalized in the way I still count as quantum mechanics, the answer is demonstrably always "Yes"; the space of pure states is "canonical". Note that this space is large and unifies the spaces with all eigenvalues of whatever you could consider "Casimirs of an algebra". If we talk about quantum fields etc., we are considering all operators and their products, not just a limited set of symmetry generators. One may weaken the requirement that all these things are in the adjoint representation in some way and try to look for exceptional solutions (loopholes) to these weakened requirements, and that's what Weinberg is doing. But at the end, or at the beginning, he should have asked what are the broader rules of the game or the motivation behind this whole business. Of course that if one weakens some postulates sufficiently, i.e. doesn't require the operators and/or density matrix to transform in the adjoint, there will generically be new solutions to the weakened constraints. But we must ask: 1. Are these new solutions physically relevant for our world or worlds that enjoy at least some qualitative kinship to our world (e.g. some highly exotic string vacua)? 2. Are these new solutions mathematically interesting so that these new non-adjoint exceptions are exciting to be studied for mathematical reasons? I would say that if the answers to both questions were "No", then the research of these exceptions would be pretty much worthless. 
If it is not worthless, which of these two questions is answered by a "Yes"? Maybe both questions are answered by a "Yes"? That would be thrilling, indeed. The answer to the first question is more likely to be "No" and the "Yes" answer would be shocking but there may be something new waiting over there. If the answer to the second question is "Yes", then these new solutions could be analogous to the exceptional Lie groups. Someone could think that \(SU(N),SO(N),USp(2N)\) are the only compact simple Lie groups. But there actually exist the exceptional groups \(E_6,E_7,E_8,F_4,G_2\), too. We could have overlooked them but we may find them if we're careful, too. Similarly, all the transformations on the space of density matrices are typically embedded into \(U(N)\) by assuming that the density matrix transforms in the adjoint representation. But it could perhaps transform in another representation of the group, perhaps a different group than \(U(N)\). If such an interesting exception exists, the formulation of the "theory" using these mutated density matrices must forbid pure states. The "theory" would only work in terms of the density matrices. Is that possible? One thing to notice is that it must be impossible to fully identify a pure state by measuring a complete set of commuting observables. Pure states just shouldn't be allowed – otherwise the "theory" would have to tell us how the pure states transform as well, and the density matrices' transformation laws would have to be derived from that. What does it mean that pure states aren't allowed in the theory? It means that there is no "classical-like knowledge" in the theory. Creatures living in that theory can't ever be 100% certain about pretty much anything. Their freedom to measure the observables (general operators/matrices on the Hilbert space) is fundamentally restricted in some universal way.
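The positivity requirement mentioned above rules out most candidate "mutated" transformation laws very quickly. A standard quantum-information example (my choice of illustration, not from Weinberg's paper): the partial transpose is linear, trace-preserving, and Hermiticity-preserving, yet applied to one half of an entangled state it produces a negative eigenvalue, so it cannot act as a physical transformation on density matrices:

```python
import numpy as np

# Density matrix of the Bell state (|00> + |11>)/sqrt(2)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell.conj())

# Partial transpose over the second qubit: view rho as rho[i,j;k,l]
# and swap the second-qubit indices j <-> l.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

print(np.trace(rho_pt).real)             # trace is still 1
print(np.linalg.eigvalsh(rho_pt).min())  # ~ -0.5: positivity is violated
```

The eigenvalues of the partially transposed Bell state are \(\{1/2,1/2,1/2,-1/2\}\), so a theory whose symmetries acted on \(\rho\) this way would assign a negative "probability" to one of the eigenstates.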
If they were certain about something, that they have a pure state, then the pure states would probably be back in the game. So yes, I think that the existence of the classical limit of the usual sort also forces us to admit the usual "adjoint representation option" for the density matrices. Yes, I tend to think that in our world, at least e.g. in an \(n\)-qubit quantum computer embedded into the real world, it's possible to design a (usually complicated, composite) procedure to measure an arbitrary observable (given by any matrix on the \(2^n\)-dimensional Hilbert space). Such a procedure would have to be banned in "Weinberg's realm of loopholes". To avoid direct contradictions with the engineering tests, with quantum computation experts' ability to measure almost anything, the observables that may be measured in Weinberg's realm of loopholes, at least approximately, should be at least slightly "dense" in the space of operators. But can't there be something that is physically "close" to the adjoint representation option but is fundamentally different? Maybe it could describe the world around us, too. Let me tell you a scenario that I can prove to be impossible, but for a while, at least if you are just smoking marijuana, you could think that it is an ingenious idea. Maybe the density matrix transforms as a large irreducible representation of the monster group and physical symmetries we know are only approximated by transformations embedded into the monster group! Again, I can show that our world can't be like this particular proposal – and no world that looks like a "related" vacuum (e.g. other conventional enough vacua of string theory) can behave like that, either.
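The engineering claim above – that an arbitrary observable on an \(n\)-qubit computer is measurable, at least in principle – rests on the fact that any Hermitian \(2^n\times 2^n\) matrix expands in the tensor-product Pauli basis, and each Pauli string can be measured by a standard circuit. A sketch of the expansion for two qubits (my own illustration; the function names are mine):

```python
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def pauli_string(labels):
    """Tensor product of single-qubit Paulis, e.g. 'XZ' -> X (x) Z."""
    P = np.array([[1.0 + 0j]])
    for s in labels:
        P = np.kron(P, PAULIS[s])
    return P

def pauli_expand(H, n):
    """Expand a Hermitian 2^n x 2^n matrix in the (orthogonal) Pauli basis."""
    coeffs = {}
    for labels in product('IXYZ', repeat=n):
        P = pauli_string(labels)
        c = np.trace(P @ H).real / 2**n   # Tr(P_a P_b) = 2^n delta_ab
        if abs(c) > 1e-12:
            coeffs[''.join(labels)] = c
    return coeffs

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = a + a.conj().T                        # an arbitrary 2-qubit observable

coeffs = pauli_expand(H, 2)
rebuilt = sum(c * pauli_string(l) for l, c in coeffs.items())
print(np.allclose(rebuilt, H))            # True
```

Measuring the expectation value of each Pauli string and summing with these (real) coefficients reconstructs \(\langle H\rangle\) for any Hermitian \(H\), which is why the measurable observables are at least dense in the space of operators.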
(The monster group is relevant for the quantum description of all black hole states in the maximally curved \(AdS_3\) background of pure 3D gravity, as Witten has argued, but I think that this theory still allows arbitrary pure states and respects the decomposition of density matrices to pure states; maybe there's some natural way to restrict allowed values of the density matrix to a rational subset, however.) But of course, it is conceivable that some overlooked scenario involving mutated, twisted, and strangely constrained density matrices exists. It is possible that this is a gem that is waiting to be discovered and one must weaken some assumptions or axioms to find it. The existence of a research project is often used to promote its own importance, and that's just illegitimate in the absence of evidence. However, as always, I think it's critically important not to degrade science to the industry of rationalization of wishful thinking, a posteriori justification of some "cool" conclusions that are actually assumptions of the industry. I think that even the question whether there exists an interesting, at least remotely physical, mutated formalism for non-adjoint density matrices is a scientific question that must be approached rationally and scientifically. Scientists must compare the evidence in favor of and against the answer "yes, such an interesting generalization exists". And for me, i.e. as far as I can evaluate the available evidence including the newest paper by Weinberg, the odds are way over 99.7% that such an interesting generalization doesn't exist. I am less certain about this answer than about the claim that "Bohmian, real many worlds, and objective collapses as a rewriting of quantum mechanics will always remain stinky piles of šit" – but I am still sufficiently certain that I would be willing to bet one million crowns on that assuming that the criteria of the bet would be sufficiently "objective".
snail feedback (60):

reader imho said... Your entire diatribe rests on your assumption that the wave function simply encodes probabilities and has nothing further to say about objective reality. This is a reasonable, and perhaps correct, interpretation, but it is by no means indisputably correct. It is perfectly reasonable to believe that the wave function somehow also encodes something about objective reality. While the former is certainly more natural for high energy Physics, the latter is definitely more appropriate for Atomic or Solid State Physics. There are plenty of molecular phenomena that rely on one single electron being at all possible positions simultaneously. If the electron was really only at one particular spot, we would get different answers in subsequent measurements. Hartree-Fock, Exchange-Correlation, etc. etc. Molecules and bubble chambers look very different. I don't think the data supports your confidence.

reader Gene Day said... It is remarkable that a guy as smart as Weinberg can be so confused about QM. Of course the world is quantum mechanical and QM is the only formalism that can ever describe it accurately on any scale, large or small. Classical mechanics may give an adequate answer to a practical problem but so may a simple guess. That does not make it science.

reader Luboš Motl said... "...your assumption that the wave function simply encodes probabilities and has nothing further to say about objective reality." The fact that the wave function or density matrix only encodes our knowledge and no objective reality is not "my assumption". First of all, it's not mine - it's due to the discoverers of quantum mechanics like Heisenberg, Jordan, Born, Bohr, and others. Second, it's not an assumption, it is the *result*, i.e. the major conclusion, of years-long research which was the most important research in science at least in the last 150 years.
Science works by falsification and 90 years ago or so, it simply falsified the idea that the world is described by classical i.e. objective reality. What is falsified can't be unfalsified - falsification is really irreversible. The only thing you may do is to deny science because you find it inconvenient - much like some people deny that the Earth isn't the center of the Universe and that we contain DNA.

reader Tom said... Lubos, thanks for this discourse on QM, very deep and provocative for sure. I’m far from a QM expert but I’m currently rereading Messiah’s two volume QM classic and your essay surely fits seamlessly with his take. One thing I do consider hopelessly muddled is the multi-world interpretations of QM probabilities. I’ve worked with some very good physicists, one actually a student of Weinberg’s, and I’ve been struck by how few of them had any expertise in measure theory, leaving their understanding of probability a little hollow. The (possibly unique) thing about QM is that the “experiment” yields exactly the numerical values of interest (the observable), and this means that the measurable function connecting Nature’s uncertainty to our quantifications is the identity function. Statistically, then, there’s nothing to interpret; you're done, and stuff like Bayesian musings are basically irrelevant. Statistics is totally subordinate to probability and it only arises when the measurable function connecting uncertainty and quantifications is unknown (consider an agricultural experiment where weights or lengths are recorded). The totality of measurable functions yielding finite variance (i.e. L2) form a Hilbert space and almost all statistical methods are simply finite-dimensional projections onto “nice” subspaces admitting convenient parameterizations (i.e. likelihoods). None of this occurs in QM because the experimental results are the values of interest, so, indeed, shut up and calculate.

reader Luboš Motl said... Thanks, Gene, and a good point.
Quantum mechanics with its new features is the only right way to describe any object and phenomenon in the world at the accuracy of 20th century science or later. The efforts to return the description to the straitjacket of classical physics are completely analogous to a farmer's attempt to explain all the observations using the agricultural common sense for which even classical mechanics may be annoyingly abstract and hard. ;-) If one requires a certain level of accuracy, one simply has to get used to some features of the thinking behind the description that would be annoying at a lower level.

reader Luboš Motl said... Good to hear that my take on quantum mechanics agrees with Jesus Christ. ;-) His book is probably among the canonical targets of the "anger of the interpreters". The "literal" many-worlds-advocates' interpretation of probabilities is the same as Walter Wagner's. It's the guy who said that the LHC would probably destroy the world - 50% probability - because it either will or it won't. There are two possibilities and 100/2=50, so each of them has 50%. The host said that he was not sure whether that's how probabilities work. But that's exactly how they work according to the many-worlds-advocates' picture and they don't seem to care at all that the main and only thing that physics predicts - probabilities - is replaced by some universal completely wrong blanket fractions. They don't know how to fix this 50%-50% prediction and they don't seem to care because they don't seem to care about the theory's ability to predict anything.

reader Swine flu said... The only meaningful known way to talk about an electron being "in all possible positions simultaneously" is by accepting the superposition of amplitudes in quantum mechanics. Is that what you mean by "objective reality" of the wave function? If so, superposition is a standard feature of standard quantum mechanics and a particle physicist could no more make a living without it than an atomic physicist :).
reader Dilaton said... Still here .... ;-)? As somebody else said earlier, imho you are still an idiot. Note that for drawing this conclusion, it is enough to read the first sentence of your comment. And as your quantum state seems not to evolve at all, I will not have to read a single character of what you write in the future ...

reader John Archer said... Are you really, really sure there isn't more to it? I mean it's not exactly out and out intuitive like it should be, is it? One needs to be able to see these things. You know, like it's easy to see how billiard balls work. Maybe you could sharpen it up a bit for us?

reader tomandersen said... QM, like every other physical theory, will fail. For example, Newton's gravity is very accurate, yet fundamentally wrong. QM will hold only over some regime. No matter where it fails, it means that all the QM math is nothing more or less fundamental than, say, the classical solution of an N body Newtonian system. Useful to predict the path of a space probe, but useless as an insight into the underpinnings of nature. Weinberg is skeptical of QM, looking for a failure point, which is an admirable thing to do, even if you hold the view that with QM 'this time it's different'.

reader imho said... Hi Lubos, I understand what you are saying. I just don't think it is that cut and dry. The key words you are using are observation and measurement. And yes, in that case, the Uncertainty Principle is relevant and blah blah blah... no need to rehash undergraduate material. And yes, I agree that basis states are simply math. I can solve the Hydrogen atom using Hermite Polynomials and it has no physical meaning whatsoever. The complication(s) come when we do round-about measurements, like the double slit experiment, or measuring energy levels of single molecules. The latter depends on the point like electron being "smeared out" (that's a terrible choice of words... I know).
In fact, for the calculation to work we should interpret the wave function as a charge density over all space. I agree that when we make measurements the wavefunction is clearly a probability. But when we are not looking things are much less clear. This, I believe, is at the root of Weinberg's point.

reader Rehbock said... Weinberg acknowledges as much at page 95 of his Lectures on QM. He says there is nothing absurd about the state vector being only a predictor of probabilities. He goes on though to lament the loss of realism. It is hard to live with no physical states, he says, and that it is consistent but disappointing if the state vector is not more. Sounds like he loves the classical world too much.

reader Rehbock said... I agree we should look for the failure points. I do not agree that QM will fail. Newton is not wrong and does not fail for what it is intended. QM correctly answers certain questions. It will never be wrong when used for what it is intended.

reader Marcel van Velzen said... Weinberg WTF!

reader Gene Day said... Looking for a QM failure point is a fool’s errand, Tom. While Newtonian mechanics works only over a limited range of mass and velocity, QM has no such limits. It is complete and inalterable up to the Planck energy (and thus to the smallest meaningful dimension) and it is scalable to the entire universe. This sounds like pure dogmatism, of course, but it is so much more than that. There really is no conceptual way to modify QM. If you disagree the ball is in your court but you can’t do it either. Those who dislike QM really do feel uncomfortable with it but that has nothing to do with science. They are just uncomfortable; that’s all. Newton’s gravity, by the way, is not “fundamentally” wrong; it is not even a little bit wrong. It is, of course, incomplete as a theory of the world we live in. Over its wide range of application, it yields exactly the same answers as QM. It is not Wrong!

reader Gene Day said...
As a professional solid-state physicist myself I feel I must rebut your assertion that it is reasonable to believe that the wave function encodes something about objective reality. It does not and it is not reasonable to believe it does. There is not a whit of difference between the meaning of the wave function for high energy physics and for solid-state physics. I would also suggest that you avoid the use of the insulting word “diatribe” when assessing Lubos’ blog. He is right and you are wrong.

reader Gene Day said... If you actually understood the double slit experiment you would not be saying such silly things.

reader Gene Day said... When you say QM is not intuitive you are really saying that it is not intuitive to you. After almost sixty years of intimate familiarity I assure you that QM is perfectly intuitive to me. The reason it is unintuitive to you is that you are trying to understand it in terms of the world that you already know. You can’t do that. In a sense you have to start over and forget much of what you already “know” about the world. It’s not easy. I am of at least average intelligence and it took many years for me to get it.

reader Swine flu said... "Looking for a QM failure point is a fool’s errand ..." People will always keep probing the foundations of the accepted fundamental theories. The question in any given case is whether it is being done intelligently.

reader Giotis said... Why do old physicists question the foundations of QM? Because there is nothing else to do. Why don’t younger physicists question the foundations of QM? Because there is much else to do.

reader Curious George said... QM and Newton's gravity do not describe the same phenomena. QM is probably only an approximate description; it will fail some day, but how, I cannot imagine.

reader Anon said... This is kind of spooky.
I was just making pretty much the same point about the interpretations of quantum mechanics here. I even ended up using pretty much the same analogy with special relativity. Honestly, I don't even accept the idea that you have to give up "objective reality." You just have to give up a notion of objective reality that is close to our classical intuition.

reader AJ said... Another dumb question from the peanut gallery: QM says we can only probabilistically predict where the photon is in space. I'm assuming that applies to all dimensions of spacetime, eh? So not only can we not be certain *where* it is, but we can't be certain *when* it is? Is this correct?

reader Luboš Motl said... Hi Anon, I upvoted most of your comments - are you the outer space potato man nine? ;-) Including the comment where you talk about "vitriol" on this blog. It's not vitriol, make a pH test! I agree with you that the Born rule almost certainly won't be derived from something "qualitatively more fundamental". It's as fundamental as it can get. Classical physics was predicting classical quantities such as positions. Quantum mechanics is predicting, i.e. calculating, probabilities. It has to predict something that is measurable, a theory has to say what it is, and both classical physics and quantum mechanics give answers. So they're complete theories/hypotheses. On top of that, quantum mechanics is exactly correct, too.

reader Luboš Motl said... Dear John, I am not "super quite sure" about anything but I am more sure about the fundamental role of probabilities in the exact laws of physics than I am, for example, about the existence of DNA as the carrier of the genetic information etc. Much higher than 99.9999%. I haven't really seen DNA with my own eyes after an experiment that I have fully done, so there may be a conspiracy. On the other hand, I have verified or rediscovered the lines of evidence that make the probabilistic character of the laws of physics inevitable myself.
A careful, rational thinking about the double slit experiment (or a careful thinking about any other simple enough and characteristically quantum setup) is really enough for that.

reader John Archer said... Sorry, Luboš. I was just fooling around. I apologise for wasting your time. Thanks for the reply though. Although I'm a million miles from anything like your level I think I see things broadly the way you do, at least as far as I understand them. I like heroes too. ;) :)

reader Eugene S said... The trouble with clear explanations using simple language, such as the above, is that they can trick the ordinary reader into adopting a delusional belief that one has understood quantum mechanics. The acid test comes when one tries to pass on that knowledge. That happened to me the other day, when I wanted to explain the uncertainty principle to a friend. Even though the friend was patient and receptive, I found myself tripping over words, mixing metaphors, making illogical leaps, and just embarrassing myself. Well, the trouble is not really with the explanation. It is a commendable achievement to write simply and clearly. (Another one who does an excellent job is Johannes Koelman. Hmm, a Czech and a Dutchman -- neither of them a "native speaker" of English -- are better writers than most English-only scientists? How to explain that LOL) The trouble lies in not realizing that to explain something well, it is not sufficient -- not by a long shot -- to consume well-written texts as a reader; one should also invest the time to learn the algebra -- do the math -- and put what one has learned into one's own words... before attempting to teach others.

reader Luboš Motl said... Right, Eugene, digestion or rediscovery of the facts - including the algebra and other forms of the "personal struggle" - is crucial for a true understanding in physics.

reader Nik FromNyc said...
On the CLOUD result: This means that the literal smell of pine forests is forming clouds, as the highly volatile pinene and related terpene molecules rise into the atmosphere and are oxidized by photochemically generated hydroxyl radicals to make polycarboxylic acids that can then form complexes with sulfate ions and act as templates for the condensation of water into cloud droplets. Here is a quick graphic of the chemistry: They ran mass spectroscopy to show how specific small complexes, likely hydrogen-bonded, form between a specific carboxylic-acid-laden molecule and some sulfates inside their big CLOUD machine that simulates the atmosphere.

reader Peter F. said... As far as I am concerned, the Uncertainty Principle can be intuitively extrapolated from fundamental physics into a for some encompassing philosophical tasks required-to-be-applied Tolerance Principled attitude. Lubos and Gene (and people with similar points of view) seem to inadvertently apply this my tenuously derived principle in their understanding of and their attitude to QM and how it should be/is best interpreted. :-> A tolerance principled attitude may have to be deployed in order to facilitate the 'production, sales and consumption' of atheistic enlightenment promoting tools for thought; Or, at least it can in my experience facilitate a science-aligned and conservatively revolutionary accEPTance of What Is and was going on - as seen mainly from a perspective of an effectively philosophy terminating evolutionary psychology type analysis of ourselves.

reader Swine flu said... By probing the foundations I don't just mean theoretical ruminations, but also experimental tests. GR continues to be tested.

reader Eelco Hoogendoorn said... That it 'takes place in the brain', or is entirely subjective, would suggest that different experimenters in the same room could disagree on the measured value of some quantum mechanical observable, which does not appear to be the case.
I suppose (but am not sure) that different observers could indeed observe uncorrelated collapse, but for that to happen the different subspaces that these observers reside in must be entirely orthogonal, which isn't really going to happen in practice. Being placed in a separate cat-box certainly wouldn't be enough to disentangle you from your immediate environment. But being on another planet? Could that allow us to observe the subjectivity of collapse somehow? Being in another lab a few km away? This is what I dislike about the decoherence framework; it sounds plausible, but nobody ever bothers making a quantitative prediction.

reader Marcel van Velzen said... The most fundamental and yet practical attack on the fundamentals of QM was the EPR "paradox". It was shown to be 100% in agreement with QM. This implies that the fundamental axioms of QM must also be 100% correct to avoid contradictions with relativity. First: if one could influence the outcome of an experiment even by a tiny fraction, information could be exchanged faster than the speed of light. Second: relativity forbids the conclusion that one observer causes the collapse of a state before the other observer measures the state (time ordering is not relativistically invariant for space-like separated events), let alone that something happens simultaneously, and it's out of the question that the system was in a definite state to begin with. So all these classical interpretations of QM are insane. Feynman: You don't like it? Go somewhere else!

reader tomandersen said... The path to the next breakthrough in physics is not known, and the people who will blaze the trail have to trust their uncomfortable feelings, as there is nothing else to hold onto, by definition. Physics today holds little respect for crazy theories - which stifles progress. In other words it's easy to publish a 'me too' paper that adds some extra polish to some sub-sub-field, but next to impossible to publish ideas which are almost certainly wrong. Which has more value to society? In the 'old days' communication between physicists was slow (weeks to months) and thus new ideas - which are necessarily rough - could be ruminated on for years. The internet provides us with a lock down of sorts - wrong ideas are quickly crushed by the global group.

reader tomandersen said... One is free to come up with an alternative to special relativity, but publishing that idea or even talking about it would get one blackballed from physics. So it's actually not strange at all. Sad perhaps, but not strange. We are in a place in theoretical physics where the technicians are in charge.

reader tomandersen said... If QM only works in some parameter space, then it's wrong.

reader imho said... No... that's not really correct. Superposition is generally used to describe distinct states with distinct energies. Lubos keeps bringing up the density matrix, which is probably your source of confusion. It's probably cleaner to think about a single state.
Lubos is saying that when looking at a single state the wavefunction is simply describing the probability of finding the particle at some position or momentum. I am saying I agree, but only if we do a measurement. If no measurement is made - like the double slit experiment or interacting electrons in a molecule - then the wavefunction definitely encodes something about reality. The single electron interacts with itself because it is everywhere at once... the wave function is like a charge density or an envelope function for planewaves or some other such thing. It is not that cut and dry... and this is probably what Weinberg is getting at. Have a nice day.

reader Rehbock said... QM answers only certain questions. It cannot answer every question. It gives answers some have trouble accepting. QM also cannot answer questions that are nonsense. If that makes it wrong, fine.

reader Luboš Motl said... There are good reasons why there are no non-technicians (when it comes to relativity) at professional departments, aren't there? There are no non-technicians at Microsoft, either.

reader Swine flu said... Physics doesn't shy away from wild ideas today, some might even say it supports too many, although that may be hard to quantify. What "technicians" will most justifiably never be interested in is people who simply lack the background to do any physics at all.

reader Luboš Motl said... I brought up the density matrix as a more fundamental and natural thing than the pure state because it was really the point of Weinberg's paper, too.

reader Anon said... Yes, I'm outerspacepotatoman9. I probably should have just used the same handle here but I didn't think about it. Also, from my perspective your anger is righteous anger at the prevalence of all of this unnecessary confusion! Unfortunately, a lot of the people on the other side of this issue don't see it that way so I usually warn them so they don't get all defensive.
Your blog posts provide some of the most thorough discussion of these points though. Anyway, I completely agree with everything you said here. It's funny, when I have these conversations people who like many worlds often say something like "What you are suggesting is such a small difference from many worlds, wouldn't it just be simpler to accept many worlds and forget about the wave function collapse altogether?" Well yes, it would be simpler. The only problem is that it doesn't actually work!

reader Luboš Motl said... Amen to that! It's strange to see that they often realize that it doesn't work but this detail doesn't seem to be important for their beliefs about what's right.

reader Gene Day said... I have never spoken of how QM should be interpreted, Peter; you are confused. People get off on the wrong foot by even trying to interpret it. To really understand QM one must not view it through some other looking glass such as classical mechanics or, even worse, "objective reality". The only way to see the actual face of QM is from the inside, by actually using it to solve real problems. If you go down that road for a few years the light will begin to dawn.

reader Gene Day said... Wrong! The path to the next breakthrough in physics is known, Tom. It is called string theory, the only viable quantum theory of gravity. As David Gross has said many times, string theory is not wrong! Get used to it.

reader Gene Day said... Quantum theory cannot be an approximation, George. It is exact and complete.

reader Curious George said... So was Newton's physics - until electromagnetism came in.

reader Swine flu said... I wasn't talking about the superposition of states with different energies or about the density matrix; it was more basic.
I was responding to your statement, "There are plenty of molecular phenomen[a] that rely on one single electron being at all possible positions simultaneously," which you stated in support of the idea that "It is perfectly reasonable to believe that the wave function somehow also encodes something about objective reality." My point was that the fact that the electron is in all possible positions simultaneously is encoded in the wavefunction being a superposition of basis states |x> localized at different points in space, and that no additional "reality" of the wavefunction is needed to explain molecular phenomena. I also stated that particle physics depends on this aspect of quantum mechanics just as much as molecular physics, since you seemed to suggest that molecular physics needed something more from quantum mechanics ("reality") than particle physics.

reader markusm said... Well, if one assumes that the environment has an infinite number of degrees of freedom (see also: and/or QFT) and that a perfect isolation of a QM system from it is never possible, from the operationalist's point of view, the Lindblad equation is as "fundamental" as it can be - and one has to live with a non-unitary time evolution FAPP.

reader Dilaton said... Nice Review ... ;-)

reader markusm said... I agree that in certain cases the additional Lindblad terms are negligible. But I regard the coupling with the environment as the most general case. If you are in a situation where the additional terms are relevant and you want to do "better" using unitary evolution only, you have to specify the initial conditions for the system + the environment, which may be practically or even in principle impossible. But even if one could sustain unitarity for some time, the very act of observation opens up the system to the observer and to the environment and one is back to Lindblad. " ... the non-unitarity you are talking about does not appear fundamental." - What does it mean for something to be "fundamental" in physics?
P.S. I'm not an expert on these things - just some thoughts.

reader kashyap vasavada said...

Very nice review of Weinberg’s paper, Luboš! Since Weinberg has done a lot of brilliant work before, I am inclined to give him some benefit of the doubt. It looks like this is unfinished work in progress. He is searching for alternatives. He firmly believes that there is a measurement-interpretation problem and does not believe in any known "interpretations” as he mentioned in his book on quantum mechanics. Then it is hard to imagine that he can overcome that by just changing it to an equivalent formalism of density matrix and not mentioning wave functions and state vectors. That would be like sweeping the dirt under a carpet. The dirt is still there, though not visible! So it may very well be that he is searching for alternatives to the usual Schrodinger equation, starting with novel forms of density matrix.

reader Gene Day said...

Your analogy with the early understanding of Newtonian physics sounds reasonable but it is deeply flawed. The absolute limits to what can ever be observed (anything that can possibly have a causal relation to us) in terms of size, mass, energy or time were completely unknown in those early days. Now these limits are understood and it is clear that modern physics covers the entirety of observation space. Since science is only about what can be observed the game is over and there is zero wiggle room for fudging or for finer approximations to string theory. There is no room for adjusting string theory; it is completely rigid. The same is true for ordinary QM, which is the correct theory when gravity is omitted from the formalism.

reader TM said...

That's right! I've always liked to point out that it is classical mechanics that needs interpretations and not quantum mechanics.

reader QsaTheory said...
@Swine flu (your name is a killer!), Do you believe that the wavefunction describes an infinite number of particles (in this reality) representing an electron in the hydrogen atom, as an example? If so I agree 110%.

reader Peter F. said...

Hi Gene, Thanks for your patient and polite comment and for inadvertently prompting me to try to explain what I meant, differently this time! In contrast to your interpretation of my attitude to QM, I see myself as having a complete non-expert's respect and appreciation of your pragmatic attitude to QM. :-) Here is my ~point~ put slightly differently: As matter of principle, a person who strives [whether with the inertia of habit or the even greater 'inertia' of a by CURSES insidiously co-motivated addictive habit] to with increasing resolution perceive (with or without mathematics and with ordinary or extremely high intelligence) any aspect of What Is/was going on will eventually require some sort of 'tolerance principled' attitude to not end up performing or being preoccupied with an "exercise in futility" (or with producing something even worse). Besides, I wonder if one could not rather legitimately blame a deficient capacity for self-observation, specifically not being able to observe oneself being in need of adopting a ~tolerance principled~ attitude, for a lot of what you and especially Lumo more easily perceive than can explain as essentially silly attempts to revise or try to discover some discrete and immutable mathematical truths behind the observed phenomena described and predicted by means of quantum mechanical probabilities.

reader Dan said...

OK, we shall look past the anti-qm rant because it is Weinberg... Could his thoughts be relevant to theories known not to have a classical limit (N=2 d=6, ...)?

reader Robert Collier said...
Fifty Shades of Water: Benchmarking DFT Functionals against Experimental Data for Ionic Crystalline Hydrates

Authors: Getachew Kebede, Peter Broqvist, Anders Eriksson, and Kersti Hermansson

We propose that crystalline ionic hydrates constitute a valuable resource for benchmarking theoretical methods for aqueous ionic systems. Many such structures are known from the experimental literature, and they contain a large variety of water–water and ion–water structural motifs. Here we have collected a data set (CRYSTALWATER50) of 50 structurally unique “in-crystal” water molecules, involved in close to 100 nonequivalent O–H···O hydrogen bonds. A dozen well-known DFT functionals were benchmarked with respect to their ability to describe these experimental structures and their OH vibrational frequencies. We find that the PBE, RPBE-D3, and optPBE-vdW methods give the best H-bond distances and that anharmonic OH frequencies generated from B3LYP//optPBE-vdW energy scans outperform the other methods, i.e., here we performed B3LYP energy scans along the OH stretching coordinate while the rest of the structure was kept fixed at the optPBE-vdW-optimized positions.

J. Chem. Theory Comput. 15, p. 584, 2019
DOI: 10.1021/acs.jctc.8b00423

Dynamical and Structural Characterization of the Adsorption of Fluorinated Alkane Chains onto CeO2

Authors: Giovanni Barcaro, Luca Sementa, Susanna Monti, Vincenzo Carravetta, Peter Broqvist, Jolla Kullgren, and Kersti Hermansson

The widespread use of ceria-based materials and the need to design suitable strategies to prepare eco-friendly CeO2 supports for effective catalytic screening induced us to extend our computational multiscale protocol to the modeling of the hybrid organic/oxide interface between prototypical fluorinated linear alkane chains (polyethylene-like oligomers) and low-index ceria surfaces.
The combination of quantum chemistry calculations and classical reactive molecular dynamics simulations provides a comprehensive picture of the interface and discloses, at the atomic level, the main causes of the typical adsorption modes. The data show that at room temperature a moderate percentage of fluorine atoms (around 25%) can enhance the interaction of the organic chains by anchoring pivotal fluorines strongly to the channels of the underlying ceria (100) surface, whereas an excessive content can remarkably reduce this interaction because of the repulsion between fluorine and the negatively charged oxygen of the surface.

J. Phys. Chem. C, 122 (41), 2018, p. 23405

Indirect-to-Direct Band Gap Transition of Si Nanosheets: Effect of Biaxial Strain

Authors: Byung-Hyun Kim, Mina Park, Gyubong Kim, Kersti Hermansson, Peter Broqvist, Heon-Jin Choi, and Kwang-Ryeol Lee

The effect of biaxial strain on the band structure of two-dimensional silicon nanosheets (Si NSs) with (111), (110), and (001) exposed surfaces was investigated by means of density functional theory calculations. For all the considered Si NSs, an indirect-to-direct band gap transition occurs as the lateral dimensions of the Si NSs increase; that is, increasing lateral biaxial strain from compressive to tensile always enhances the direct band gap characteristics. Further analysis revealed the mechanism of the transition, which is caused by preferential shifts of the conduction band edge at a specific k-point because of their bond characteristics. Our results explain a photoluminescence result of the (111) Si NSs [U. Kim et al., ACS Nano 2011, 5, 2176–2181] in terms of the plausible tensile strain imposed in the unoxidized inner layer by surface oxidation.

J. Phys. Chem. C, 122 (27), 2018, p. 15297

Screened hybrid functionals applied to ceria: Effect of Fock exchange

Authors: Dou Du, Matthew J.
Wolf, Kersti Hermansson, and Peter Broqvist

We investigate how the redox properties of ceria are affected by the fraction of Fock exchange in screened HSE06-based hybrid density functionals, and we compare with PBE+U results, and with experiments when available. We find that using 15% Fock exchange yields a good compromise with respect to structure, electronic structure, and calculated reduction energies, and represents a significant improvement over the PBE+U results. We also investigate the possibility of using a computationally cheaper HSE06//PBE+U protocol consisting of structure optimization with PBE+U, a subsequent lattice parameter rescaling step, and, finally, a single-point full hybrid calculation. We find that such a composite computational protocol works very well and yields results in close agreement with those where HSE06 was used also for the structure optimization.

Phys. Rev. B, Volume 97, Page 235203

Unravelling in-situ formation of highly active mixed metal oxide CuInO2 nanoparticles during CO2 electroreduction

Authors: Roghayeh Imani, Zhen Qiu, Reza Younesi, Meysam Pazoki, Daniel L.A. Fernandes, Pavlin D. Mitev, Tomas Edvinsson, Haining Tian

Technologies and catalysts for converting carbon dioxide (CO2) into immobile products are of high interest to minimize greenhouse effects. Copper(I) is a promising catalytically active state of copper but is hampered by its inherent instability in comparison to copper(II) or copper(0). Here, we report the stabilization of the catalytically active copper(I) state through the formation of mixed metal oxide CuInO2 nanoparticles during CO2 electroreduction. Our results show that incorporating a nanoporous Sn:In2O3 interlayer into the Cu2O pre-catalyst system leads to the formation of CuInO2 nanoparticles with remarkably higher activity for CO2 electroreduction at lower overpotential in comparison to the conventional Cu nanoparticles derived from Cu2O alone.
Operando Raman spectroelectrochemistry is employed to monitor in situ the process of nanoparticle formation during the electrocatalytic process. The experimental data are corroborated by DFT calculations to provide insight into the electro-formation of this type of Cu-based mixed metal oxide catalyst during CO2 electroreduction, and a formation mechanism via copper ion diffusion across the substrate is suggested.

Nano Energy, Volume 49, July 2018, Pages 40-50

Maximally resolved anharmonic OH vibrational spectrum of the water/ZnO(10-10) interface from a high-dimensional neural network potential

Authors: Vanessa Quaranta, Matti Hellström, Jörg Behler, Jolla Kullgren, Pavlin D. Mitev, and Kersti Hermansson

Unraveling the atomistic details of solid/liquid interfaces, e.g., by means of vibrational spectroscopy, is of vital importance in numerous applications, from electrochemistry to heterogeneous catalysis. Water-oxide interfaces represent a formidable challenge because a large variety of molecular and dissociated water species are present at the surface. Here, we present a comprehensive theoretical analysis of the anharmonic OH stretching vibrations at the water/ZnO(10-10) interface as a prototypical case. Molecular dynamics simulations employing a reactive high-dimensional neural network potential based on density functional theory calculations have been used to sample the interfacial structures. In the second step, one-dimensional potential energy curves have been generated for a large number of configurations to solve the nuclear Schrödinger equation.
We find that (i) the ZnO surface gives rise to OH frequency shifts up to a distance of about 4 Å from the surface; (ii) the spectrum contains a number of overlapping signals arising from different chemical species, with the frequencies decreasing in the order ν(adsorbed hydroxide) > ν(non-adsorbed water) > ν(surface hydroxide) > ν(adsorbed water); and (iii) the stretching frequencies are strongly influenced by the hydrogen bond pattern of these interfacial species. Finally, we have been able to identify substantial correlations between the stretching frequencies and hydrogen bond lengths for all species.

The Journal of Chemical Physics, 148, 241720 (2018)

Hydrogen-Bond Relations for Surface OH Species

Authors: Getachew G. Kebede, Pavlin D. Mitev, Peter Broqvist, Jolla Kullgren, and Kersti Hermansson

This paper concerns thin water films and their hydrogen-bond patterns on ionic surfaces. As far as we are aware, this is the first time H-bond correlations for surface water and hydroxide species are presented in the literature, while hydrogen-bond relations in the solid state have been scrutinized for at least five decades. Our data set, which was derived using density functional theory, consists of 116 unique surface OH groups (intact water molecules as well as hydroxides) on MgO(001), CaO(001) and NaCl(001), covering the whole range from strong to weak to no H-bonds. The intact surface water molecules are found to always be redshifted with respect to the gas-phase water OH vibrational frequency, whereas the surface hydroxide groups are either redshifted (OsH) or blueshifted (OHf) compared to the gas-phase OH frequency. The surface H-bond relations are compared with the traditional relations for bulk crystals. We find that the “ν(OH) vs R(H···O)” correlation curve for surface water does not coincide with the solid-state curve: it is redshifted by about 200 cm–1 or more.
The intact water molecules and hydroxide groups on the ionic surfaces essentially follow the same H-bond correlation curve.

J. Phys. Chem. C, 2018, 122 (9), pp 4849–4858
DOI: 10.1021/acs.jpcc.7b10981

Vacancy dipole interactions and the correlation with monovalent cation dependent ion movement in lead halide perovskite solar cell materials

Authors: M. Pazoki, M. J. Wolf, T. Edvinsson and J. Kullgren

Ion migration has recently been suggested to play critical roles in the operation of lead halide perovskite solar cells. However, so far there has been no systematic investigation of how the monovalent cation affects vacancy formation, ion migration and the associated hysteresis effect. Here, we present density functional theory calculations of all possible ion migration barriers in perovskite materials with different monovalent cations, i.e. CH3NH3PbI3, CH(NH2)2PbI3 and CsPbI3 in the tetragonal phase, and investigate vacancy-cation interactions within the framework of the possible ion migrations. The most relevant ion movement (iodide) is investigated in greater detail, and the corresponding local structural changes and the relationships with the local ionic dielectric response, Stark effect and current-voltage hysteresis are discussed. We observe a correlation between the energy barrier for iodine migration and the magnitude of the dipole of the monovalent cation. From the data, we suggest a vacancy-dipole interaction mechanism by which a monovalent cation with a larger dipole can respond to and screen local electric fields more effectively. The stronger response of the high-dipole monovalent cation to the vacancy electrostatic potential in turn leads to smaller local structural changes within the neighbouring octahedra. The presented data reveal a detailed picture of the ion movement, vacancy-dipole interactions and the consequent local structural changes, which contain fundamental information about the photo-physics and dielectric response of the material.
Nano Energy, 38, 2017, pp. 537-543
DOI: 10.1016/j.nanoen.2017.06.024

DFT-based Monte Carlo Simulations of Impurity Clustering at CeO2(111)

Authors: Jolla Kullgren, Matthew J. Wolf, Pavlin D. Mitev, Kersti Hermansson and Wim J. Briels

The interplay between energetics and entropy in determining defect distributions at ceria(111) is studied using a combination of DFT+U and lattice Monte Carlo simulations. Our main example is fluorine impurities, although we also present preliminary results for surface hydroxyl groups. A simple classical force-field model was constructed from a training set of DFT+U data for all symmetrically inequivalent (F)n(Ce3+)n nearest-neighbor clusters with n = 2 or 3. Our fitted model reproduces the DFT energies well. We find that for an impurity concentration of 15% at 600 K, straight and hooked linear fluorine clusters are surprisingly abundant, with similarities to experimental STM images from the literature. We also find that with increasing temperature the fluorine cluster sizes show a transition from being governed by an attractive potential to being governed by a repulsive potential as a consequence of the increasing importance of the entropy of the Ce3+ ions. The distributions of surface hydroxyl groups are noticeably different.

J. Phys. Chem. C, 2017, 121 (28), pp 15127–15134
DOI: 10.1021/acs.jpcc.7b00299

Electronic structure of organic–inorganic lanthanide iodide perovskite solar cell materials

Authors: M. Pazoki, A. Röckert, M. J. Wolf, R. Imani, T. Edvinsson, and J. Kullgren

The emergence of highly efficient lead halide perovskite solar cell materials makes the exploration and engineering of new lead-free compounds very interesting, both from a fundamental perspective and for potential use as new materials in solar cell devices. Herein we present the electronic structure of several lanthanide (La) based materials in the metalorganic halide perovskite family not explored before.
Our estimated bandgaps for the lanthanide (Eu, Dy, Tm, Yb) perovskite compounds are in the range of 2.0–3.2 eV, showing the possibility of implementation as photo-absorbers in tandem solar cell configurations or as charge-separating materials. We have estimated the typical effective masses of the electrons and holes for MALaI3 (La = Eu, Dy, Tm, Yb) to be in the range of 0.3–0.5 and 0.97–4.0 units of the free electron mass, respectively. We have shown that, within our DFT+U approach, the localized f-electrons make the dominant electronic contribution to the states at the top of the valence band and thus have a strong impact on the photo-physical properties of the lanthanide perovskites. Therefore, the main valence-to-conduction-band electronic transition for MAEuI3 is based on inner-shell f-electron localized states within the periodic framework of the perovskite crystal, so that the optical absorption onset would be rather inert with respect to quantum confinement effects. The close similarity of the crystal structure and lattice constant of the lanthanide perovskites to those of the widely studied CH3NH3PbI3 perovskite is a prominent advantage for implementing these compounds in tandem cells or charge-selective contacts in PV applications together with lead iodide perovskite devices.

J. Mater. Chem. A, 5, 2017, pp. 23131-23138
DOI: 10.1039/C7TA07716E
11.1: Gaussian Basis Sets

A basis set in theoretical and computational chemistry is a set of functions (called basis functions) which are combined in linear combinations (generally as part of a quantum chemical calculation) to create molecular orbitals. For convenience these functions are typically atomic orbitals centered on atoms, but can theoretically be any function; plane waves are frequently used in materials calculations.

The Variational Method and Basis Sets

To describe the electronic states of molecules, we construct wavefunctions for the electronic states by using molecular orbitals. These wavefunctions are approximate solutions to the Schrödinger equation. A mathematical function for a molecular orbital is constructed, \(\psi _i\), as a linear combination of other functions, \(\varphi _j\), which are called basis functions because they provide the basis for representing the molecular orbital.

\[\psi _i = \sum _j c_{ij} \varphi _j \label {10.8}\]

The variational method is used to find values for parameters in the basis functions and for the constant coefficients in the linear combination that optimize these functions, i.e. make them as good as possible. The criterion for quality in the variational method is making the ground state energy of the molecule as low as possible. Here and in the rest of this chapter, the following notation is used: \(\sigma\) is a general spin function (can be either \(\alpha\) or \(\beta\)), \(\varphi \) is the basis function (this usually represents an atomic orbital), \(\psi\) is a molecular orbital, and \(\Psi\) is the electronic state wavefunction (representing a single Slater determinant or linear combination of Slater determinants).
The ultimate goal is a mathematical description of electrons in molecules that enables chemists and other scientists to develop a deep understanding of chemical bonding and reactivity, to calculate properties of molecules, and to make predictions based on these calculations. For example, an active area of research in industry involves calculating changes in chemical properties of pharmaceutical drugs as a result of changes in chemical structure.

Selecting the ab initio model for a chemical system almost always involves a trade-off between accuracy and computational cost. More accurate methods and larger basis sets make jobs run longer.

In modern computational chemistry, quantum chemical calculations are typically performed using a finite set of basis functions. In these cases, the wavefunctions of the system in question are represented as vectors, the components of which correspond to coefficients in a linear combination of the basis functions in the basis set used. The molecular spin-orbitals that are used in the Slater determinant usually are expressed as a linear combination of some chosen functions, which are called basis functions. This set of functions is called the basis set. The fact that one function can be represented by a linear combination of other functions is a general property. All that is necessary is that the basis functions span-the-space, which means that the functions must form a complete set and must be describing the same thing. For example, spherical harmonics cannot be used to describe a hydrogen atom radial function because they do not involve the distance r, but they can be used to describe the angular properties of anything in three-dimensional space. This span-the-space property of functions is just like the corresponding property of vectors.
The unit vectors \((\overrightarrow {x}, \overrightarrow {y}, \overrightarrow {z})\) describe points in space and form a complete set since any position in space can be specified by a linear combination of these three unit vectors. These unit vectors also could be called basis vectors.

Exercise \(\PageIndex{1}\): "Spanning the Space"

Explain why the unit vectors \((\overrightarrow {x}, \overrightarrow {y})\) do not form a complete set to describe your (three-dimensional) classroom.

Just as we discussed for atoms, parameters in the basis functions and the coefficients in the linear combination can be optimized in accord with the Variational Principle to produce a self-consistent field (SCF) for the electrons. This optimization means that the ground state energy calculated with the wavefunction is minimized with respect to variation of the parameters and coefficients defining the function. As a result, that ground state energy is larger than the exact energy, but is the best value that can be obtained with that wavefunction.

Slater Type Orbitals (STOs)

Intuitively one might select hydrogenic atomic orbitals as the basis set for molecular orbitals. After all, molecules are composed of atoms, and hydrogenic orbitals describe atoms exactly if the electron-electron interactions are neglected. At a better level of approximation, the nuclear charge that appears in these functions can be used as a variational parameter to account for the shielding effects due to the electron-electron interactions. Also, the use of atomic orbitals allows us to interpret molecular properties and charge distributions in terms of atomic properties and charges, which is very appealing since we picture molecules as composed of atoms. As described in the previous chapter, calculations with hydrogenic functions were not very efficient, so other basis functions, Slater-type atomic orbitals (STOs), were invented.
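The "spanning the space" idea from Exercise \(\PageIndex{1}\) can be checked numerically. The sketch below (an illustration added here, not part of the original text) expands a target vector in a basis by least squares; with the full \(\{x, y, z\}\) basis the residual vanishes, while with only \(\{x, y\}\) the unrepresentable z-component is left over as the residual:

```python
import numpy as np

# Basis vectors as columns: the full 3D basis vs. the incomplete {x, y} set
full_basis = np.eye(3)           # x, y, z unit vectors
xy_basis = np.eye(3)[:, :2]      # x and y only

target = np.array([1.0, 2.0, 3.0])   # an arbitrary point in the "classroom"

# Expansion coefficients c that minimize |B c - target|
c_full, *_ = np.linalg.lstsq(full_basis, target, rcond=None)
c_xy, *_ = np.linalg.lstsq(xy_basis, target, rcond=None)

residual_full = np.linalg.norm(full_basis @ c_full - target)
residual_xy = np.linalg.norm(xy_basis @ c_xy - target)

print(residual_full)   # 0: {x, y, z} spans 3D space
print(residual_xy)     # 3: the z-component (3.0) cannot be represented
```

The same least-squares picture applies to the basis-set expansion of a molecular orbital in Equation \(\ref{10.8}\): the expansion can only be exact if the basis functions span the relevant function space.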
A minimal basis set of STOs for a molecule includes only those STOs that would be occupied by electrons in the atoms forming the molecule. A larger basis set, however, improves the accuracy of the calculations by providing more variable parameters to produce a better approximate wavefunction, but at the expense of increased computational time. STOs have the following radial part (the spherical harmonic functions are used to describe the angular part):

\[R(r) = N r^{n − 1} e^{−\zeta r}\]

• \(n\) is a natural number that plays the role of the principal quantum number, n = 1, 2, ...,
• \(N\) is a normalizing constant,
• \(r\) is the distance of the electron from the atomic nucleus, and
• \(\zeta\) is a constant related to the effective charge of the nucleus, the nuclear charge being partly shielded by electrons. Historically, the effective nuclear charge was estimated by Slater's rules.

Double-Zeta Basis Sets

One can use more than one STO to represent one atomic orbital, as shown in Equation \(\ref{10.11}\), and rather than doing a nonlinear variational calculation to optimize each \(\zeta\) value, use two STOs with different \(\zeta\) values. The linear variation calculation then will produce the coefficients (\(C_1\) and \(C_2\)) for these two functions in the linear combination that best describes the charge distribution in the molecule (for the ground state). The function with the large zeta accounts for charge near the nucleus, while the function with the smaller zeta accounts for the charge distribution at larger values of the distance from the nucleus. This expanded basis set is called a double-zeta basis set.

\[R_{2s} (r) = C_1 r e^{-\zeta _1 r} + C_2 r e^{-\zeta _2 r} \label {10.11}\]

The use of double-zeta functions in basis sets is especially important because without them orbitals of the same type are constrained to be identical even though in the molecule they may be chemically inequivalent.
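The division of labor between the two exponentials in Equation \(\ref{10.11}\) can be made concrete with a small numerical sketch. The values \(\zeta_1 = 2.0\), \(\zeta_2 = 0.8\) and equal coefficients are illustrative choices for this example only, not optimized parameters:

```python
import math

# Two-term double-zeta radial function: R(r) = C1 r e^{-z1 r} + C2 r e^{-z2 r}
Z1, Z2 = 2.0, 0.8    # "tight" (large-zeta) and "diffuse" (small-zeta) exponents
C1, C2 = 1.0, 1.0    # equal mixing, for simplicity

def share_of_tight_term(r):
    """Fraction of R(r) contributed by the large-zeta (tight) exponential."""
    t1 = C1 * r * math.exp(-Z1 * r)
    t2 = C2 * r * math.exp(-Z2 * r)
    return t1 / (t1 + t2)

# The tight term carries about half of R(r) near the nucleus (with equal
# coefficients) but almost none of it in the tail.
for r in (0.2, 1.0, 4.0):
    print(f"r = {r}: tight-term share = {share_of_tight_term(r):.1%}")
```

With equal coefficients the two terms contribute equally as \(r \rightarrow 0\), but the large-zeta term's share of \(R(r)\) decays like \(e^{-(\zeta_1 - \zeta_2) r}\); this is the quantitative sense in which the tight function shapes the density near the nucleus while the diffuse function controls the tail.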
For example, in acetylene the \(p_z\) orbital along the internuclear axis is in a quite different chemical environment and is being used to account for quite different bonding than the \(p_x\) and \(p_y\) orbitals. With a double-zeta basis set the \(p_z\) orbital is not constrained to be the same size as the \(p_x\) and \(p_y\) orbitals.

Example \(\PageIndex{1}\)

Explain why the \(p_x\), \(p_y\), and \(p_z\) orbitals in a molecule might be constrained to be the same in a single-zeta basis set calculation, and how the use of a double-zeta basis set would allow the \(p_x\), \(p_y\), and \(p_z\) orbitals to differ.

Gaussian Orbitals

Although any basis set that sufficiently spans the space of the electron distribution could be used, the concept of Molecular Orbitals as Linear Combinations of Atomic Orbitals (LCAO) suggests a very natural set of basis functions: AO-type functions centered on each nucleus. One obvious choice would be hydrogen-like AOs; in practice, Slater-type orbitals (STOs) are used to describe the radial component of these functions. However, the computation of the integrals is greatly simplified by using Gaussian-type orbitals (GTOs) as basis functions. While the STO basis set was an improvement over hydrogenic orbitals in terms of computational efficiency, representing the STOs with Gaussian functions produced further improvements that were needed to accurately describe molecules. A Gaussian basis function has the form shown in Equation \(\ref{10.12}\). Note that in all the basis sets, only the radial part of the orbital changes, and the spherical harmonic functions are used in all of them to describe the angular part of the orbital.

\[ G_{nlm} (r, \theta , \psi ) = N_n \underbrace{r^{n-1} e^{-\alpha r^2}}_{\text{radial part}} \underbrace{Y^m_l (\theta, \psi)}_{\text{angular part}} \label{10.12}\]

Unfortunately, Gaussian functions do not match the shape of an atomic orbital very well.
In particular, they are flat rather than steep near the atomic nucleus at \(r = 0\), and they fall off more rapidly at large values of \(r\) (Figure \(\PageIndex{1}\)).

Figure \(\PageIndex{1}\): Radial Dependence of Slater and Gaussian Basis Functions. Image used with permission.

To compensate for this problem, each STO is replaced with a number of Gaussian functions with different values for the exponential parameter. These Gaussian functions form a primitive Gaussian basis set. Linear combinations of the primitive Gaussians are formed to approximate the radial part of an STO. This linear combination is not optimized further in the energy variational calculation, but rather is frozen and treated as a single function. The linear combination of primitive Gaussian functions is called a contracted Gaussian function. Although more functions and more integrals now are part of the calculation, the integrals involving Gaussian functions are quicker to compute than those involving exponentials, so there is a net gain in the efficiency of the calculation.

Figure \(\PageIndex{2}\): To better represent the cusp in the electron density at the nuclei, GTO basis sets are constructed from fixed linear combinations of Gaussian functions, contracted GTOs (CGTOs).

The earliest CGTO basis sets were constructed from N GTOs that best fit the desired STO. These are called STO-NG basis sets. Gaussian basis sets are identified by abbreviations such as N-MPG*. N is the number of Gaussian primitives used for each inner-shell orbital. The hyphen indicates a split-basis set where the valence orbitals are double zeta. The M indicates the number of primitives that form the large-zeta function (for the inner valence region), and P indicates the number that form the small-zeta function (for the outer valence region). G identifies the set as being Gaussian. The addition of an asterisk to this notation means that a single set of Gaussian 3d polarization functions (discussed elsewhere) is included.
A double asterisk means that a single set of Gaussian 2p functions is included for each hydrogen atom. For example, 3G means each STO is represented by a linear combination of three primitive Gaussian functions. 6-31G means each inner shell (1s orbital) STO is a linear combination of 6 primitives and each valence shell STO is split into an inner and outer part (double zeta) using 3 and 1 primitive Gaussians, respectively (see Table \(\PageIndex{1}\) for other examples).

Table \(\PageIndex{1}\): Different Gaussian Basis Sets

Example \(\PageIndex{2}\)

The 1s Slater-type orbital \(S_1 (r) = \sqrt {4 \zeta _1 ^3}\, e^{-\zeta _1 r}\) with \(\zeta _1 = 1.24 \) is represented as a sum of three primitive Gaussian functions,

\[S_G (r) = \sum _{j=1}^3 C_j e^{-\alpha _j r^2} \nonumber \]

This sum is the contracted Gaussian function for the STO.

1. Make plots of the STO and the contracted Gaussian function on the same graph so they can be compared easily. All distances should be in units of the Bohr radius. Use the following values for the coefficients, \(C_j\), and the exponential parameters, \(\alpha _j\):

j = 1: \(\alpha _1 = 0.1688\), \(C_1 = 0.4\)
j = 2: \(\alpha _2 = 0.6239\), \(C_2 = 0.7\)
j = 3: \(\alpha _3 = 3.425\), \(C_3 = 1.3\)

2. Change the values of the coefficients and exponential parameters to see if a better fit can be obtained.

3. Comment on the ability of a linear combination of Gaussian functions to accurately describe a STO.

When molecular calculations are performed, it is common to use a basis composed of a finite number of atomic orbitals (Equation \(\ref{10.8}\)), centered at each atomic nucleus within the molecule (linear combination of atomic orbitals ansatz). These atomic orbitals are well described with Slater-type orbitals (STOs), as STOs decay exponentially with distance from the nuclei, accurately describing the long-range overlap between atoms, and reach a maximum at zero, well describing the charge and spin at the nucleus.
STOs are computationally difficult to work with, and Frank Boys later realized that these Slater-type orbitals could in turn be approximated as linear combinations of Gaussian orbitals. Because it is easier to calculate overlap and other integrals with Gaussian basis functions, this led to huge computational savings.
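Example \(\PageIndex{2}\) above can be set up directly in code. The sketch below assumes the normalized 1s STO radial form \(2 \zeta^{3/2} e^{-\zeta r}\) and uses the exercise's trial contraction coefficients; it exhibits the two generic failures of a contracted Gaussian noted in the text, a flat (cusp-less) shape at \(r = 0\) and a too-rapid fall-off in the tail:

```python
import math

ZETA = 1.24                              # STO exponent from the exercise
ALPHAS = (0.1688, 0.6239, 3.425)         # primitive Gaussian exponents
COEFFS = (0.4, 0.7, 1.3)                 # trial contraction coefficients

def sto(r):
    """Normalized 1s Slater-type orbital, S1(r) = 2 zeta^{3/2} exp(-zeta r)."""
    return 2.0 * ZETA ** 1.5 * math.exp(-ZETA * r)

def contracted_gaussian(r):
    """S_G(r) = sum_j C_j exp(-alpha_j r^2), frozen as a single function."""
    return sum(c * math.exp(-a * r * r) for c, a in zip(COEFFS, ALPHAS))

for r in (0.0, 0.5, 1.0, 2.0, 7.0):      # distances in Bohr radii
    print(f"r = {r}: STO = {sto(r):.5f}, contracted GTO = {contracted_gaussian(r):.5f}")
```

Running the loop shows the contracted function undershooting the STO at \(r = 0\) (2.400 versus about 2.762, the missing cusp) and again at large \(r\) (the Gaussian tail decays as \(e^{-\alpha r^2}\)), while tracking it reasonably well at intermediate distances; part 2 of the exercise asks you to improve the fit by varying \(C_j\) and \(\alpha_j\).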
Journal of Molecular Modeling, Volume 17, Issue 9, pp 2325–2336

RNA and protein 3D structure modeling: similarities and differences

Authors: Kristian Rother, Magdalena Rother, Michał Boniecki, Tomasz Puton, Janusz M. Bujnicki

Open Access Original Paper. Keywords: Assessment, Prediction, RNA, Structure, Tertiary

RNAs and proteins are linear polymers composed of a limited set of building blocks (ribonucleotide and amino acid residues, respectively). Despite the fundamental chemical differences of these building blocks, the higher order structure of RNA and protein molecules can be described with similar terms (Fig. 1). Each residue comprises two parts: one is common to the given type of a macromolecule and is used to form a continuous “backbone”; the other is variable and forms a “sidechain”. The order of building blocks held together by covalent bonds is called the primary structure, the local conformation of the chain stabilized mostly by hydrogen bonds is the secondary structure, while the path of the chain in three dimensions resulting from various long-range interactions is the tertiary structure.

Fig. 1 Hierarchical structure of proteins and RNAs

Most protein and RNA molecules, or at least their parts/domains, fold spontaneously into complex three-dimensional shapes [1, 2]. From a global perspective, there are a number of common principles that govern the folding of proteins and RNA molecules, but there are also important differences. The initial events leading to compaction of an RNA chain are driven by neutralization of the negative charge on the phosphate groups by counterions, whereas the compaction of a protein polypeptide chain is driven by burial of hydrophobic side-chains [3]. Besides, secondary structure in proteins is formed owing to hydrogen bonding of the main chains, while in RNA it involves hydrogen bonding between the side-chains. The structures of biological macromolecules provide a framework for their biological functions [4].
These functions typically involve interactions with various molecules in the cell, including other proteins and RNAs. The importance of structure for the function of non-protein-coding RNAs (e.g., tRNAs, ribozymes or riboswitches) has been widely accepted [5]. Recently, it has been shown that protein-coding regions of mRNAs are also highly structured, suggesting an additional role in the regulation of translation [6, 7]. However, it is also known that many proteins and RNAs undergo conformational transitions or exhibit functionally relevant structural disorder [8, 9]. Thus, the function of both proteins and RNAs depends on the three-dimensional structure and dynamics, which in turn is encoded in the linear sequence of individual molecules [10]. It should also be mentioned that mature, functional RNA and protein sequences can be modified/edited compared to the "raw" sequence information encoded in the DNA. Apart from removal or addition of sequence fragments, individual residues can be chemically altered by dedicated enzymes. Posttranscriptional modifications in RNAs and posttranslational modifications in proteins extend the basic alphabets of four nucleotides and 20 amino acids with many additional 'letters' that influence the structure and function of molecules that contain them [11, 12]. The knowledge of structure is very important for the understanding of RNA and protein function. However, experimental sequence determination of genes and entire genomes, from which the sequences of RNAs and proteins can be reliably inferred, is much cheaper and simpler than experimental determination of structures. As a consequence, the rate of macromolecular structure determination lags behind the rate of determination of new sequences, and the gap between the number of known structures and known sequences continues to widen. It is unlikely that structures will be solved experimentally for all protein and RNA molecules.
Understanding of the "1D-3D code" provides an opportunity for theoretical prediction of protein and RNA structures from their sequences. This has proven to be a very difficult task; however, a few successful strategies have been identified, which now allow for reasonably accurate (practically useful) predictions of 3D structures. Most methods have been developed initially for proteins only. However, recent developments in the RNA structural bioinformatics field suggest that essentially the same principles may be applicable for modeling of those RNAs that exhibit relatively stable 3D structures.

Classification of methods for macromolecular 3D structure prediction

Methods for 3D structure prediction can be divided into those based on "first principles", i.e., the fundamental laws of physics that govern the process of folding, and those based on information about other structures, available in databases. In particular, knowledge-based methods can be used to predict macromolecular structures by modeling the process of evolution (Fig. 2).

Fig. 2 Template-dependent and template-free approaches to prediction of macromolecular structures, exemplified by the modeling of evolution and folding, respectively

Physics-based 3D structure prediction

One approach to 3D structure prediction, sometimes termed ab initio prediction, is based on the thermodynamic hypothesis formulated by Anfinsen, according to which the native structure of a protein corresponds to the global minimum of the free energy of the system comprising the macromolecule [13]. Accordingly, physics-based methods model the process of folding by simulating the conformational changes of a macromolecule while it searches for the state of minimal free energy (review: [14]). The "score" of each conformation is calculated as the true physical energy based on the interactions within the macromolecule and between the macromolecule and the solvent [15].
While in physics the term ab initio is often used to refer to finding a set of wave functions and energies by solving the Schrödinger equation without external parameters, the physics-based methods described here offer a simplified approach to calculating the energy. The functional form and parameter sets used to describe the potential energy of a system are called a force field. There exist a number of software packages for simulation of protein folding in atomic detail; they typically implement various versions of molecular dynamics (MD) and Monte Carlo (MC) protocols for searching the conformational space, and force fields such as AMBER [16], CHARMM [17] or GROMOS [18] to calculate the energy. In order to facilitate the identification of the native state as the one of the lowest energy, the energy landscape that describes the relationship between the distance from the native-like conformation and the energy should have a funnel-like shape (review: [19]). More explicitly, when plotting the energy of models versus a structural difference between the models and the native structure (e.g., expressed as the root-mean-square deviation (RMSD) between pairs of equivalent atoms in optimally superimposed structures), there should be a funnel-shaped tip at the bottom left corner of the plot. In particular, the native structure should exhibit the lowest energy, and the farther a given conformation is from the native structure, the higher its energy should be. The prediction of the native structure is easier if this relationship between the value and variability of energies and deviation from the native structure holds across the entire range of possible conformations.
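To make the search procedure concrete, here is a minimal sketch of Metropolis Monte Carlo sampling on a toy one-dimensional "energy landscape". The energy function, temperature, and step size are illustrative assumptions of ours, not parameters of AMBER, CHARMM, or GROMOS:

```python
import math
import random

def toy_energy(x):
    """Toy 1D 'force field': a rugged landscape with its global minimum near x = 2."""
    return (x - 2.0) ** 2 + 0.5 * math.sin(8.0 * x)

def metropolis_search(steps=20000, temperature=0.5, seed=0):
    """Metropolis Monte Carlo: propose small random moves, always accept
    downhill moves, and accept uphill moves with probability exp(-dE/T)."""
    rng = random.Random(seed)
    x = rng.uniform(-5.0, 5.0)
    e = toy_energy(x)
    best_x, best_e = x, e
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.3)      # small random perturbation
        e_new = toy_energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new              # move accepted
            if e < best_e:
                best_x, best_e = x, e        # track lowest-energy state seen
    return best_x, best_e

x_min, e_min = metropolis_search()
print(f"lowest-energy state found: x = {x_min:.2f}, E = {e_min:.2f}")
```

Real force fields couple thousands of degrees of freedom, so the same accept/reject logic must contend with a vastly larger and more rugged search space.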
Figure 3 presents diagrams comparing an 'ideal' (from a practical point of view of macromolecular structure prediction) relationship between the energy of models and their distance from the native structure and a 'real life' example of such a distribution obtained from a folding simulation of an immunoglobulin light chain-binding domain of protein L (2ptl in the Protein Data Bank), carried out using the REFINER method [20]. One conceptual difference between the energy function in a folding simulation and the real physical energy becomes apparent in this and similar plots: in reality, the energy differences between the folded and unfolded state are very small, while in practice the effective discrimination of native-like models from non-native-like ones requires maximization of the energy difference.

Fig. 3 A funnel-like relationship between the value of a function for scoring of structural models and their deviation from the native structure (expressed e.g., in root mean square deviation of superimposable atoms or in some other similarity measure): (a) a hypothetical "ideal" function that maximizes the discrimination between native, native-like and non-native conformations. The minimal value of energy as well as the spread of energy values for conformations at a particular distance from the native structure (corresponding to the global energy minimum) increase monotonically with the increasing distance, so conformations closer to the native structure on average exhibit lower energies than those farther away. Here a random sample of points that fulfill this relationship is shown. (b) results of folding simulations of an immunoglobulin light chain-binding domain of protein L (2ptl in the Protein Data Bank), carried out using the REFINER method [20], which uses a Monte Carlo sampling scheme and a statistical potential

The ab initio approach is plagued by serious problems.
In particular, a full-atom structural model of a macromolecule has a large number of degrees of freedom (3·Natoms − 6 for a nonlinear molecule), which makes the search space enormous, and the function with which to calculate the energy of the system is very complex. As a result, both the sampling and energy calculations are very costly in terms of computational power required. Typically, the free energy landscape is extremely rugged, i.e., it possesses multiple local minima, and it is essentially impossible to perform an exhaustive evaluation of all these minima to identify the one with the globally lowest value. Further, some of the components of the free energy function (e.g., the entropy) are very difficult to calculate, and may not be inferred accurately for large molecules. For these reasons the use of ab initio methods is limited to very small molecules. Thus far, most of the reported successful all-atom folding simulations have been for very small proteins, such as the 20-residue "Trp-cage" miniprotein [21], and they rarely exceed the threshold of one microsecond. Further, even extended simulations, such as a ten microsecond simulation performed for a fast-folding WW domain [22], may not sample a native-like conformation. Hence, folding simulations have not yet matured to the state of a reliable method for protein 3D structure prediction. One important problem for algorithms that deal with macromolecular structures is the representation of coordinates. Cartesian coordinates (three numbers representing distances for each atom) are used to represent the final model in the common PDB file format, but they may be impractical at some stages of modeling, as they utilize 3·Natoms degrees of freedom. To increase the efficiency of computations, the system may be transformed into an internal coordinate system and/or bond lengths and angles may be restricted to idealized values.
For instance, in torsion angle dynamics (e.g., as implemented in the program DYANA [23]), torsion angles are used instead of Cartesian coordinates, so the only degrees of freedom are rotations about single bonds. A biopolymer can be represented as a tree structure consisting of n + 1 atoms connected by n rotatable bonds of fixed length. The tree structure starts from a base, typically at the N-terminus of the polypeptide chain, and terminates with "leaves" at the ends of the side-chains and at the C-terminus. The conformation of the molecule is then uniquely specified by the values of all torsion angles, which may be allowed to assume only discrete values. The conversion of a model from internal coordinates to a Cartesian representation can be achieved, e.g., with the NeRF algorithm [24], which requires three internal coordinates per atom: a bond length, a bond (planar) angle, and a torsion angle. Another approach to reduce the number of degrees of freedom is to use coarse-grained models, which treat groups of atoms as single interaction centers, so that a smaller number of elements and interactions need to be considered (review: [25]). Actually, the first simulation of protein folding reported in the literature used a simplified chain and time-averaged forces to fold bovine pancreatic trypsin inhibitor from an open-chain conformation into a folded conformation close to that of the native molecule [26]. Another advantage of coarse-grained modeling is that the force field derived for the united interaction centers yields a much smoother energy surface than that for the all-atom energy function. As a result, many local energy minima, in which the system could become trapped during the simulation, are removed. However, it must be emphasized that simplification of the model and the energy function usually leads to reduced accuracy.
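The NeRF placement step mentioned above can be sketched in pure Python; the helper names and the tuple-based vectors are our own simplifications:

```python
import math

def _sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def _dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
def _cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])
def _unit(u):
    n = math.sqrt(_dot(u, u))
    return (u[0] / n, u[1] / n, u[2] / n)

def place_atom(a, b, c, bond, angle, torsion):
    """NeRF step: place atom D given three predecessors A, B, C and the
    internal coordinates of D (bond length C-D, bond angle B-C-D, torsion
    A-B-C-D). Angles are in radians."""
    # position of D in the local frame anchored at C
    d = (-bond * math.cos(angle),
          bond * math.sin(angle) * math.cos(torsion),
          bond * math.sin(angle) * math.sin(torsion))
    bc = _unit(_sub(c, b))                 # unit vector B -> C
    n = _unit(_cross(_sub(b, a), bc))      # normal of the A-B-C plane
    m = _cross(n, bc)                      # completes the orthonormal frame
    # rotate the local vector into the global frame and translate to C
    return (c[0] + d[0] * bc[0] + d[1] * m[0] + d[2] * n[0],
            c[1] + d[0] * bc[1] + d[1] * m[1] + d[2] * n[1],
            c[2] + d[0] * bc[2] + d[1] * m[2] + d[2] * n[2])
```

Walking along the chain and calling place_atom once per atom rebuilds the full Cartesian structure from bond lengths, bond angles, and torsions.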
As of today, it is not practical to expect that a folding simulation for a macromolecule comprising more than 100 residues would confidently predict a native-like structure with a correctly estimated energy. Contemporary methods for coarse-grained protein structure prediction can be exemplified by UNRES [27], which represents side chains by ellipsoids, and the peptide bonds by united atoms located midway between two consecutive Cα atoms. The only degrees of freedom in the continuous space are the bond and torsion pseudoangles defined between the Cα atoms. The free energy function includes terms for interactions between the side chain centers, steric repulsion between side chains and peptide group centers, and electrostatic interactions between peptide groups. Local conformational propensities of a polypeptide are described by torsional and angle-bending potentials. Multibody interactions, which are the most important for reproducing regular secondary structure elements, are described by higher order terms. Since the same basic laws of physics apply to all types of molecules, one can postulate that analogous methods should work for RNA as well. As mentioned earlier, RNA folding relies on the modulation of electrostatic repulsion by counterions, while protein folding relies on the formation of a hydrophobic core, and secondary structure formation requires hydrogen bonding via protein main-chain or RNA side-chain functional groups, respectively. Nonetheless, computational methods developed to study protein folding have been successfully used to simulate RNA folding (review: [28]). Examples of all-atom simulations with general-purpose software packages such as AMBER or CHARMM include the folding of small RNA hairpins [29, 30], the analysis of H-bond stability in the anticodon loop of tRNA(Asp) [31] or modeling the interaction of "kissing loops" in the dimerization initiation site (DIS) of HIV [32].
Molecular dynamics simulations restrained by experimental data have also been used to model the conformational transitions of large macromolecular complexes involving both RNAs and proteins, such as the ribosome (review: [33]). The modeling of nucleic acid structures can also take advantage of the use of local coordinate systems and/or coarse-graining to reduce the number of variables in the system. For instance, the 3DNA program [34] constructs reference coordinate frames around bases and base pairs, while using idealized values for bond lengths and bond angles. The treatment of nucleobase moieties as rigid bodies allows one to drastically reduce the number of degrees of freedom. Further, a total of three angular and three translational variables are enough to describe the relative orientation of two bases (with parameters called propeller, buckle, opening, shear, stretch, and stagger) or two base pairs (twist, tilt, roll, shift, slide, and rise). Because these parameters are independent of the Cartesian system, they allow two structures to be compared directly, without spatial superposition of coordinates. The miniCarlo program for energy minimizations and Monte Carlo simulations of nucleic acid structures applies a very similar scheme. It uses helical parameters that determine the relative position of bases in a pair and the relative position of base pairs, pseudorotation parameters of sugars that determine the internal geometry of sugar moieties, glycosidic angles that determine the orientation of sugars relative to the bases, and torsion angles that determine the orientation of methyl groups in thymines and hydroxyl groups in riboses [35]. The commercially distributed program junction minimization of nucleic acids (JUMNA) uses a reduced coordinate approach to gain roughly an order of magnitude in the number of variables necessary to model a nucleic acid fragment [36].
One of the first applications of the coarse-grained approach for RNA 3D structure modeling involved the refinement of low-resolution structures of ribosomal RNAs with restraints from experimental data and a representation with pseudoatoms at different levels of detail, from a single pseudoatom per helix to a single pseudoatom for each nucleotide [37]. More recently, a number of new methods have been developed that allow for coarse-grained folding simulations with or without experimental data. YUP [38] and NAST [39] represent RNA by just one pseudoatom per nucleotide residue: phosphate and C3′, respectively. Vfold [40] and DMD [41] represent RNA by three pseudoatoms per residue, while HiRE-RNA [42] uses six or seven pseudoatoms for purine or pyrimidine residues, respectively. For bonded interactions (bonds, angles, and dihedrals) all these methods use parameters derived from a database of known RNA structures. For non-bonded interactions, the energy terms differ. NAST does not actually use a full energy function; instead, it generates plausible 3D structures based on restraints supplied by the user (e.g., on secondary structure or tertiary contacts). Vfold and DMD use experimentally tabulated energy values [43] to parameterize (in different ways) base-pairing and base-stacking, as well as to estimate loop entropy. DMD also uses an explicit representation of hydrogen bonding to enforce base-pair formation and an additional term for phosphate-phosphate repulsion. With the simplifications introduced, both methods are capable of folding RNAs up to 100 residues long. Vfold has been recently used to successfully model pseudoknotted RNA structures and to estimate the conformational entropy for stem-loop tertiary contacts [44]. HiRE-RNA is an excellent example of a protein-like coarse-grained method for RNA modeling. It uses an implicit solvent force field similar to the protein OPEP force field [42]. It is expressed as a sum of local (bonded), nonbonded, and hydrogen-bond terms.
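The reduction step shared by all of these methods, collapsing a group of atoms into a single interaction center, can be sketched as follows. The three-bead layout and the atom-to-bead assignment are illustrative assumptions and do not reproduce the exact bead definitions of YUP, NAST, Vfold, DMD, or HiRE-RNA:

```python
# Reduce an all-atom nucleotide to three pseudoatoms: one for the phosphate
# group, one for the sugar, and one for the base (bead definitions are
# illustrative, not taken from any published coarse-grained model).
BEAD_ATOMS = {
    "phosphate": ["P", "OP1", "OP2"],
    "sugar":     ["C1'", "C2'", "C3'", "C4'", "O4'"],
    "base":      ["N1", "C2", "N3", "C4", "C5", "C6"],
}

def coarse_grain(atom_coords):
    """atom_coords: dict mapping atom name -> (x, y, z).
    Returns one centroid per bead, skipping beads with no atoms present."""
    beads = {}
    for bead, names in BEAD_ATOMS.items():
        pts = [atom_coords[n] for n in names if n in atom_coords]
        if pts:
            k = len(pts)
            beads[bead] = tuple(sum(p[i] for p in pts) / k for i in range(3))
    return beads
```

A full coarse-grained pipeline would then derive bonded and non-bonded parameters for the beads from a database of known structures, as described above.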
All bonded interactions are described by harmonic terms. For non-bonded interactions, a Lennard-Jones potential is used, modified to mimic some of the excluded volume and screening effects that get lost in coarse-graining. In particular, it implements a repulsive power law at short distances, an exponential tail at large distances to account for the extra screening provided by the atoms that are missing from the coarse-grained representation, and a narrower well varying with the equilibrium distance. The HiRE-RNA energy model does not take electrostatic interactions into account explicitly, except for the repulsion between the phosphates. Base pairing is modeled by hydrogen bonding interactions consisting of 2-, 3-, and 4-body terms. The interactions taken into account include canonical A-U and G-C Watson-Crick pairs, A-U Hoogsteen pairs, G-U wobble pairs, as well as the relatively rare A-C, A-G, and U-C pairs. All the HiRE-RNA parameters were derived from a statistical study of 220 structures in the Nucleic Acids Database (NDB) and subsequently refined through the analysis of long molecular dynamics simulations for a poly-A molecule of 15 nucleotides. Tests of HiRE-RNA on two structures (22 and 36 nucleotides long) solved by NMR have demonstrated that the method is capable of sampling the native state; however, the selection of the most native-like conformation remains a challenge.

Evolution-based structure prediction

At the other end of the methodological spectrum there are approaches based on the principles of evolution. After experimental determination of the first handful of protein structures it became clear that evolutionarily related (homologous) proteins usually retain the same three-dimensional fold (i.e., the 3D arrangement and connectivity of secondary structure elements) despite the accumulation of divergent mutations [45].
It was also found that structural divergence is much slower than sequence divergence, although these two features are strongly correlated. Thus, methods have been developed to align the sequence of one protein (a target) to the structure of another protein (a template), model the overall fold of the target based on that of the template, and infer how the target structure will change due to substitutions, insertions and deletions (indels), as compared with the template (reviews: [46, 47]). The process of identification of a structurally related template has been termed "fold recognition", while the transformation of atomic coordinates of the template structure into the target has been typically referred to as "homology modeling" or "comparative modeling" (the latter takes into account the possibility that the template is not homologous, as long as it is structurally similar to the target). This entire approach has been termed "template-based modeling". Comparative analyses of evolutionarily related RNAs (see e.g., [48]) revealed patterns of conservation that are analogous to those observed in proteins: the secondary and tertiary structure is usually more conserved than sequence, and core regions important to stability and function tend to be more conserved at all levels. In general, it can be stated that in families of homologous RNAs the 3D fold is often conserved, and alignment of sequences and secondary structure patterns can be used to recognize such structural conservation, enabling template-based modeling. Template-based modeling has two main limitations. First, the modeling of the "target" structure starts with another known structure of a structurally similar molecule to be used as a "template", hence if such a structure does not exist or cannot be identified reliably, then the model cannot be built or almost certainly will be completely wrong.
Further, each element of the target sequence must be aligned to the structurally equivalent element in the template sequence/structure. In particular, homologous residues should be aligned to each other. High sequence similarity is not a prerequisite for template-based modeling. In fact, it is possible to create good homology models even if the sequence identity between the target and the template is zero [49]. However, on the average, molecules with higher sequence similarity tend to exhibit more similar structures [45]. Besides, for highly similar sequences it is generally easier to generate a correct alignment (to find homologous residues between the target and the template). Therefore, using templates with higher sequence similarity is recommended. Apart from sequence divergence, structures may also change because of environmental factors, e.g., the binding of other molecules or the composition of the solution (salt, pH) [50]. This is particularly true for RNA, where the binding of metal ions is often a key factor enabling a stable tertiary structure [51]. It is generally the responsibility of the user of the homology modeling software to choose a template, whose biological state corresponds best to the desired biological state of the target to be modeled. With an incorrectly chosen template and/or wrong alignment, the model will always be very far from the native structure. These limitations concern all homology modeling tools, as templates and alignments are always necessary in this approach [52]. Finally, it must be noted that like proteins, homologous RNAs need not retain the same structure in all details. Topological variability (e.g., preserving the overall 3D structure while changing the pattern of secondary structure elements) has been observed in many protein families [53], as well as in RNA families, with one prominent example being the RNA subunit of RNase P from Escherichia coli (type A) and Bacillus subtilis (type B) [54]. 
However, methods for automated template-based modeling of macromolecules assume that the overall fold is conserved between the template and the target, and special intervention of the user is usually required to model topological variations. Two major approaches have been developed for template-based modeling of proteins. One is to model the structure by copying the coordinates of the template (both the backbone and the side-chains) in the aligned core regions, which can also include "averaging" over coordinates of multiple templates. The variable regions are modeled by taking fragments with similar sequence from a database of previously observed loops, followed by replacing the mutated side-chains with rotamers that satisfy the stereochemical criteria, and (optionally) limited energy optimization, as implemented in SWISS-MODEL [55]. The other possibility is to use torsion angles and interatomic distances from the aligned regions of the template(s) as modeling restraints, which permits the use of information from multiple structures. This approach also requires the idealization of geometry and packing of the entire chain by satisfying stereochemical constraints derived from the database of protein structures, as implemented in MODELLER [56]. The same two types of methods have been recently proposed for RNA modeling. The Altman group has implemented a MODELLER-like strategy in RNABuilder, an extension of the SimTK molecular modeling toolkit [57]. The force field consists of forces and torques which act to fold the RNA molecule according to the restraints specified by the user. Apart from stacking forces, no forces act between nucleotide residues unless specified by the user. A coarse-grained simulation is carried out to fold the model into a conformation that minimizes the violation of restraints.
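The first, coordinate-copying strategy can be sketched in a few lines. The alignment format and gap handling below are simplified assumptions; real tools such as SWISS-MODEL or ModeRNA additionally rebuild side chains, handle modified residues, and optimize the geometry:

```python
def copy_conserved_core(alignment, template_coords):
    """alignment: list of (target_res, template_res) pairs, with '-' for a gap.
    template_coords: list of coordinates, one per template residue.
    Copies coordinates for aligned positions; target residues opposite a gap
    are left unmodeled (None) for a later fragment-insertion step."""
    model = []
    t_idx = 0
    for target_res, template_res in alignment:
        if template_res != "-":
            coord = template_coords[t_idx]
            t_idx += 1
        else:
            coord = None          # insertion in the target: needs a fragment
        if target_res != "-":     # a deletion in the target adds no residue
            model.append((target_res, coord))
    return model
```

The None placeholders mark exactly the variable regions that are later filled with fragments from a database of previously observed structures.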
RNABuilder starts with an extended representation of the target sequence and threads it onto the template structure(s), guided by restraints derived from the target-template sequence alignment, optionally using additional user-specified restraints on base pairing, stacking and tertiary interactions, rigidifying portions of the molecule, and many more. It detects runs of three or more consecutive Watson-Crick base pairs and automatically enforces helical geometry. Given a complete description of the structural interactions, RNABuilder is able to construct an RNA model that satisfies all restraints. The structure may, however, get caught in local minima, especially for longer RNAs, where the method cannot satisfy all constraints without further action from the user. RNABuilder has been recently used to construct a homology model of the Azoarcus group I intron, using structural information from two template structures. Our group has recently developed a protein-like RNA comparative modeling method, ModeRNA [58], inspired by the SWISS-MODEL method for protein structure modeling. ModeRNA interprets a pairwise sequence alignment as a set of instructions that are used to create a model by copying the conserved core from a template structure, and introducing the variable parts by taking fragments from a database of experimentally determined structures. A highlight of ModeRNA is that it can automatically add and remove nucleotide modifications. ModeRNA also offers a scripting interface that allows the users to perform more complex manipulations, such as recombination of fragments taken from unrelated structures.

Hybrid methods

In the protein structure prediction field the most successful approach combines the features of physics-based folding with the use of previously solved structures.
The known structures may be used explicitly, as templates, or implicitly, as a source of information for calculating a scoring function that complements or replaces the 'purely physical' energy. This type of structure prediction is often termed 'de novo modeling' and should not be confused with ab initio modeling, as it heavily relies on information from databases. De novo methods for structure prediction share many problems with the ab initio approach, including a high computational cost of the conformational sampling and uncertainty as to which of the large number of alternative conformations generated is the most native-like structure. Nonetheless, so far in blind tests such as the CASP benchmark, they have outperformed methods based on either 'pure physics' or 'pure evolution' [59, 60]. Methods such as ROSETTA [61] improve the efficiency of the conformational search by restricting local conformations to those taken from known structures, which should correspond to locally energy-minimized structures. Hence, the main type of conformational transition in ROSETTA involves a replacement of conformational parameters for a short fragment in the modeled protein by parameters taken from a randomly selected fragment in a previously solved structure of another protein. Additional conformational changes are required to refine the local structure. The ROSETTA energy function combines parameters that are based on physics and statistics. Other methods such as CABS [62] restrict the conformational space by projecting all possible conformations onto a discrete three-dimensional lattice. In CABS the scoring function is entirely based on a statistical potential. REFINER is an off-lattice variant of CABS [20], which makes the method slower, but potentially more accurate.
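The fragment-replacement move at the heart of this sampling scheme can be sketched as follows; the torsion-only representation, the toy scoring function, and all parameters are illustrative assumptions rather than ROSETTA's actual energy terms:

```python
import math
import random

def fragment_assembly(n_res, fragments, score, steps=5000, temperature=1.0, seed=0):
    """ROSETTA-style move set (toy sketch): repeatedly overwrite a random
    window of torsion angles with a randomly chosen library fragment and
    accept or reject the move by the Metropolis criterion on a score."""
    rng = random.Random(seed)
    torsions = [rng.uniform(-math.pi, math.pi) for _ in range(n_res)]
    current = score(torsions)
    for _ in range(steps):
        frag = rng.choice(fragments)
        start = rng.randrange(0, n_res - len(frag) + 1)
        trial = torsions[:start] + list(frag) + torsions[start + len(frag):]
        trial_score = score(trial)
        dE = trial_score - current
        if dE <= 0 or rng.random() < math.exp(-dE / temperature):
            torsions, current = trial, trial_score
    return torsions, current

# Toy score favoring torsions near -60 degrees (a stand-in for a statistical
# potential derived from known structures).
def toy_score(torsions):
    target = math.radians(-60)
    return sum((t - target) ** 2 for t in torsions)

frags = [[math.radians(-60)] * 3, [math.radians(120)] * 3]
final, s = fragment_assembly(12, frags, toy_score)
```

In ROSETTA the fragments come from solved structures of other proteins and the score mixes physics-based and statistical terms; the accept/reject logic, however, is the same.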
TASSER [63] goes even further into hybrid modeling by combining fragment assembly (if any starting models are available from template-based modeling) with lattice-based modeling (in particular for fragments that lack any template). All these methods initially use a simplified (coarse-grained) model, and the final refinement is usually carried out after rebuilding a full-atom model, with an energy function enriched with high-resolution physics-based terms. A number of methods based on the principle of fragment assembly have been recently proposed also for RNA 3D modeling. In particular, FARNA/FARFAR [64, 65] is essentially 'ROSETTA for RNA'. The FARNA procedure assembles an RNA 3D structure from short linear fragments, using a knowledge-based energy function, which takes into account preferences of the backbone and side-chain conformations, and of base-pairing and base-stacking interactions, derived from experimentally determined RNA structures. Fragments for the assembly of RNA structure were taken from the large ribosomal subunit of Haloarcula marismortui (PDB code: 1ffk). FARFAR is an extension of FARNA, which uses a full-atom refinement in order to optimize the RNA structures generated by FARNA. The full-atom energy function is supplemented with harmonic constraints placed between Watson-Crick edge atoms in the two residues that are assumed to form each bounding canonical base pair, and a term to approximately describe the screened electrostatic interactions between phosphates. It also includes terms derived from earlier work on proteins: a potential for weak carbon-hydrogen bonds and an alternative orientation-dependent model for desolvation based on occlusion of protein moieties. In the authors' own tests of folding 32 RNA targets, 14 cases gave at least one of five FARFAR models with better than 2.0 Å all-heavy-atom RMSD to the experimentally observed structure.
MC-Fold|MC-Sym [66] is based on a principle related to FARNA, as it assembles RNA structures from a library of ‘nucleotide cyclic motifs’, i.e., fragments in which all nucleotides are circularly connected by covalent, pairing or stacking interactions. MC-Fold|MC-Sym implements two energy functions, one based on non-bonded terms (van der Waals and stacking interactions) from the AMBER package and another one based on statistics of the experimentally determined structures, but neither of them can discriminate native-like models from misfolded ones. Recently, inspired by the CABS and REFINER methods for protein structure modeling, we developed SimRNA for RNA structure modeling (M.B., Konrad Tomala, Pawel Łukasz, T.P., K.R., J.M.B., in preparation). SimRNA represents the nucleotide chain by three pseudoatoms per nucleotide residue, similarly to Vfold and DMD, but instead of a physics-based potential, both bonded and non-bonded terms in its energy function are based entirely on database statistics. A conceptually related CG model represents RNA with five pseudoatoms per residue and uses a statistical potential to describe all the nonbonded interactions, including the excluded volume repulsive, the attractive force, and the electrostatic force between nonbonded particles, as well as the solvation forces due to the environment [67]. There exist methods for interactive (user-guided) modeling of macromolecular structures based on assembly of fragments derived from various structures that are predicted to be similar to different parts of the target. Computational tools and the graphics front-end facilitate the choice, the manipulation, and the visualization of fragments, and often provide specialized algorithms for local optimization of geometry to seal breaks in the chain or relieve steric clashes. 
The approach that allows the expert user to rearrange and recombine multiple template structures has been particularly widely used in the RNA modeling field, with methods such as S2S/Assemble [68, 69], ERNA-3D [70], or RNA2D3D [71]. However, similar methods, including the 'Frankenstein's Monster approach' [72] and the "protein lego" approach [73], have also been applied to model protein structures (review: [74]).

Critical assessment and benchmarking of protein and RNA structure prediction

For a very long time, the field of RNA 3D structure modeling has been dominated by methods based on interactive graphical interfaces that allow human experts to manipulate sequences and structures in 3D. Only recently have a number of automated methods been developed, many of which are based on concepts previously used with success in the protein 3D structure modeling field (Table 1). Thus, we conclude that protein and RNA modeling present more similarities than differences, and that it may be worthwhile for these two fields of research to inspire and 'bootstrap' each other to overcome some of the existing bottlenecks.

Table 1 Automated methods for protein and RNA modeling reviewed in this article, arranged according to the analogous principles used:
- Template-based, comparative modeling
- Template-free, physics-based (e.g., Vfold, DMD, HiRE-RNA)
- Automated hybrid (statistics + physics)
- All-atom, fragment-based

The development of useful methods for protein structure prediction has been driven by the benchmarking experiments, in which blind predictions are objectively compared to the experimentally solved structures.
In the protein structure prediction community there are periodic evaluation experiments that rigorously test the accuracy of prediction methods, e.g., CASP (held biannually) and Livebench (run continuously). The ability to objectively assess the structure prediction methods, their relative performance, as well as the typical accuracy of predictions using an established set of measures [75], has proven indispensable for progress in this field of research.

The assessment of model accuracy requires reliable and meaningful metrics for comparisons between the models and the experimentally determined structures used as a “gold standard”. One of the measures used commonly for comparison of macromolecular models is the RMSD between pairs of equivalent atoms in the optimally superimposed structures. Typically only backbone atoms are considered, e.g., Cα in protein structures or P in RNA structures, but RMSD can also be calculated for any (or all) atoms. However, RMSD is not a perfect measure. A small perturbation in just one part of the structure (e.g., a hinge movement of two domains) can create a large RMSD, suggesting that the two structures are very different overall. To take into account both local and global structural similarities, several metrics have been developed. The global distance test (GDT_TS) score [76] and the template matching (TM) score [77] are examples of metrics developed for comparison of protein structures that have been generally accepted in the protein structure prediction field and used by assessors in the CASP experiment; they can also be applied to compare RNA structures and measure the accuracy of RNA models. Many metrics of structural similarity are dependent on the molecule size: if randomly selected molecules of the same size are compared, the score deteriorates with the molecule size.
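To make the RMSD measure concrete, here is a minimal sketch (the coordinates and the helper name `rmsd` are illustrative only; a real comparison would first optimally superimpose the two structures, e.g., with the Kabsch algorithm, before measuring deviations):

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two lists of paired (x, y, z)
    coordinates, assumed to be already optimally superimposed."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Toy example: P atoms of a 3-residue model vs. a reference structure
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
ref   = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.0, 1.0)]
print(rmsd(model, ref))  # -> 1.0 (every atom is displaced by exactly 1 Å)
```

The same function works for Cα atoms of proteins or P atoms of RNA; only the choice of paired atoms changes.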
To eliminate the dependence on protein size, Levitt and Gerstein converted the structure similarity score into a P-value, i.e., a statistical significance score, based on the statistics of random structure comparisons [78]. Recently, Hajdin et al. have analyzed the dependence of the structure similarity on the molecule size in small RNAs (<161 nt) with relatively complex tertiary structures [79]. They found that the compactness of folded RNA molecules is slightly lower than for proteins of the same mass. Based on their analysis they defined an expression relating RMSD to the P-value that describes prediction significance.

Measures of structural similarity developed for protein models are not always ideal for RNA structures. They may capture the general 3D shape, local deviations of the structure, intradomain deformation, or interdomain deviations, but are agnostic about important features that are unique to RNA, i.e., the base-pairing and base-stacking patterns. Parisien et al. developed an RNA 3D structure comparison measure called the deformation index (DI), which evaluates the deviations between two RNA 3D structures by calculating the proportion of base interactions (stacking and pairing) that are identical in both structures [80]. They also developed another measure called the deformation profile (DP), which highlights dissimilarities between structures at the residue level for both intradomain and interdomain interactions. DP can also be used for proteins.

CASP for RNA has not fully materialized yet, hence it is difficult to objectively assess how different methods and approaches for RNA modeling compare with each other and how well they perform in the hands of different users. The number of crystal and NMR structures solved for RNA molecules that are sufficiently large for meaningful analysis is probably still too small to provide a sufficient number of targets for CASP-like intense modeling over a few months every year.
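The core ingredient of interaction-based comparison, the proportion of base interactions shared by two structures, can be illustrated with a toy calculation. This is only a sketch of that idea, not Parisien et al.'s published DI formula (which also incorporates RMSD); the interaction tuples below are invented:

```python
def interaction_fidelity(ref_interactions, model_interactions):
    """Fraction of reference base interactions (pairs and stacks) that the
    model reproduces; 1.0 means every reference contact is recovered."""
    ref, mod = set(ref_interactions), set(model_interactions)
    return len(ref & mod) / len(ref)

# Hypothetical annotations as (residue_i, residue_j, interaction_type)
reference = {(1, 72, "pair"), (2, 71, "pair"), (3, 70, "pair"), (3, 4, "stack")}
model     = {(1, 72, "pair"), (2, 71, "pair"), (5, 60, "pair"), (3, 4, "stack")}
print(interaction_fidelity(reference, model))  # -> 0.75 (3 of 4 contacts kept)
```

Because it compares interaction networks rather than coordinates, such a score rewards a model that gets the base-pairing and stacking topology right even when the global superposition is imperfect.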
In the meantime we have started a project similar to Livebench (again, an inspiration from the field of protein structural bioinformatics), which aims to become an objective benchmark of fully automated methods for RNA structure prediction. The CompaRNA web server (T.P., K.R., Łukasz Kozłowski, Ewa Tkalińska, J.M.B., manuscript in preparation) provides a continuous benchmark for standalone and web server methods. Currently it addresses only fully automated methods for RNA secondary structure prediction, but we intend to extend it to include methods for RNA 3D structure prediction that will become available as public web servers and/or local installations that can be run in a fully automated mode with default parameters and do not require large computing resources. While this approach excludes expert-based modeling and methods that are not yet fully automated or require high-performance computing, we hope it will contribute to the assessment of the progress in the RNA structure prediction field.

Our work on template-free modeling of RNA structures was supported by the Polish Ministry of Science (HISZPANIA/152/2006 grant to J.M.B.), and by the EU (6FP grant “EURASNET”, LSHG-CT-2005-518238). Our work on template-based modeling of RNA structures was supported by the Faculty of Biology, Adam Mickiewicz University (PBWB-03/2009 grant to M.R.) and by the Polish Ministry of Science (PBZ/MNiSW/07/2006 grant to M.B.). Software development in the Bujnicki laboratory in IIMCB has been supported by the EU structural funds (POIG.02.03.00-00-003/09). K.R. was independently supported by the German Academic Exchange Service (grant D/09/42768). We thank Konrad Tomala and Pawel Łukasz for their participation in the development of our RNA modeling methods.
We thank present and former members of the Bujnicki laboratory, in particular Ewa Wywiał, Pawel Skiba, Piotr Byzia, Irina Tuszynska, Joanna Kasprzak, Jerzy Orlowski, Tomasz Osiński, Marcin Domagalski, Anna Czerwoniec, Stanisław Dunin-Horkawicz, Marcin Skorupski, and Marcin Feder, for their comments and constructive criticism during development of our software. We also thank Neocles Leontis, Eric Westhof, Rob Knight, Sandra Smit, Magda Jonikas, Alain Laederach, Andrzej Kolinski, and Francois Major for stimulating discussions and helpful advice on various occasions.

Open Access

References
1. Dill KA (1990) Dominant forces in protein folding. Biochemistry 29:7133–7155
2. Ferre-D'Amare AR, Doudna JA (1999) RNA folds: insights from recent crystal structures. Annu Rev Biophys Biomol Struct 28:57–73
3. Thirumalai D, Hyeon C (2005) RNA and protein folding: common themes and variations. Biochemistry 44:4957–4970
4. Laskowski RA, Thornton JM (2008) Understanding the molecular machinery of genetics through 3D structures. Nat Rev Genet 9:141–151
5. Laederach A (2007) Informatics challenges in structured RNA. Brief Bioinform 8:294–303
6. Watts JM, Dang KK, Gorelick RJ et al. (2009) Architecture and secondary structure of an entire HIV-1 RNA genome. Nature 460:711–716
7. Kertesz M, Wan Y, Mazor E et al. (2010) Genome-wide measurement of RNA secondary structure in yeast. Nature 467:103–107
8. Hazy E, Tompa P (2009) Limitations of induced folding in molecular recognition by intrinsically disordered proteins. Chemphyschem 10:1415–1419
9. Fulle S, Gohlke H (2009) Constraint counting on RNA structures: linking flexibility and function. Methods 49:181–188
10. Anfinsen CB, Scheraga HA (1975) Experimental and theoretical aspects of protein folding. Adv Protein Chem 29:205–300
11. Grosjean H (2009) Fine-tuning of RNA functions by modification and editing. Springer, Berlin
12. Walsh CT (2005) Posttranslational modification of proteins: expanding nature's inventory. Roberts, Greenwood Village, CO
13. Anfinsen CB (1973) Principles that govern the folding of protein chains. Science 181:223–230
14. Hardin C, Pogorelov TV, Luthey-Schulten Z (2002) Ab initio protein structure prediction. Curr Opin Struct Biol 12:176–181
15. Scheraga HA (1996) Recent developments in the theory of protein folding: searching for the global energy minimum. Biophys Chem 59:329–339
16. Case DA, Cheatham TE III, Darden T et al. (2005) The Amber biomolecular simulation programs. J Comput Chem 26:1668–1688
17. Brooks BR, Brooks CL III, Mackerell AD et al. (2009) CHARMM: the biomolecular simulation program. J Comput Chem 30:1545–1614
18. Christen M, Hunenberger PH, Bakowies D et al. (2005) The GROMOS software for biomolecular simulation: GROMOS05. J Comput Chem 26:1719–1751
19. Dill KA, Chan HS (1997) From Levinthal to pathways to funnels. Nat Struct Biol 4:10–19
20. Boniecki M, Rotkiewicz P, Skolnick J et al. (2003) Protein fragment reconstruction using various modeling techniques. J Comput Aided Mol Des 17:725–738
21. Simmerling C, Strockbine B, Roitberg AE (2002) All-atom structure prediction and folding simulations of a stable protein. J Am Chem Soc 124:11258–11259
22. Freddolino PL, Liu F, Gruebele M et al. (2008) Ten-microsecond molecular dynamics simulation of a fast-folding WW domain. Biophys J 94:L75–L77
23. Stein EG, Rice LM, Brunger AT (1997) Torsion-angle molecular dynamics as a new efficient tool for NMR structure calculation. J Magn Reson 124:154–164
24. Parsons J, Holmes JB, Rojas JM et al. (2005) Practical conversion from torsion space to Cartesian space for in silico protein synthesis. J Comput Chem 26:1063–1068
25. Tozzini V (2009) Multiscale modeling of proteins. Acc Chem Res
26. Levitt M, Warshel A (1975) Computer simulation of protein folding. Nature 253:694–698
27. Lee J, Liwo A, Scheraga HA (1999) Energy-based de novo protein folding by conformational space annealing and an off-lattice united-residue force field: application to the 10-55 fragment of staphylococcal protein A and to apo calbindin D9K. Proc Natl Acad Sci USA 96:2025–2030
28. McDowell SE, Spackova N, Sponer J et al. (2007) Molecular dynamics simulations of RNA: an in silico single molecule approach. Biopolymers 85:169–184
29. Zuo G, Li W, Zhang J et al. (2010) Folding of a small RNA hairpin based on simulation with replica exchange molecular dynamics. J Phys Chem B 114:5835–5839
30. Deng NJ, Cieplak P (2010) Free energy profile of RNA hairpins: a molecular dynamics simulation study. Biophys J 98:627–636
31. Auffinger P, Westhof E (1996) H-bond stability in the tRNA(Asp) anticodon hairpin: 3 ns of multiple molecular dynamics simulations. Biophys J 71:940–954
32. Sarzynska J, Reblova K, Sponer J et al. (2008) Conformational transitions of flanking purines in HIV-1 RNA dimerization initiation site kissing complexes studied by CHARMM explicit solvent molecular dynamics. Biopolymers 89:732–746
33. Sanbonmatsu KY, Tung CS (2007) High performance computing in biology: multimillion atom simulations of nanoscale systems. J Struct Biol 157:470–480
34. Lu XJ, Olson WK (2003) 3DNA: a software package for the analysis, rebuilding and visualization of three-dimensional nucleic acid structures. Nucleic Acids Res 31:5108–5121
35. Ulyanov NB, Gorin AA, Zhurkin VB (1989) Conformational mechanics of the DNA double helix. A combined Monte Carlo and energy minimization approach. In: Kartashev LP, Kartashev SI (eds) Proc International Conference on Supercomputing '89: Supercomputer Applications. St. Petersburg, FL, pp 368–370
36. Lavery R, Zakrzewska K, Sklenar H (1995) JUMNA (junction minimisation of nucleic acids). Comput Phys Commun 91:135–158
37. Malhotra A, Tan RK, Harvey SC (1990) Prediction of the three-dimensional structure of Escherichia coli 30S ribosomal subunit: a molecular mechanics approach. Proc Natl Acad Sci USA 87:1950–1954
38. Tan RKZ, Petrov AS, Harvey SC (2006) YUP: a molecular simulation program for coarse-grained and multiscaled models. J Chem Theor Comput 2:529–540
39. Jonikas MA, Radmer RJ, Laederach A et al. (2009) Coarse-grained modeling of large RNA molecules with knowledge-based potentials and structural filters. RNA 15:189–199
40. Cao S, Chen SJ (2009) A new computational approach for mechanical folding kinetics of RNA hairpins. Biophys J 96:4024–4034
41. Ding F, Sharma S, Chalasani P et al. (2008) Ab initio RNA folding by discrete molecular dynamics: from structure prediction to folding mechanisms. RNA 14:1164–1173
42. Pasquali S, Derreumaux P (2010) HiRE-RNA: a high resolution coarse-grained energy model for RNA. J Phys Chem B 114:11957–11966
43. Mathews DH, Sabina J, Zuker M et al. (1999) Expanded sequence dependence of thermodynamic parameters improves prediction of RNA secondary structure. J Mol Biol 288:911–940
44. Cao S, Giedroc DP, Chen SJ (2010) Predicting loop-helix tertiary structural contacts in RNA pseudoknots. RNA 16:538–552
45.
46. Krieger E, Nabuurs SB, Vriend G (2003) Homology modeling. Methods Biochem Anal 44:509–523
47. Cohen-Gonsaud M, Catherinot V, Labesse G et al. (2004) From molecular modeling to drug design. In: Bujnicki JM (ed) Practical bioinformatics. Springer, Berlin, pp 35–71
48. Dror O, Nussinov R, Wolfson H (2005) ARTS: alignment of RNA tertiary structures. Bioinformatics 21(Suppl 2):ii47–ii53
49. Chothia C, Gerstein M (1997) Protein evolution. How far can sequences diverge? Nature 385:579–581
50. Kumar S, Ma B, Tsai CJ et al. (2000) Folding and binding cascades: dynamic landscapes and population shifts. Protein Sci 9:10–19
51. Pyle AM (2002) Metal ions in the structure and function of RNA. J Biol Inorg Chem 7:679–690
52. Fiser A, Feig M, Brooks CL 3rd et al. (2002) Evolution and physics in comparative protein structure modeling. Acc Chem Res 35:413–421
53. Grishin NV (2001) Fold change in evolution of protein structures. J Struct Biol 134:167–185
54. Krasilnikov AS, Xiao Y, Pan T et al. (2004) Basis for structural diversity in homologous RNAs. Science 306:104–107
55. Peitsch MC (1995) Protein modelling by e-mail. Bio/Technology 13:658–660
56.
57. Flores SC, Wan Y, Russell R et al. (2010) Predicting RNA structure by multiple template homology modeling. Pac Symp Biocomput 216–227
58. Rother M, Rother K, Puton T et al. (2011) ModeRNA: a tool for comparative modeling of RNA 3D structure. Nucleic Acids Res (in press)
59. Ben-David M, Noivirt-Brik O, Paz A et al. (2009) Assessment of CASP8 structure predictions for template free targets. Proteins 77(Suppl 9):50–65
60. Cozzetto D, Kryshtafovych A, Fidelis K et al. (2009) Evaluation of template-based models in CASP8 with standard measures. Proteins 77(Suppl 9):18–28
61. Simons KT, Kooperberg C, Huang E et al. (1997) Assembly of protein tertiary structures from fragments with similar local sequences using simulated annealing and Bayesian scoring functions. J Mol Biol 268:209–225
62. Kolinski A, Bujnicki JM (2005) Generalized protein structure prediction based on combination of fold-recognition with de novo folding and evaluation of models. Proteins 61(Suppl 7):84–90
63. Zhang Y, Skolnick J (2004) Automated structure prediction of weakly homologous proteins on a genomic scale. Proc Natl Acad Sci USA 101:7594–7599
64. Das R, Baker D (2007) Automated de novo prediction of native-like RNA tertiary structures. Proc Natl Acad Sci USA 104:14664–14669
65. Das R, Karanicolas J, Baker D (2010) Atomic accuracy in predicting and designing noncanonical RNA structure. Nat Meth 7:291–294
66. Parisien M, Major F (2008) The MC-Fold and MC-Sym pipeline infers RNA structure from sequence data. Nature 452:51–55
67. Xia Z, Gardner DP, Gutell RR et al. (2010) Coarse-grained model for simulation of RNA three-dimensional structures. J Phys Chem B 114:13497–13506
68. Jossinet F, Westhof E (2005) Sequence to Structure (S2S): display, manipulate and interconnect RNA data from sequence to structure. Bioinformatics 21:3320–3321
69. Jossinet F, Ludwig TE, Westhof E (2010) Assemble: an interactive graphical tool to analyze and build RNA architectures at the 2D and 3D levels. Bioinformatics 26:2057–2059
70. Zwieb C, Muller F (1997) Three-dimensional comparative modeling of RNA. Nucleic Acids Symp Ser 69–71
71. Martinez HM, Maizel JV Jr, Shapiro BA (2008) RNA2D3D: a program for generating, viewing, and comparing 3-dimensional models of RNA. J Biomol Struct Dyn 25:669–683
72. Kosinski J, Cymerman IA, Feder M et al. (2003) A "Frankenstein's monster" approach to comparative modeling: merging the finest fragments of fold-recognition models and iterative model refinement aided by 3D structure evaluation. Proteins 53(Suppl 6):369–379
73. Venclovas C (2003) Comparative modeling in CASP5: progress is evident, but alignment errors remain a significant hindrance. Proteins 53(Suppl 6):380–388
74. Bujnicki JM (2006) Protein-structure prediction by recombination of fragments. Chembiochem 7:19–27
75. Moult J, Fidelis K, Kryshtafovych A et al. (2009) Critical assessment of methods of protein structure prediction - Round VIII. Proteins 77(Suppl 9):1–4
76. Zemla A (2003) LGA: a method for finding 3D similarities in protein structures. Nucleic Acids Res 31:3370–3374
77. Zhang Y, Skolnick J (2004) Scoring function for automated assessment of protein structure template quality. Proteins 57:702–710
78. Levitt M, Gerstein M (1998) A unified statistical framework for sequence comparison and structure comparison. Proc Natl Acad Sci USA 95:5913–5920
79. Hajdin CE, Ding F, Dokholyan NV et al. (2010) On the significance of an RNA tertiary structure prediction. RNA 16:1340–1349
80. Parisien M, Cruz JA, Westhof E et al. (2009) New metrics for comparing and assessing discrepancies between RNA 3D structures and models. RNA 15:1875–1885

Copyright information
© The Author(s) 2011

Authors and Affiliations
Kristian Rother (1, 2), Magdalena Rother (1, 2), Michał Boniecki (1), Tomasz Puton (1, 2), Janusz M. Bujnicki (1, 2)
1. Laboratory of Bioinformatics and Protein Engineering, International Institute of Molecular and Cell Biology, Warsaw, Poland
2. Laboratory of Structural Bioinformatics, Institute of Molecular Biology and Biotechnology, Faculty of Biology, Adam Mickiewicz University, Poznan, Poland
Interaction of an Extra Electron with Optical Phonons in Long Molecular Chains and Ionic Crystals

V. Z. Enol’skii

Part of the NATO ASI Series book series (NSSB, volume 243)

After Davydov's pioneering paper [1] on the energy transfer in biological systems, the attention of investigators was drawn to different problems of electron-phonon interaction in molecular chains [2, 3]. The effect of acoustic phonons with the dispersion law $$ \Omega \left( k \right) = kV_{ac} $$ on the motion of an extra electron (and exciton) in a one-dimensional molecular chain was studied by Davydov [4–6]. It was shown that the stable motion of an electron (exciton) with velocities less than the constant group velocity Vac of longitudinal sound is accompanied by a local chain deformation, and the motion of this collective deformation is described by a solitary wave which does not change its form and velocity. This wave, called a soliton, can travel only at speeds less than the sound velocity Vac.

Keywords: Molecular chain · Optical phonon · Polarization field · Electron motion · Ionic crystal

References
1. A. S. Davydov, Biology and Quantum Mechanics, Pergamon, Oxford (1982).
2. A. S. Scott, Dynamics of Davydov soliton, Phys. Rev. A 26:678 (1982).
3. A. S. Scott, The vibrational structure of Davydov soliton, Phys. Scr. 25:651 (1982).
4. A. S. Davydov, The effect of electron-phonon interaction on the electron motion in one-dimensional molecular system, Teor. Mat. Fiz., 40:408 (1979) (in Russian).
5. A. S. Davydov, The soliton motion in one-dimensional molecular chain with regard of thermal oscillations, Zh. Exper. Teor. Fiz., 78:789 (1980) (in Russian).
6. A. S. Davydov, Solitons, bioenergetics and the mechanisms of muscle contraction, Intern. J. Quant. Chem., 16:5 (1979).
7. J. Appel, Polarons, Sol. St. Phys., 21:1 (1968).
8. A. S. Davydov, V. Z. Enol’skii, The theory of motion of an extra electron in a molecular chain with allowance for interaction with optical phonons, Zh. Exper. Teor. Fiz., 79:1888 (1980).
9. A. S. Davydov, V. Z. Enol’skii, Translation-invariant theory of strong particle-field coupling, Zh. Exper. Teor. Fiz., 81:1088 (1981).
10. A. S. Davydov, V. Z. Enol’skii, On the question of effective mass for Pekar polaron, Zh. Exper. Teor. Fiz., 94:177 (1988).
11. I. E. Turner, V. E. Anderson, Ground state energy eigenvalues and eigenfunctions for an electron in electron-dipole field, Phys. Rev. 174:81 (1968).
12. A. Nakamura, Damping and modification of exciton solitary waves, J. Phys. Soc. Jap. 42:1824 (1977).
13. I. V. Simenog, On the asymptotics of stationary nonlinear Schrödinger equation, Teor. Mat. Fiz., 30:3 (1977).
14. A. G. Litvak, A. M. Sergeev, On the one-dimensional collapse of plasma waves, Lett. Zh. Exper. Teor. Fiz., 27:549 (1978).
15. L. D. Landau, On the electron motion in crystal lattice, Phys. Zs. Sowiet., 3:664 (1933).
16. L. D. Landau, S. I. Pekar, Polaron effective mass, Zh. Exper. Teor. Fiz., 18:419 (1948).

Copyright information
© Springer Science+Business Media New York 1990

Authors and Affiliations
V. Z. Enol’skii, Institute of Metal Physics, Kiev-142, USSR
Seduced by calculus

The 2010 Fields Medal was won by a French mathematician captivated by the crowning mathematical achievement of the Enlightenment. Alex Bellos explains.

Jeffrey Phillips

The French mathematician Cédric Villani is no ordinary looking university professor. Handsome and slender, with a boyish face and a wavy, neck-length bob, he looks more like a dandy from the Belle Epoque, or a member of an avant-garde student rock band. He always wears a three-piece suit, starched white collar, lavaliere cravat – the kind folded extravagantly in a giant bow – and a sparkling, tarantula-sized spider brooch. “Somehow I had to do it,” he said of his appearance. “It was instinctive.”

I first met Villani in Hyderabad, India, at the 2010 International Congress of Mathematicians, or ICM, the four-yearly gathering of the tribe. Of the 3,000 delegates, Villani was the focus of most attention, not because he was the most elaborately dressed, but because he received the Fields Medal at the opening gala. The Fields is the highest honour in maths and is awarded at each ICM to two, three or four mathematicians under the age of 40. The age rule recognises the original motivation behind the prize, which was conceived by the Canadian mathematician J. C. Fields. He wanted not only to recognise work already done, but also to encourage future success. Such is the acclaim afforded by a Fields Medal, however, that since the first two were awarded in 1936, they have helped establish a cult of youth, implying that once you hit 40 you’re past it. This is unfair. Many mathematicians produce their best work after the age of 40, although Fields medallists can struggle to regain focus, since fame brings with it other responsibilities. Mathematicians gather at the ICM to take stock of their achievements, and the Fields Medal citations provide the clearest snapshot of the most exciting recent work.
Unlike the citations for the other three winners in 2010, which were impenetrable to me and even to many of the mathematicians present, Villani’s citation was understandable to the non-specialist. He won “for his proofs of nonlinear Landau damping and convergence to equilibrium for the Boltzmann equation”. The Boltzmann equation, devised by the Austrian physicist Ludwig Boltzmann in 1872, concerns the behaviour of particles in a gas, and is one of the best known equations in classical physics. Not only is Villani a devotee of the 19th century’s neckwear, he is also a world authority on its applied mathematics. The Boltzmann equation is what is known as a partial differential equation, or PDE, and in a standard compact form it looks like this:

∂f/∂t + v·∇ₓf = Q(f, f)

The equation is written in the vocabulary of calculus. Shortly, I’ll explain the symbols. Calculus was the crowning intellectual achievement of the Enlightenment, and Villani’s Fields Medal demonstrates that it remains a rich area of advanced mathematical study. But before we return to the flamboyantly attired Frenchman, we first need to transport ourselves from southern India in 2010 to Sicily in around the third century BCE.

On the front of the Fields Medal is the bearded portrait of Archimedes, basking in the glow of his reputation as the most illustrious mathematician of antiquity. Archimedes, however, is usually remembered for his contributions to physical science, such as the screw that raises water when turned by hand. Yet Plutarch wrote that geometry was his true love. At bath times “while (his servants) were anointing of him with oils and sweet savours, with his fingers he drew lines upon his naked body, so far was he taken from himself, and brought into ecstasy or trance, with the delight he had in the study of geometry”. The initial task of geometry was the calculation of area. (According to Herodotus, geometry began as a practice devised by Egyptian tax inspectors to calculate areas of land destroyed by the Nile’s annual floods.)
As we all know, the area of a rectangle is the width multiplied by the height, and from this formula we can deduce that the area of a triangle is half the base times the height. The Greeks devised methods to calculate the areas of more complicated shapes. Of these, the most impressive achievement was Archimedes’ “quadrature of the parabola”, by which is meant the calculation of the area bounded by a line and a parabola, which is a specific type of U-shaped curve. Archimedes first drew a large triangle inside the parabola, as illustrated below, then on either side of this he drew another triangle. On each of the two sides of these smaller triangles, he drew an even smaller triangle, and so on, such that all three points of each triangle were always on the parabola. The more triangles he drew, the closer and closer their combined area was to the area of the parabolic section. If the process was allowed to carry on forever, the infinite number of triangles would perfectly cover the desired area.

The quadrature of the parabola.

Archimedes’ quadrature of the parabola is the most sophisticated example from the classical age of the method of exhaustion, the technique of adding up a sequence of small areas that converge towards a larger one. The proof is considered his finest moment because it represents the first “modern” view of mathematical infinity. Archimedes was the earliest thinker to develop the apparatus of an infinite series with a finite limit. This was important not only for conquering the areas of shapes significantly more exotic than the parabola, but also for starting on the conceptual path towards calculus. Of the giants on whose shoulders Isaac Newton would eventually perch, Archimedes was the first.

Infinity is a number bigger than any other. It has a twin concept, the infinitesimal, which is a number smaller than any other, yet still larger than zero.
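Archimedes showed that each new generation of triangles contributes exactly one quarter of the area of the previous generation, so the total converges to 4/3 of the first inscribed triangle. That convergence can be replayed numerically (a sketch, with areas measured in units of the first triangle):

```python
def inscribed_area(generations):
    """Total area of Archimedes' inscribed triangles, where the first
    (largest) triangle has area 1 and each generation adds 1/4 of the last."""
    total, term = 0.0, 1.0
    for _ in range(generations):
        total += term
        term /= 4.0
    return total

for g in (1, 2, 5, 30):
    print(g, inscribed_area(g))  # 1.0, 1.25, ... climbing towards 4/3
```

After only a handful of generations the sum is indistinguishable from 4/3, which is exactly the finite limit of Archimedes' infinite series.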
In the 17th century, mathematicians realised how useful the infinitesimal was, even though it was a concept that didn’t make much sense – it was the mathematical equivalent of having your cake and eating it. The infinitesimal was both something and nothing: large enough to be of mathematical use, but small enough to disappear when you needed it to.

Calculating the area of a circle with infinitesimals.

For example, consider the circle illustrated here. Inside is a dodecagon, a 12-sided shape made up of 12 identical triangles sharing a common vertex, or point. The combined area of the triangles is approximately the area of the circle. If I drew a polygon with more sides within the circle, containing more, thinner triangles, their combined area would approximate the circle more closely. And if I kept on increasing the number of sides, in the limit I would have a polygon with an infinite number of sides containing an infinite number of infinitely thin triangles. The area of each triangle is infinitesimal, yet their combined area is the area of the circle, as illustrated below left.

Here’s another way the infinitesimal was useful: in determining gradients. For readers who have forgotten what a gradient is, it is the measure of the slope, calculated by dividing the distance moved up by the distance moved along. So, in the illustration below right, the gradient of the road is 1/4 because the distance moved up is 100m and the distance along is 400m. Mathematicians, however, wanted to find a method to calculate the gradient of tangents, which are those lines that touch a curve at a single point.

The gradient of a tangent.

The trick to finding the gradient of a tangent at point P is to make an approximation of the tangent, and then to improve the approximation until it coincides with the desired line. We do this by drawing a line through P that cuts the curve at nearby point Q, and then we bring Q closer and closer to P. When Q hits P, the line is the tangent.
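The approximation scheme just described can be replayed numerically. A sketch, using the curve y = x² (my choice of example, not from the article), whose tangent at x = 1 has gradient 2:

```python
def secant_slope(f, x, dx):
    """Gradient of the line through P = (x, f(x)) and Q = (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

curve = lambda x: x * x  # example curve; its tangent at x = 1 has gradient 2

# As Q slides towards P (dx shrinks), the secant slope closes in on 2
for dx in (0.5, 0.01, 1e-6):
    print(dx, secant_slope(curve, 1.0, dx))  # 2.5, then 2.01, then ever closer to 2
```

Notice that the code never sets dx to zero, which would be the forbidden 0/0; it only makes dx as small as it likes, which is exactly the role the infinitesimal played.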
The gradient of the line through P and Q is ∆y/∆x. (The Greek letter delta, ∆, is a mathematical symbol meaning a small increment.) As Q closes in on P, the value ∆y/∆x approaches the gradient of the tangent at P. But we have a problem. If we let Q actually reach P, then ∆y = 0 and ∆x = 0, meaning that the gradient of the curve at P is 0/0. Bad maths alert! The rules of arithmetic prohibit division by zero! The solution is to keep Q at an infinitesimal distance from P. If we do, we can say that when Q becomes infinitesimally close to P, the value ∆y/∆x is infinitesimally close to the gradient of the curve at P.

Approximating a tangent.

In 1665, Isaac Newton, recently graduated from Cambridge, returned to live with his mother in their Lincolnshire farmhouse. The Great Plague was devastating towns across the country. The university had closed down to protect its staff and students. Newton made himself a small study and started to fill a giant jotter he called the Waste Book with mathematical thoughts. Over the next two years the solitary scribbler, undistracted, devised new theorems that became the foundations of the Philosophiæ Naturalis Principia Mathematica, his 1687 treatise that, more than any work before or since, transformed our understanding of the physical universe. The Principia established a system of natural laws that explained why objects, from apples falling off trees to planets orbiting the Sun, move as they do. Yet Newton’s breakthrough in physics required an equally fundamental breakthrough in maths. He formalised the previous half-century’s work on infinity and infinitesimals into a general system with a unified notation. He called it the method of fluxions, but it became better known as the “calculus of infinitesimals”, and now, simply, “calculus”.

A body that moves changes its position, and its speed is the change in position over time. If a body is travelling with a fixed speed, it changes its position by a fixed amount every fixed period.
A car with constant speed that covers 60 miles between 4pm and 5pm is travelling at 60 miles per hour. Newton wanted to solve a different problem: how does one calculate the speed of a body that is not travelling at a constant speed? For example, let’s say the car above, rather than travelling consistently at 60mph, is continually slowing down and speeding up because of traffic. One strategy to calculate its speed at, say, 4.30pm, is to consider how far it travels between 4.30pm and 4.31pm, which will give us a distance per minute. (We just need to multiply the distance by 60 to get the value in mph.) But this figure is just the average speed for that minute, not the instantaneous speed at 4.30pm. We could aim for a shorter interval – say, the distance travelled between 4.30pm and 1 second later, which would give us a distance per second. (We’d then multiply by 3,600 to get the value in mph). But again this value is the average for that second. We could aim for smaller and smaller intervals, but we are never going to get the instantaneous speed until the interval is tinier than any other – when it is zero, in other words. But when the interval is zero, the car does not move at all! This line of reasoning should sound familiar, because I used it two paragraphs ago when explaining how to calculate the gradient of a tangent. To find the gradient we divide an infinitesimally small quantity (length) by another infinitesimally small quantity (another length). To get the instantaneous speed we also divide an infinitesimally small quantity (distance) by another infinitesimally small quantity (time). The problems are mathematically equivalent. Newton’s method of fluxions was a method to calculate gradients, which enabled him to calculate instantaneous speeds. Calculus allowed Newton to take an equation that determined the position of an object, and from it devise a secondary equation about the object’s instantaneous speed. 
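The shrinking-interval strategy can be sketched in a few lines. The car’s position function below is invented for illustration (the text does not specify one):

```python
def average_speed_mph(position_miles, t_hours, dt_hours):
    """Average speed over the interval [t, t + dt], in miles per hour."""
    return (position_miles(t_hours + dt_hours) - position_miles(t_hours)) / dt_hours

# Hypothetical position of a decelerating car, in miles travelled since 4pm.
position = lambda t: 60 * t - 10 * t**2

half_past = 0.5          # 4.30pm, measured in hours after 4pm
one_minute = 1 / 60
one_second = 1 / 3600

print(average_speed_mph(position, half_past, one_minute))  # average over a minute
print(average_speed_mph(position, half_past, one_second))  # average over a second
# Both are averages; the instantaneous speed at 4.30pm is their limit, 50 mph.
```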
It also allowed him to take an equation determining the object’s instantaneous speed, and from it devise a secondary equation about position, which, as it turned out, was equivalent to the calculation of areas using infinitesimals! Calculus, therefore, gave him the mathematical tools to develop his laws of motion. In his equations, he called the variables x and y “fluents” and the gradients “fluxions”, written by the “pricked letters” ẋ and ẏ. When Newton returned to Cambridge after two years avoiding the plague in Lincolnshire, he did not tell anyone about the method of fluxions. On the continent, Gottfried Leibniz was developing an equivalent system. Leibniz was German by birth but a man of the world – a lawyer, diplomat, alchemist, engineer and philosopher. Leibniz was also the mathematician most obsessed with notation. The symbols he used for his system of calculus were clearer than Newton’s, and are the ones we use today. Leibniz introduced the terms dx and dy for the infinitesimal differences in x and y. The gradient, which is one infinitesimal difference divided by the other, he wrote dy/dx. Thanks to his use of the word “difference”, the calculation of gradient became known as “differentiation”. Leibniz also introduced the distinctive stretched “s”, ∫, as the symbol for the calculation of area. It’s an abbreviation of summa, or sum, since the calculation of area is based on infinite sums of infinitesimals. On the suggestion of his friend Johann Bernoulli, Leibniz called his technique calculus integralis, and the calculation of area became known as “integration”. Leibniz’s ∫ is the most majestic symbol in maths, reminiscent of the f-hole of a cello or violin. Calculus comprises differentiation (computation of gradient) and integration (computation of area). In general terms, gradient is the rate of change of one quantity over another, and area is the measure of how much one quantity accumulates with respect to another.
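Both of Leibniz’s operations can be mimicked numerically: integration as a literal summa of thin strips, and differentiation as a quotient of tiny differences. The curve y = x² is my own example, not one from the text:

```python
def integrate_sum(f, a, b, n=100_000):
    """Leibniz's ∫: add up n thin strips of width dx under the curve."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

def derivative(f, x, dx=1e-6):
    """Leibniz's dy/dx, with a tiny (not truly infinitesimal) dx."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x**2
F = lambda x: x**3 / 3    # a function whose gradient is x**2

print(integrate_sum(f, 0, 1))   # area under x² from 0 to 1: approaches 1/3
print(derivative(F, 0.5))       # gradient of x³/3 at 0.5: approaches 0.25 = f(0.5)
# Differentiation undoes integration, the link Newton and Leibniz both exploited.
```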
Calculus thus provided scientists with a way to model quantities that varied in relation to each other. It is a formidable instrument to explain the physical world because everything in the universe, from the tiniest atoms to the largest galaxies, is in a state of permanent flux. When we know the relationship between two varying quantities, we can describe them in an equation using the symbols for differentiation and integration. An equation in x and y that includes the term dy/dx is called a “simple differential equation”. If there are more than two variables, say x, y and t, the rates of change are written ∂y/∂x, or ∂y/∂t, with the rounded ∂. The equation is called a “partial differential equation”, or PDE, because terms like ∂y/∂x tell us how one variable changes with respect to another one, but not to all of them. PDEs dominate applied mathematics. They allow scientists to make predictions. If we know how two quantities vary over time, then we can predict exactly what state they will be in at any time in the future. Maxwell’s equations, which explain the behaviour of magnetic and electric fields, the Schrödinger equation, which underlies quantum mechanics, and Einstein’s field equations, which are the basis of general relativity, are all PDEs. The first important PDE described the behaviour of a violin string when bowed, a problem that had tormented scientists for decades. It was discovered in 1746 by Jean le Rond d’Alembert, the celebrity mathematician of his day. D’Alembert, the product of a brief liaison between an artillery general and a lapsed nun, was abandoned after he was born and left on the steps of the church Saint Jean Le Rond, next to Notre-Dame Cathedral in Paris, from which he took his name. Brought up by the wife of a glazier, he rose against the odds to become the permanent secretary of the Académie Française. As well as being a serious mathematician, he was also a vociferous apologist for the values of the Enlightenment.
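Before continuing d’Alembert’s story, the vibrating-string PDE he discovered, the one-dimensional wave equation ∂²y/∂t² = c²∂²y/∂x², can be simulated with a simple finite-difference scheme. Every number below (string resolution, wave speed, time step) is an arbitrary illustration:

```python
import math

# Finite-difference sketch of the wave equation for a string fixed at both ends.
n, steps = 50, 200
c, dx, dt = 1.0, 0.02, 0.01   # wave speed, space step, time step (c*dt < dx for stability)
r2 = (c * dt / dx) ** 2

y = [math.sin(math.pi * i / (n - 1)) for i in range(n)]   # initial plucked shape
y_prev = y[:]                                             # string released from rest

for _ in range(steps):
    y_next = y[:]
    for i in range(1, n - 1):
        # Discrete form of d²y/dt² = c² d²y/dx² at interior point i.
        y_next[i] = 2 * y[i] - y_prev[i] + r2 * (y[i + 1] - 2 * y[i] + y[i - 1])
    y_prev, y = y, y_next

print(y[0], y[n // 2])   # the fixed ends stay at zero while the middle oscillates
```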
He was a public figure, a sought-after guest at aristocratic salons and one of the editors of the landmark Encyclopédie, for which he wrote the preliminary discourse and more than a thousand articles. D’Alembert was the prototype French scientific intellectual, a role now occupied with gusto by Cédric Villani. The second time I met Villani was in Paris. Since 2009 he has been director of the Institut Henri Poincaré, France’s elite maths institute, which is situated among the universities of the Latin Quarter. His office is a comfortable clutter of books, paper, coffee mugs, awards, puzzles and geometrical shapes. Villani’s appearance was unchanged since we met in India at the International Congress of Mathematicians: burgundy cravat, blue three-piece suit, and a metal spider glistening on his lapel. He said his look emerged when he was in his twenties. He wore shirts with large sleeves, then with lace, then a top hat… “It was like a scientific experiment, and gradually it was ‘this is me’.” And the spider? He enjoys its ambiguity. “Some people think the spider is a maternal symbol. Others think that the web is a symbol for the universe, or that the spider is the big architect of the world, like a way to personify God. Spiders don’t leave people indifferent. You immediately have a reaction.” The spider is an archetype rich with interpretations, I thought, just like mathematics is an abstract language with innumerable applications. Villani’s field is PDEs. Even though PDEs have been around for almost three centuries, he says they are “for a large part still poorly understood. Each PDE seems to have a theory of its own. You have many sub-branches of PDEs with only a small common basis and no general classification. People have tried to classify them, but even the best specialists have failed.” The PDE that has occupied most of Villani’s time is the Boltzmann equation. It was the subject of his PhD and formed part of the subsequent work that led to his Fields Medal. 
He now views it with tenderness and devotion. “It’s like the first girl you fall in love with,” he confided. “The first equation you see – you think it is the most beautiful in the world.” Feast your eyes on her again: [The Boltzmann equation; in standard transport form, ∂f/∂t + v·∇ₓf = Q(f, f).] The Boltzmann equation belongs to the field of statistical mechanics: the branch of mathematical physics that investigates how the behaviour of individual molecules in a cloud of gas influences macroscopic properties like temperature and pressure. The equation describes how a gas disseminates by considering the likelihood of any of its molecules being in any particular spot, with a particular speed, at a particular time. [The f is a “probability density function” that gives the probability of particles having a position near x and a speed near v at time t.] The model assumes that particles in a gas bounce around according to Newton’s laws, but in random directions, and describes the effects of their collisions using the maths of probability. Villani pointed at the left side of the equation: “This is just particles going in straight lines.” He pointed to the right side of the equation: “And this is just shock. Tik-ding! Ting-dik!” He bumped his fists together several times. “Often in PDEs, you have tension between various terms. The Boltzmann equation is the perfect case study because the terms represent completely different phenomena and also live in completely different mathematical worlds.” If you filmed a single gas particle bouncing off another gas particle, and showed it to a friend, there is no way he or she would know whether you were playing the film forwards or backwards, since Newton’s laws are time-reversible. But if you filmed a gas spreading from a beaker to its surroundings, a viewer would instantly be able to tell which way the film was being played, since gases do not suck themselves back into beakers.
Boltzmann established a mathematical foundation for the apparent contradiction between micro- and macroscopic behaviour by introducing a new concept, entropy. This is the measure of disorder – in theoretical terms the number of possible positions and speeds of the particles at any time. Boltzmann then showed that entropy always increases. Villani’s breakthrough paper concerned just how fast entropy increases before reaching the totally disordered state. The Boltzmann equation has straightforward applications, such as in aeronautical engineering, to determine what happens to planes when they fly through gases. Its usefulness is what first appealed to Villani when he embarked on his PhD. But as he became more intimate with the equation, its beauty seduced him. He compares it to a Michelangelo sculpture: “Not pure and ethereal and elegant, but very human, very tortured, with the strength of the energy of the world. In the equation you can hear the roar of the particles, full of fury.” He added that he prefers to spend years studying well-known equations, trying to find new insights into them, rather than inventing new concepts. “It’s what I like, and it’s part of a general attitude that says, ‘Hey, guys! High-energy physics, the Higgs boson, string theory or whatever – it may all be fascinating, but remember we still don’t understand Newtonian mechanics.’ There are still many, many open problems.” He showed me a PDE in a book. “Does this equation have smooth solutions? Nobody in hell knows that!” He shrugged his shoulders, his forehead criss-crossed with lines.

The remainder of this article is exclusive to Cosmos subscribers.

Alex Bellos is the author of Alex’s Adventures in Numberland and Alex Through the Looking Glass. He writes a maths blog for The Guardian. Jeffrey Phillips is an illustrator, storyboard artist and graphic designer.
Copenhagen Interpretation of Quantum Mechanics

The idea that there was a Copenhagen way of thinking was christened as the "Kopenhagener Geist der Quantentheorie" (the Copenhagen spirit of quantum theory) by Werner Heisenberg in the introduction to his 1930 textbook The Physical Principles of Quantum Theory, based on his 1929 lectures in Chicago (given at the invitation of Arthur Holly Compton).
At the 1927 Solvay conference on physics, entitled "Electrons and Photons," Niels Bohr and Heisenberg consolidated their Copenhagen view as a "complete" picture of quantum physics, despite the fact that they could not, or would not, visualize or otherwise explain exactly what is going on in the microscopic world of "quantum reality." It is a sad fact that Einstein, who had discovered more about the quantum interaction of electrons and photons than any other scientist, was largely ignored or misunderstood at this Solvay conference, even as he again clearly described nonlocality. From the earliest presentations of the ideas of the supposed "founders" of quantum mechanics, Albert Einstein had deep misgivings about the work going on in Copenhagen, although he never doubted the calculating power of their new mathematical methods. He described their work as incomplete because it is based on the statistical results of many experiments and so only makes probabilistic predictions about individual experiments. Einstein hoped to visualize what is going on in an underlying "objective reality." Bohr seemed to deny the existence of a fundamental "reality," but he clearly knew and said that the physical world is largely independent of human observations. In classical physics, the physical world is assumed to be completely independent of the act of observing the world. In quantum physics, Heisenberg said that the result of an experiment depends on the free choice of the experimenter as to what to measure. The quantum world of photons and electrons might look like waves or look like particles depending on what we look for, rather than what they "are" as "things in themselves." The information interpretation of quantum mechanics says there is only one world, the quantum world, and averaging over large numbers of quantum events explains why large objects appear to be classical. Bohr thus put severe epistemological limits on knowing the Kantian "things in themselves," just as Immanuel Kant had put limits on reason.
The British empiricist philosophers John Locke and David Hume had put the "primary" objects beyond the reach of our "secondary" sensory perceptions. In this respect, Bohr shared the positivist views of many other empirical scientists, Ernst Mach for example. Twentieth-century analytic language philosophers thought that philosophy (and even physics) could not solve some basic problems, but only "dis-solve" them by showing them to be conceptual errors. Neither Bohr nor Heisenberg thought that macroscopic objects actually are classical. They both saw them as composed of microscopic quantum objects. The exact location of that transition from the quantum to the classically describable world was arbitrary, said Heisenberg. He called it a "cut" (Schnitt). Heisenberg's and especially John von Neumann's and Eugene Wigner's insistence on a critical role for a "conscious observer" has led to a great deal of nonsense being associated with the Copenhagen Interpretation and with the philosophy of quantum physics. Heisenberg may only have been trying to explain how knowledge reaches the observer's mind. For von Neumann and Wigner, the mind was considered a causal factor in the behavior of the quantum system. Today, a large number of panpsychists, some philosophers, and a small number of scientists still believe that the mind of a conscious observer is needed to cause the so-called "collapse" of the wave function. A relatively large number of scientists opposing the Copenhagen Interpretation believe that there are never any "collapses" in a universal wave function. In the mid-1950s, Heisenberg reacted to David Bohm's 1952 "pilot-wave" interpretation of quantum mechanics by calling his own work the "Copenhagen Interpretation" and the only correct interpretation of quantum mechanics. A significant fraction of working quantum physicists say they agree with Heisenberg, though few have ever looked carefully into the fundamental assumptions of the Copenhagen Interpretation.
This is because they pick out from the Copenhagen Interpretation just the parts they need to make quantum mechanical calculations. Most textbooks start the story of quantum mechanics with the picture provided by the work of Heisenberg, Bohr, Max Born, Pascual Jordan, Paul Dirac, and of course Erwin Schrödinger.

What Exactly Is in the Copenhagen Interpretation?

There are several major components to the Copenhagen Interpretation, which most historians and philosophers of science agree on:
• The quantum postulates. Bohr postulated that quantum systems (beginning with his "Bohr atom" in 1913) have "stationary states" which make discontinuous "quantum jumps" between the states with the emission or absorption of radiation. Until at least 1925 Bohr insisted the radiation itself is continuous. Einstein said radiation is a discrete "light quantum" (later called a photon) as early as 1905. Ironically, many of today's textbooks, largely ignorant of the history of quantum mechanics (dominated by Bohr's account), teach the "Bohr atom" as emitting or absorbing photons, that is, Einstein's light quanta! Also, although Bohr made a passing reference, virtually no one today knows that discrete energy states or quantized energy levels in matter were first discovered by Einstein in his 1907 work on specific heat.
• Wave-particle duality. The complementarity of waves and particles, including a synthesis of the particle-matrix mechanics theory of Heisenberg, Max Born, and Pascual Jordan, with the wave mechanical theory of Louis deBroglie and Erwin Schrödinger. Again ironically, wave-particle duality was first described by Einstein in 1909. Heisenberg had to have his arm twisted by Bohr to accept the wave picture.
• Indeterminacy principle. Heisenberg sometimes called it his "uncertainty" principle, which could imply human ignorance, an epistemological (knowledge) problem rather than an ontological (reality) problem.
Bohr considered indeterminacy as another example of his complementarity, between the non-commuting conjugate variables momentum and position, for example, Δp Δx ≥ h (also between energy and time and between action and angle variables).
• Correspondence principle. Bohr maintained that in the limit of large quantum numbers, the atomic structure of quantum systems approaches the behavior of classical systems. Bohr and Heisenberg both described this case as when Planck's quantum of action h can be neglected. They mistakenly described this as h -> 0. But h is a fundamental constant. The quantum-to-classical transition is when the action of a macroscopic object is large compared to h. As the number of quantum particles increases (as mass increases), large macroscopic objects behave like classical objects. Position and velocity become arbitrarily accurate as h / m -> 0. Δv Δx ≥ h / m. There is only one world. It is a quantum world. Ontologically it is indeterministic, but epistemically, common sense and everyday experience incline us to see it as deterministic. Bohr and Heisenberg insisted we must use classical (deterministic?) concepts and language to communicate our knowledge about quantum processes!
• Completeness. Schrödinger's wave function ψ provides a "complete" description of a quantum system, despite the fact that conjugate variables like position and momentum cannot both be known with arbitrary accuracy, as they can in classical systems. There is less information in the world than classical physics implies. The wave function ψ evolves according to the unitary deterministic Schrödinger equation of motion, conserving that information. When one possibility becomes actual (discontinuously), new information may be irreversibly created and recorded by a measurement apparatus, or simply show up as a new information structure in the world.
By comparison, Einstein maintained that quantum mechanics is incomplete, because it provides only statistical information about ensembles of quantum systems. He also was deeply concerned about nonlocality and nonseparability, things not addressed at all by the Copenhagen interpretation.
• Irreversible recording of information in the measuring apparatus. Without this record (a pointer reading, blackened photographic plate, Geiger counter firing, etc.), there would be nothing for observers to see and to know. Information must come into the universe long before any scientist can "observe" it. In today's high-energy physics experiments and space research, the data-analysis time between the initial measurements and the scientists seeing the results can be measured in months or years. All the founders of quantum mechanics mention the need for irreversibility. The need for positive entropy transfer away from the experiment to stabilize new information (negative entropy) so it can be observed was first shown by Leo Szilard in 1929, and later by Leon Brillouin and Rolf Landauer.
• Classical apparatus?. Bohr required that the macroscopic measurement apparatus be described in ordinary "classical" language. This is a third "complementarity," now between the quantum system and the "classical apparatus." But Born and Heisenberg never said the measuring apparatus is "classical." They knew that everything is fundamentally a quantum system. Lev Landau and Evgeny Lifshitz saw a circularity in this view: "quantum mechanics occupies a very unusual place among physical theories: it contains classical mechanics as a limiting case [correspondence principle], yet at the same time it requires this limiting case for its own formulation."
• Statistical interpretation (acausality). Born interpreted the square modulus of Schrödinger's complex wave function as the probability of finding a particle.
Einstein's "ghost field" or "guiding field," deBroglie's pilot or guide wave, and Schrödinger's wave function as the distribution of the electric charge density were similar views in much earlier years. Born sometimes pointed out that his direct inspiration was Einstein. All the predicted properties of physical systems and the "laws of nature" are only probabilistic (acausal, indeterministic ). All results of physical experiments are statistical. Briefly, theories give us probabilities, experiments give us statistics. Large numbers of identical experiments provide the statistical evidence for the theoretical probabilities predicted by quantum mechanics. Bohr's emphasis on epistemological questions suggests he thought that the statistical uncertainty may only be in our knowledge. They may not describe nature itself. Or at least Bohr thought that we can not describe a "reality" for quantum objects, certainly not with classical concepts and language. However, the new concept of an immaterial possibilities function (pure information) moving through space may make quantum phenomena "visualizable." Ontological acausality, chance, and a probabilistic or statistical nature were first seen by Einstein in 1916, as Born later acknowledged. But Einstein disliked this chance. He and most scientists appear to have what William James called an "antipathy to chance." • No Visualizability?. Bohr and Heisenberg both thought we could never produce models of what is going on at the quantum level. Bohr thought that since the wave function cannot be observed we can't say anything about it. Heisenberg said probability is real and the basis for the statistical nature of quantum mechanics. Whenever we draw a diagram of the waves impinging on the two-slits, we are in fact visualizing the wave function as possible locations for a particle, with calculable probabilities for each possible location. 
Today we can visualize with animations many puzzles in physics, including the two-slit experiment, entanglement, and microscopic irreversibility.
• No Path?. Bohr, Heisenberg, Dirac and others said we cannot describe a particle as having a path. The path comes into existence when we observe it, Heisenberg maintained. (Die “Bahn” entsteht erst dadurch, dass wir sie beobachten.) Einstein's hoped-for "objective reality" was a deeper level of physics in which particles do have paths and, in particular, obey conservation principles, though the intermediate measurements needed to observe this would interfere with the experiments.
Paul Dirac formalized quantum mechanics with these three fundamental concepts, all very familiar and accepted by Bohr, Heisenberg, and the other Copenhageners:
• Axiom of measurement. Bohr's stationary quantum states have eigenvalues with corresponding eigenfunctions (the eigenvalue-eigenstate link).
• Superposition principle. According to Dirac's transformation theory, ψ can be represented as a linear combination of vectors that are a proper basis for the combined target quantum system and the measurement apparatus.
• Projection postulate. The collapse of the wave function ψ, which is irreversible, upon interacting with the measurement apparatus and creating new information.
• Two-slit experiment. A "gedanken" experiment in the 1920s, but a real experiment today, exhibits the combination of wave and particle properties. Note that what the two-slit experiment really shows is
There are many more elements that play lesser roles, some making the Copenhagen Interpretation very unpopular among philosophers of science and spawning new interpretations or even "formulations" of quantum mechanics. Some of these are misreadings or later accretions. They include:
• The "conscious observer." The claim that quantum systems cannot change their states without an observation being made by a conscious observer.
Does the collapse only occur when an observer "looks at" the system? How exactly does the mind of the observer have causal power over the physical world? (the mind-body problem). Einstein objected to the idea that his bed had diffused throughout the room and only gathered itself back together when he opened the bedroom door and looked in. John von Neumann and Eugene Wigner seemed to believe that the mind of the observer was essential, but it is not found in the original work of Bohr and Heisenberg, so should perhaps not be a part of the Copenhagen Interpretation? It has no place in standard quantum physics today.
• The measurement problem, including the insistence that the measuring apparatus must be described classically when it is made of quantum particles. There are actually at least three definitions of the measurement problem.
1. The claim that the two dynamical laws, unitary deterministic time evolution according to the Schrödinger equation and indeterministic collapse according to Dirac's projection postulate, are logically inconsistent. They cannot both be true, it's claimed. The proper interpretation is simply that the two laws apply at different times in the evolution of a quantum object, one for possibilities, the other for actuality (as Heisenberg knew):
• first, the unitary deterministic evolution moves through space exploring all the possibilities for interaction,
• second, the indeterministic collapse randomly (acausally) selects one of those possibilities to become actual.
2. The original concern that the "collapse dynamics" (von Neumann Process 1) is not a part of the formalism (von Neumann Process 2) but is an ad hoc element, with no rules for when to apply it. If there were a deterministic law that predicted a collapse, or the decay of a radioactive nucleus, it would not be quantum mechanics!
3. Decoherence theorists say that the measurement problem is the failure to observe macroscopic superpositions, such as Schrödinger's Cat.
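The two-step picture in definition 1 can be caricatured in a few lines of code: a deterministic phase evolution that leaves the weights of the possibilities untouched, then a random selection of one actuality with Born-rule probability |amplitude|². This is an illustration only, not anyone's formalism.

```python
import cmath, random

state = [1 / 2**0.5, 1j / 2**0.5]    # a superposition of two possibilities

def evolve(state, t):
    """Unitary evolution: here each amplitude just acquires a phase."""
    return [a * cmath.exp(-1j * k * t) for k, a in enumerate(state)]

def collapse(state):
    """Measurement: select one possibility at random, weighted by |a|²."""
    probs = [abs(a) ** 2 for a in state]
    r, total = random.random(), 0.0
    for outcome, p in enumerate(probs):
        total += p
        if r < total:
            return outcome
    return len(state) - 1

state = evolve(state, 1.0)
print([round(abs(a) ** 2, 3) for a in state])   # still [0.5, 0.5]: evolution conserves the weights
print(collapse(state))                          # 0 or 1, acausally
```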
• The many unreasonable philosophical claims for "complementarity": e.g., that it solves the mind-body problem.
• The basic "subjectivity" of the Copenhagen Interpretation. It deals with epistemological knowledge of things, rather than the "things themselves."

Opposition to the Copenhagen Interpretation

Albert Einstein, Louis de Broglie, and especially Erwin Schrödinger insisted on a more "complete" picture: not merely what can be said, but what we can "see," a visualization (Anschaulichkeit) of the microscopic world. But de Broglie's and Schrödinger's emphasis on the wave picture made it difficult to understand material particles and their "quantum jumps." Indeed, Schrödinger and more recent physicists like John Bell and the decoherence theorists H. D. Zeh and Wojciech Zurek deny the existence of particles and the collapse of the wave function, which is central to the Copenhagen Interpretation. Perhaps the main claim of those today denying the Copenhagen Interpretation (and standard quantum mechanics) began with Schrödinger's (and later Bell's) claim that "there are no quantum jumps." Decoherence theorists and others favoring Everett's Many-Worlds Interpretation reject Dirac's projection postulate, a cornerstone of quantum theory.

Heisenberg had initially insisted on his own "matrix mechanics" of particles and their discrete, discontinuous, indeterministic behavior, the "quantum postulate" of unpredictable events that undermines the classical physics of causality. But Bohr told Heisenberg that his matrix mechanics was too narrow a view of the problem. This disappointed Heisenberg and almost ruptured their relationship. But Heisenberg came to accept the criticism, and he eventually endorsed all of Bohr's deep philosophical view of quantum reality as unvisualizable. In his September Como Lecture, a month before the 1927 Solvay conference, Bohr introduced his theory of "complementarity" as a "complete" theory. It combines the contradictory notions of wave and particle.
Since both are required, they complement (and "complete") one another. Although Bohr is often credited with integrating the dualism of waves and particles, it was Einstein who predicted this would be necessary, as early as 1909. But in doing so, Bohr obfuscated further what was already a mysterious picture. How could something possibly be both a discrete particle and a continuous wave? Did Bohr endorse the continuous deterministic wave-mechanical views of Schrödinger? Not exactly, but Bohr's acceptance of Schrödinger's wave mechanics as equal to and complementing Heisenberg's matrix mechanics was most upsetting to Heisenberg. Bohr's Como Lecture astonished Heisenberg by actually deriving the uncertainty principle from the space-time wave picture alone (instead of from Heisenberg's heuristic microscope argument), with no reference to the acausal dynamics of Heisenberg's picture! After this, Heisenberg did the same derivation in his 1930 text and subsequently completely accepted complementarity. Heisenberg spent the next several years widely promoting Bohr's views to scientists and philosophers around the world, though he frequently lectured on his mistaken, but easily understood, argument that looking at particles disturbs them. His microscope is even today included in many elementary physics textbooks. Bohr said these contradictory wave and particle pictures are "complementary" and that both are needed for a "complete" picture. He co-opted Einstein's claim to a more "complete" picture of an "objective" reality, one that might restore simultaneous knowledge of position and momentum, for example. Classical physics has twice the number of independent variables (and twice the information) as quantum physics. In this sense, it does seem more "complete."
Many critics of Copenhagen thought that Bohr deliberately and provocatively embraced logically contradictory notions, of continuous deterministic waves and discrete indeterministic particles, perhaps as evidence of Kantian limits on reason and human knowledge. Kant called such contradictory truths "antinomies." The contradictions only strengthened Bohr's epistemological resolve and his insistence that physics required a subjective view, unable to reach the objective nature of the "things in themselves." As Heisenberg described it in his explanation of the Copenhagen Interpretation:

• Copenhagen Interpretation on Wikipedia
• Copenhagen Interpretation in the Stanford Encyclopedia of Philosophy
• "Copenhagen Interpretation of Quantum Theory", in Physics and Philosophy, Werner Heisenberg, 1958, pp. 44-58
• "The Copenhagen Interpretation", American Journal of Physics, 40, p. 1098, Henry Stapp, 1972
• "The History of Quantum Theory", in Physics and Philosophy, Werner Heisenberg, 1958, pp. 30-43

• Born's statistical interpretation - brings in Schrödinger waves, which upset Heisenberg
• uncertainty principle, March 1927
• complementarity - waves and particles, wave mechanics and matrix mechanics, again upsets Heisenberg
• the two-slit experiment
• measurements, observers "disturb" a quantum system - microscope echo
• loss of causality (Einstein knew), unsharp space-time description (wave-packet)
• classical apparatus, quantum system
• our goal is not to understand reality, but to acquire knowledge (Rosenfeld quote)
• Experimenter must choose either a particle-like or a wave-like experiment - need examples
• For Heisenberg, uncertainty was discontinuity, the intrusion of instruments; for Bohr it was "the general complementary character of description" - wave or particle
• Complementarity is a general framework; Heisenberg's particle uncertainty a particular example
• Einstein/Schrödinger want a field theory and continuous/waves only? Bohr wants sometimes waves, sometimes particles.
Bohr wants always both waves and particles.
• Combines Heisenberg's "free choice" of the experimenter as to what to measure with Dirac's "free choice" of Nature: deterministic evolution of possibilities followed by the discontinuous and random appearance of one actual from all the possibles.
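The wave-particle combination exhibited by the two-slit experiment can be illustrated numerically: superposing the two slit amplitudes gives interference fringes, while adding probabilities (the particle-only picture) gives none. A minimal far-field sketch; the wavelength and slit separation are arbitrary illustrative values, not taken from the text:

```python
import numpy as np

# Far-field amplitudes for two slits separated by d, wavelength lam,
# observed at angle theta: phase difference = 2*pi*d*sin(theta)/lam.
lam = 500e-9          # wavelength (m), illustrative value
d = 2e-6              # slit separation (m), illustrative value
theta = np.linspace(-0.2, 0.2, 2001)

phase = 2 * np.pi * d * np.sin(theta) / lam
A1 = np.exp(1j * 0)            # amplitude through slit 1 (reference phase)
A2 = np.exp(1j * phase)        # amplitude through slit 2
I_wave = np.abs(A1 + A2) ** 2                    # superposed amplitudes: fringes
I_particle = np.abs(A1) ** 2 + np.abs(A2) ** 2   # summed probabilities: flat

print("wave picture, max/min intensity:", I_wave.max(), I_wave.min())
```

The wave picture oscillates between constructive (4x the single-slit intensity) and destructive (near zero) points, while the particle-only sum is flat at 2x everywhere.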
Abraham Meets Abraham from a Parallel Universe

And he [Abraham] lifted up his eyes and looked, and, lo, three men stood over against him… (Gen. 18:2)

On this blog, we often discuss the collapse of the wavefunction as the result of a measurement. This phenomenon is called by some physicists the "measurement problem." There are several reasons why the collapse of the wavefunction, part and parcel of the Copenhagen interpretation of quantum mechanics, is called a problem. Firstly, it does not follow from the Schrödinger equation and is added ad hoc. Secondly, nobody knows how it happens or how long it takes to collapse the wavefunction. This is not to mention that any notion that the collapse of the wavefunction is caused by human consciousness, leading to Cartesian dualism, is anathema to physicists. It is a problem, no matter how you [...]
Monday, February 27, 2017

Questions related to the twistor lift of Kähler action

During the last couple of years a kind of palace revolution has taken place in the formulation and interpretation of TGD. The notion of the twistor lift and the 8-D generalization of twistorialization have dramatically simplified and also modified the view about what classical TGD and quantum TGD are. The notion of adelic physics suggests the interpretation of scattering diagrams as representations of algebraic computations, with diagrams producing the same output from a given input being equivalent. The simplest possible manner to perform the computation corresponds to a tree diagram. As will be found, it is now possible to even propose explicit twistorial formulas for scattering amplitudes, since the horrible problems related to the integration over WCW might be circumvented altogether. From the interpretation of p-adic physics as physics of cognition, heff/h=n could be interpreted as the order of a Galois group. Discrete coupling constant evolution would correspond to phase transitions changing the extension of rationals and its Galois group. TGD inspired theory of consciousness is an essential part of TGD, and the crucial Negentropy Maximization Principle in the statistical sense follows from number theoretic evolution as an increase of the order of the Galois group for the extension of rationals defining the adeles. During the re-processing of the details related to the twistor lift, it became clear that the earlier variant of the twistor lift can be criticized and allows an alternative.
This option led to a simpler view about the twistor lift, to the conclusion that minimal surface extremals of Kähler action represent only the asymptotic situation near the boundaries of CD (external particles in scattering), and also to a re-interpretation of the p-adic evolution of the cosmological constant: the cosmological term would correspond to the entire 4-D action, and the cancellation of Kähler action and cosmological term would lead to the small value of the effective cosmological constant. The pleasant observation was that the correct formulation of the 6-D Kähler action in the framework of adelic physics implies that the classical physics of TGD does not depend on the overall scaling of Kähler action, but that quantum classical correspondence implies this dependence. It is however too early to select between the two options. For details see the new chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix" or the article with the same title. For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Wednesday, February 22, 2017

Questions related to the quantum aspects of twistorialization

The progress in the understanding of the classical aspects of the twistor lift of TGD makes it possible to consider in detail the quantum aspects of the twistorialization of TGD, and for the first time an explicit proposal for the part of scattering diagrams assignable to fundamental fermions emerges.

1. There are several notions of twistor. The twistor space for M4 is T(M4)=M4× S2, having projections to both M4 and to the standard twistor space T1(M4), often identified as CP3. T(M4)=M4× S2 is necessary for the twistor lift of space-time dynamics. CP2 gives the factor T(CP2)=SU(3)/U(1)× U(1) to the classical twistor space T(H). The quantal twistor space T(M8)=T1(M4)× T(CP2) is assignable to momenta.
The possible way out is M8-H duality, relating the momentum space M8 (isomorphic to the tangent space of H) and H by mapping associative and co-associative space-time surfaces in M8 to the corresponding surfaces in H: the construction would reduce to holomorphy, in complete analogy with the original idea of Penrose in the case of massless fields.

2. The standard twistor approach has problems. The twistor Fourier transform reduces to an ordinary Fourier transform only in signature (2,2) for Minkowski space: in this case the twistor space is the real RP3 but can be complexified to CP3. Otherwise a residue integral is required to define the transform (in fact, p-adically multiple residue calculus could provide a nice manner to define integrals and could make sense even at the space-time level, making it possible to define the action). Also the positive Grassmannian requires (2,2) signature. M8-H duality relies on the existence of the decomposition M2⊂ M4= M2× E2⊂ M8. M2 could even depend on position, but M2(x) should define an integrable distribution. There always exists a preferred M2, call it M20, where the 8-momentum reduces to a light-like M2 momentum. Hence one can apply the 2-D variant of the twistor approach. Now the signature is (1,1) and the spinor basis can be chosen to be real! The twistor space is RP3, allowing complexification to CP3 if light-like complex momenta are allowed, as classical TGD suggests!

3. A further problem of the standard twistor approach is that in M4 it does not work for massive particles. In TGD all particles are massless in the 8-D sense. In M8, M4-mass squared corresponds to the transversal momentum squared coming from E4⊂ M4× E4 (from CP2 in H). In particular, the Dirac action cannot contain any mass term, since it would break chiral invariance. Furthermore, the ordinary twistor amplitudes are holomorphic functions of the helicity spinors λi and have no dependence on λ̃i: no information about particle masses!
Only the momentum conserving delta function gives the dependence on masses. These amplitudes would define as such the M4 parts of twistor amplitudes for particles massive in the TGD sense. The simplest 4-fermion amplitude is unique. The twistor approach gives excellent hopes for the construction of the scattering amplitudes in ZEO. The construction would split into two pieces, corresponding to the orbital degrees of freedom in the "world of classical worlds" (WCW) and to spin degrees of freedom in WCW: that is, spinors, which correspond to second quantized induced spinor fields at the space-time surface (actually string world sheets, either at the fundamental level or for the effective action implied by strong form of holography (SH)).

1. At the WCW level there is a perturbative functional integral over small deformations of the 3-surface to which the space-time surface is associated. The strongest assumption is that this 3-surface corresponds to a maximum for the real part of the action and to a stationary phase for its imaginary part: a minimal surface extremal of Kähler action would be in question. A more general but number theoretically problematic option is that an extremal for the sum of Kähler action and volume term is in question. By the Kähler geometry of WCW the functional integral reduces to a sum over contributions from preferred extremals, with the fermionic scattering amplitude multiplied by the ratio Xi/X, where X=∑i Xi is the sum of the action exponentials for the maxima. The ratios of exponents are however number theoretically problematic. Number theoretical universality is satisfied if one assigns to each maximum independent zero energy states: with this assumption ∑i Xi reduces to a single Xi and the dependence on action exponentials becomes trivial! ZEO allows this. The dependence on the coupling parameters of the action, essential for the discretized coupling constant evolution, is only via boundary conditions at the ends of the space-time surface at the boundaries of CD.
Quantum criticality of TGD demands that the sum over loops associated with the functional integral over WCW vanishes, and strong form of holography (SH) suggests that the integral over 4-surfaces reduces to one over string world sheets and partonic 2-surfaces corresponding to preferred extremals, for which the WCW coordinates parametrizing them belong to the extension of rationals defining the adele. Also the intersections of the real and various p-adic space-time surfaces belong to this extension.

2. The second piece corresponds to the construction of the twistor amplitude from fundamental 4-fermion amplitudes. The diagrams consist of networks of light-like orbits of partonic 2-surfaces, whose union with the 3-surfaces at the ends of CD is connected and defines a boundary condition for preferred extremals and at the same time the topological scattering diagram. Fermionic lines correspond to boundaries of string world sheets. The partonic 2-surfaces at which 3 partonic orbits meet are analogs of 3-vertices in the sense of Feynman, and there fermions scatter classically. There is no local 4-vertex. This scattering is assumed to be described by the simplest 4-fermion twistor diagram. These can be fused to form more complex diagrams. Fermionic lines run along the partonic orbits defining the topological diagram.

3. Number theoretic universality suggests that scattering amplitudes have an interpretation as representations of computations. All space-time surfaces giving rise to the same computation would be equivalent, and tree diagrams correspond to the simplest computation. If the action exponentials do not appear in the amplitudes as weights, this could make sense, but it would require a huge symmetry based on two moves. One could glide the 4-vertex at the end of an internal fermion line along the fermion line so that one would eventually get the analog of a self-energy loop, which should allow snipping away.
An argument is developed stating that this symmetry is possible if the preferred M20, for which the 8-D momentum reduces to a light-like M2-momentum with unique direction, is the same along the entire fermion line, which can wander along the topological graph. The vanishing of topological loops would correspond to the closedness of the diagrams in what might be called BCFW homology. The boundary operation involves removal of a BCFW bridge and entangled removal of a fermion pair. The latter operation forces loops. There would be no BCFW bridges, and entangled removal should give zero. Indeed, applied to the proposed four fermion vertex, entangled removal forces it to correspond to forward scattering, for which the proposed twistor amplitude vanishes. To sum up, the twistorial approach leads to a proposal for an explicit construction of scattering amplitudes for the fundamental fermions. Bosons and fermions as elementary particles are bound states of fundamental fermions assignable to pairs of wormhole contacts carrying fundamental fermions at the throats. Clearly, this description is analogous to a quark level description of hadrons. Yangian symmetry with multilocal generators is expected to be crucial for the construction of the many-fermion states giving rise to elementary particles. The problems of the standard twistor approach find a nice solution in terms of M8-H duality, 8-D masslessness, and the holomorphy of twistor amplitudes in λi and their independence of λ̃i. See the new chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix". For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Monday, February 13, 2017

A new view about color, color confinement, and twistors

To my humble opinion, the twistor approach to the scattering amplitudes is plagued by some mathematical problems. Whether this is only my personal problem is not clear (notice that this posting is a corrected version of an earlier one).

1.
As Witten shows, the twistor transform is problematic in signature (1,3) for Minkowski space, since the bi-spinor μ playing the role of momentum is complex. Instead of defining the twistor transform as an ordinary Fourier integral, one must define it as a residue integral. In signature (2,2) for space-time the problem disappears, since the spinors μ can be taken to be real.

2. The twistor Grassmannian approach also works nicely for (2,2) signature, and one ends up with the notion of positive Grassmannians, which are real Grassmannian manifolds. Could it be that something is wrong with the ordinary view about twistorialization, rather than only my understanding of it?

3. For M4 the twistor space should be the non-compact SU(2,2)/SU(2,1)× U(1) rather than CP3=SU(4)/SU(3)× U(1), which it is taken to be. I do not know whether this is only short-hand notation or a signal of a deeper problem.

4. Twistorialization does not force SUSY but strongly suggests it. The super-space formalism allows one to treat all helicities at the same time, and this is very elegant. This however forces Majorana spinors in M4 and breaks fermion number conservation in D=4. LHC does not support N=1 SUSY. Could the interpretation of SUSY be somehow wrong? TGD seems to allow broken SUSY but with separate conservation of baryon and lepton numbers.

In the number theoretic vision something rather unexpected emerges, and I will propose that this might allow one to solve the above problems and even more, to understand color and even color confinement number theoretically. First of all, a new view about color degrees of freedom emerges at the level of M8.

1. One can always find a decomposition M8=M20× E6 so that the complex light-like quaternionic 8-momentum restricts to M20. The preferred octonionic imaginary unit represents the direction of the imaginary part of the quaternionic 8-momentum. The action of G2 on this momentum is trivial. Number theoretic color disappears with this choice.
For instance, this could take place for a hadron but not for partons, which have transversal momenta.

2. One can consider also the situation in which one has localized the 8-momenta only to M4=M20× E2. The distribution for the choices of E2⊂ M20× E2=M4 is a wave function in CP2. Octonionic SU(3) partial waves in the space CP2 of the choices of M20× E2 would correspond to color partial waves in H. The same interpretation is also behind the M8-H correspondence.

3. The transversal quaternionic light-like momenta in E2⊂ M20× E2 give rise to a wave function in transversal momenta. Intriguingly, the partons in the quark model of hadrons have only precisely defined longitudinal momenta, and only the size scale of transversal momenta can be specified. The introduction of the twistor sphere of T(CP2) allows one to describe electroweak charges and brings in the CP2 helicity, identifiable as em charge, giving to the mass squared a contribution proportional to Qem2, so that one could understand the electromagnetic mass splitting geometrically. The physically motivated assumption is that the string world sheets, at which the data determining the modes of induced spinor fields reside, carry vanishing W fields and also a vanishing generalized Kähler form J(M4)+J(CP2). Em charge is the only remaining electroweak degree of freedom. The identification as the helicity assignable to the T(CP2) twistor sphere is natural.

4. In the general case the M2 component of momentum would be massive, and the mass would be equal to the mass assignable to the E6 degrees of freedom. One can however always find an M20× E6 decomposition in which the M2 momentum is light-like. The naive expectation is that the twistorialization in terms of M2 works only if the M2 momentum is light-like, possibly in the complex sense. This however allows only forward scattering: this is true for complex M2 momenta and even in the M4 case. The twistorial 4-fermion scattering amplitude is however holomorphic in the helicity spinors λi and has no dependence on λ̃i.
Therefore it carries no information about the M2 mass! Could M2 momenta be allowed to be massive? If so, twistorialization might make sense for massive fermions! The M20 momentum deserves a separate discussion.

1. A sharp localization of the 8-momentum to M20 means vanishing E2 momentum, so that the action of U(2) would become trivial: the electroweak degrees of freedom would simply disappear, which is not the same thing as having vanishing em charge (the wave function in the T(CP2) twistorial sphere S2 would be constant). Neither M20 localization nor localization to a single M4 (localization in CP2) looks plausible physically - consider only the size scale of CP2. For the generic CP2 spinors this is impossible, but the covariantly constant right-handed neutrino spinor mode has no electro-weak quantum numbers: this would most naturally mean a constant wave function in the CP2 twistorial sphere. For the preferred extremals of the twistor lift of TGD either the M4 or the CP2 twistor sphere can effectively collapse to a point. This would mean the disappearance of the degrees of freedom associated with M4 helicity or electroweak quantum numbers.

2. The localization to M4⊃ M20 is possible for the tangent space of a quaternionic space-time surface in M8. This could correlate with the fact that neither leptonic nor quark-like induced spinors carry color as a spin-like quantum number. Color would emerge only at the level of H and M8 as color partial waves in WCW and would require de-localization in the CP2 cm coordinate for the partonic 2-surface. Note that also the integrable local decompositions M4=M2(x)× E2(x) suggested by the general solution ansätze for the field equations are possible.

3. Could it be possible to perform a measurement localizing the state precisely in a fixed M20, so that the complex momentum is light-like but the color degrees of freedom disappear? This does not mean that the state corresponds to a color singlet wave function!
Can one say that the measurement eliminating color degrees of freedom corresponds to color confinement? Note that the subsystems of the system need not be color singlets, since their momenta need not be complex massless momenta in M20. Classically this makes sense in many-sheeted space-time. Colored states would always be partons in a color singlet state.

4. At the level of H also leptons carry color partial waves, neutralized by Kac-Moody generators, and I have proposed that the pion-like bound states of color octet excitations of leptons explain the so-called lepto-hadrons. Only the right-handed covariantly constant neutrino is an exception, as the only color singlet fermionic state carrying vanishing 4-momentum and living in all possible M20:s, and it might have a special role as a generator of supersymmetry acting on states in all quaternionic sub-spaces M4.

5. Actually, already the p-adic mass calculations performed more than two decades ago forced one to seriously consider the possibility that particle momenta correspond to their projections to M20⊂ M4. This choice does not break Poincare invariance if one introduces a moduli space for the choices of M20⊂ M4, and the selection of M20 could define the quantization axes of energy and spin. If the tips of CD are fixed, they define a preferred time direction assignable to the preferred octonionic real unit, and the moduli space is just S2. The analog of twistor space at the space-time level could be understood as T(M4)=M4× S2, and this one must assume, since otherwise the induction of metric does not make sense.

What happens to the twistorialization at the level of M8 if one accepts that only the M20 momentum is sharply defined?

1. What happens to the conformal group SO(4,2) and its covering SU(2,2) when M4 is replaced with M20⊂ M8? Translations and special conformal transformations each span 2 dimensions; boosts and scalings define the 1-D groups SO(1,1) and R respectively. Clearly, the group is the 6-D group SO(2,2), as one might have guessed.
Is this the conformal group acting at the level of M8, so that conformal symmetry would be broken? One can of course ask whether the 2-D conformal symmetry extends to conformal symmetries characterized by a hyper-complex Virasoro algebra.

2. Sigma matrices are by 2-dimensionality real (σ0 and σ3, essentially representations of the real and imaginary octonionic units), so that spinors can be chosen to be real. Reality is also crucial in signature (2,2), where the standard twistor approach works nicely and leads to a 3-D real twistor space. Now the twistor space is replaced with the real variant of SU(2,2)/SU(2,1)× U(1), equal to SO(2,2)/SO(2,1), which is the 3-D projective space RP3 - the real variant of the twistor space CP3, which leads to the notion of the positive Grassmannian: whether the complex Grassmannian really allows the analog of positivity is not clear to me. For the complex momenta predicted by TGD one can consider the complexification of this space to CP3 rather than SU(2,2)/SU(2,1)× U(1). For some reason the possible problems associated with the signature of SU(2,2)/SU(2,1)× U(1) are not discussed in the literature, and people always talk about CP3. Is there a real problem, or is this indeed something totally trivial?

3. SUSY is strongly suggested by the twistorial approach. The problem is that this requires Majorana spinors, leading to a loss of fermion number conservation. If one has D=2 only effectively, the situation changes. Since spinors in M2 can be chosen to be real, one can have SUSY in this sense without loss of fermion number conservation! As proposed earlier, covariantly constant right-handed neutrino modes could generate the SUSY, but it could also be possible to have SUSY generated by all fermionic helicity states. This SUSY would however be broken.

4. The selection of M20 could correspond at the space-time level to a localization of spinor modes to string world sheets.
Could the condition that the modes of the induced spinors at string world sheets are expressible using a real spinor basis imply the localization? Whether this localization takes place at the fundamental level or only for the effective action due to SH is a question to be settled. The latter option looks more plausible. To sum up, these observations suggest a profound re-evaluation of the beliefs related to color degrees of freedom, to color confinement, and to what twistors really are. For details see the new chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix" or the article Some questions related to the twistor lift of TGD. For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Friday, February 10, 2017

How does the twistorialization at imbedding space level emerge?

One objection against twistorialization at the imbedding space level is that M4-twistorialization requires 4-D conformal invariance and massless fields. In TGD one has towers of particles with massless particles as the lightest states. The intuitive expectation is that the resolution of the problem is that particles are massless in the 8-D sense, as also the modes of the imbedding space spinor fields are. M8-H duality indeed provides a solution of the problem. A massless quaternionic momentum in M8 can, for a suitable choice of the decomposition M8=M4× E4, be reduced to a massless M4 momentum, and one can describe the information about the 8-momentum using an M4 twistor and a CP2 twistor. A second objection is that the twistor Grassmann approach uses as twistor space the space T1(M4)=SU(2,2)/SU(2,1)× U(1), whereas the twistor lift of classical TGD uses T(M4)=M4× S2.
The formulation of the twistor amplitudes in terms of strong form of holography (SH), using the data assignable to 2-D surfaces - string world sheets and partonic 2-surfaces perhaps - identified as surfaces in T(M4)× T(CP2), requires the mapping of these twistor spaces to each other - the incidence relations of Penrose indeed realize this map. For a summary of earlier postings see Latest progress in TGD. Articles and other material related to TGD.

Wednesday, February 08, 2017

Twistor lift and the reduction of field equations by SH to holomorphy

It has become clear that twistorialization has very nice physical consequences. But what is the deep mathematical reason for twistorialization? Understanding this might allow one to gain new insights into the construction of scattering amplitudes, with space-time surfaces serving as analogs of twistor diagrams. Penrose's original motivation for twistorialization was to reduce the field equations for massless fields to holomorphy conditions for their lifts to the twistor bundle. Very roughly, one can say that the value of a massless field in space-time is determined by the values of the twistor lift of the field over the twistor sphere, and the helicity of the massless modes reduces to cohomology and to the values of the conformal weights of the field mode, so that the description applies to all spins. I want to find the general solution of the field equations associated with the Kähler action lifted to the 6-D Kähler action. I would also like to understand strong form of holography (SH). In TGD, fields in space-time are replaced with the imbedding of space-time as a 4-surface in H. The twistor lift imbeds the twistor space of the space-time surface as a 6-surface into the product of the twistor spaces of M4 and CP2. Following Penrose, these imbeddings should be holomorphic in some sense. The twistor lift T(H) means that M4 and CP2 are replaced with their 6-D twistor spaces.

1.
If S2 for M4 has 2 time-like dimensions, one has 3+3 dimensions, and one can speak about hyper-complex variants of holomorphic functions with a time-like and a space-like coordinate paired for all three hypercomplex coordinates. For the Minkowskian regions of the space-time surface X4 the situation is the same.

2. For T(CP2) the Euclidian signature of the twistor sphere guarantees this, and one has 3 complex coordinates corresponding to those of S2 and CP2. One can now also pair two real coordinates of S2 with two coordinates of CP2 to get two complex coordinates. For the Euclidian regions of the space-time surface the situation is the same.

Consider now what the general solution could look like. Let us use the shorthand notations S21=S2(X4), S22=S2(CP2), S23=S2(M4).

1. Consider first a solution of type (1,0), so that the coordinates of S22 are constant. One has holomorphy in the hypercomplex sense (the light-like coordinates t-z and t+z correspond to hypercomplex coordinates).

1. The general map from T(X4) to T(M4) should be holomorphic in the hypercomplex sense. S21 is in turn identified with S23 by an isometry realized in real coordinates. This could also be seen as holomorphy, but with a different imaginary unit. One has an analytical continuation of the map S21→ S23 to a holomorphic map. Holomorphy might allow one to achieve this rather uniquely. The continued coordinates of S21 correspond to the coordinates assignable to the integrable surface defined by E2(x) for the local M2(x)× E2(x) decomposition of the local tangent space of X4. A similar condition holds true for T(M4). This leaves only M2(x) as dynamical degrees of freedom. Therefore only one holomorphic function, defined by 1-D data at the surface determined by the integrable distribution of M2(x), remains. The 1-D data could correspond to the boundary of the string world sheet.

2. The general map from T(X4) to T(CP2) cannot satisfy holomorphy in the hypercomplex sense.
One can however provide the integrable distribution of E2(x) with a complex structure and map it holomorphically to CP2. The map is defined by 1-D data.

3. Altogether, 2-D data determine the map determining the space-time surface. These two pieces of 1-D data correspond to 2-D data given at a string world sheet: one would have SH.

2. What about solutions of type (0,1), making sense in the Euclidian regions of space-time? One has ordinary holomorphy in the CP2 sector.

1. The simplest picture is a direct translation of that for the Minkowskian regions. The map S21 → S22 is an isometry regarded as an identification of real coordinates, but it could also be regarded as holomorphy with a different imaginary unit. The real coordinates can be analytically continued to complex coordinates on both sides, and their imaginary parts define coordinates for a distribution of transversal Euclidian spaces E22(x) on the X4 side and E2(x) on the M4 side. This leaves 1-D data.

2. What about the map to T(M4)? It is possible to map the integrable distribution E22(x) to the corresponding distribution for T(M4) holomorphically in the ordinary sense of the word. One has 1-D data. Altogether one has 2-D data, and the partonic 2-surfaces could carry these data. One has SH again.

3. The above construction works also for the solutions of type (1,1), which might make sense in Euclidian regions of space-time. It is however essential that the spheres S22 and S23 have real coordinates.

SH would thus emerge automatically from the twistor lift and holomorphy in the proposed sense.

1. Two possible complex units appear in the process. This suggests a connection with the quaternion analytic functions suggested as an alternative way to solve the field equations. Space-time surface as an associative (quaternionic) or co-associative (co-quaternionic) surface is a further solution ansatz. Also the integrable decompositions M2(x)×E2(x) resp. E21(x)×E22(x) for the Minkowskian resp.
Euclidian space-time regions are highly suggestive and would correspond to a foliation by string world sheets and partonic 2-surfaces. This expectation conforms with the number theoretically motivated conjectures.

2. The foliation gives good hopes that the action indeed reduces to an effective action consisting of an area term plus a topological magnetic flux term for suitably chosen stringy 2-surfaces and partonic 2-surfaces. One should understand whether one must choose the string world sheets to be Lagrangian surfaces for the Kähler form, including also the M4 term. The minimal surface condition could select the Lagrangian string world sheet, which should also carry vanishing classical W fields in order that the spinor modes can be eigenstates of em charge. The points representing intersections of string world sheets with partonic 2-surfaces, defining punctures, would represent positions of fermions at partonic 2-surfaces at the boundaries of CD, and these positions should be able to vary. Should one allow also non-Lagrangian string world sheets, or does the space-time surface depend on the choice of the punctures carrying fermion number (quantum classical correspondence)?

3. The alternative option is that any choice of the preferred 2-surfaces produces the same scattering amplitudes. Does this mean that the string world sheet area is constant in the foliation - perhaps too strong a condition - or could the topological flux term compensate for the change of the area? The selection of string world sheets and partonic 2-surfaces could indeed be only a gauge choice. I have considered this option earlier and proposed that it reduces to a symmetry identifiable as a U(1) gauge symmetry for the Kähler function of WCW, allowing the addition to the Kähler action of the real part of a complex function of WCW complex coordinates. The additional term in the Kähler action would compensate for the change of the string world sheet action in SH.
For complex Kähler action it could mean the addition of the entire complex function.

Tuesday, February 07, 2017

Mystery: How Was Ancient Mars Warm Enough for Liquid Water?

The article Mars Mystery: How Was Ancient Red Planet Warm Enough for Liquid Water? tells about a mystery related to the ancient presence of water at the surface of Mars. It is now known that the surface of Mars was once covered with rivers, streams, ponds, lakes and perhaps even seas and oceans. This forces one to consider the possibility that there was once also life on Mars - and that there might be still. There is however a problem. The atmosphere probably contained hundreds of times less carbon dioxide than needed to keep it warm enough for liquid water to last. Yet the signatures of flowing water are there. Here is one more mystery to resolve. Around 2014 I proposed a TGD version of the Expanding Earth Hypothesis (EEH), stating that Earth experienced a geologically fast expansion period in its past. The radius of the Earth's space-time sheet would have increased by a factor of two from its earlier value. Either the p-adic length scale or heff/h=n for the space-time sheet of Earth, or both, would have increased by a factor of 2. This violent event led to the burst of the underground seas of Earth to the surface, with the consequence that the rather highly developed lifeforms which had evolved in these reservoirs, shielded from cosmic rays and UV radiation, burst to the surface: the outcome was what is known as the Cambrian explosion. This apparent popping of advanced lifeforms out of nowhere explains why the earlier, less developed forms of these complex organisms have not been found as fossils. I have discussed the model for how life could have evolved in underground water reservoirs here.
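The simple scalings behind the figures in this post can be checked directly. A minimal sketch (my own illustration, not from the post): surface gravity g = G·M/R^2 falls by a factor 4 when the radius doubles at constant mass, and if the planet's angular momentum (proportional to M·R^2·ω) were conserved during the expansion - an assumption the post does not state - the rotation period, proportional to R^2, would grow by the same factor 4.

```python
# Scalings implied by a radius doubling at constant mass (illustrative).
R_ratio = 2.0                       # R_after / R_before

g_ratio = 1 / R_ratio**2            # g = G*M/R^2  ->  falls to 1/4
period_ratio = R_ratio**2           # P ~ R^2 if L ~ M*R^2*omega is conserved
day_before_h = 6                    # quoted pre-expansion day length, hours

print(g_ratio)                      # 0.25
print(day_before_h * period_ratio)  # 24.0 -> the present day length in hours
```

Both ratios come out consistent with the 1/4 gravity reduction and the 6-hour pre-Cambrian day quoted in the post.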
The geologically fast weakening of the gravitational force by a factor 1/4 at the surface explains the emergence of gigantic life forms like dinosaurs and even giant crabs. Continents were formed: before this the crust was like the surface of Mars now. The original motivation of EEH was indeed the observation that the continents of the recent Earth seem to fit nicely together if the radius were smaller by a factor 1/2. This is just a step further than Wegener went in his time. The model explains many other difficult-to-understand facts and forces one to give up the Snowball Earth model. The recent view about Earth before the Cambrian explosion is very different from that provided by EEH. The period of rotation of Earth was 4 times shorter than now - 6 hours - and this would be visible in the physiology of the organisms of that time. Whether it could have left remnants in the physiology and behavior of recently living organisms is an interesting question. What about Mars? Mars now is very similar to Earth before the expansion. Its radius is one half of Earth's now and therefore the same as the radius of Earth before the Cambrian explosion! Mars is near Earth, so that its distance from the Sun is not very different. Could the recent Mars also contain complex life forms in water reservoirs in its interior? Could Mother Mars (or perhaps Martina, if the red planet is not the masculine warrior but a pregnant mother) give rise to their birth? The water that has appeared at the surface of Mars could have been a temporary leakage. An interesting question is whether the appearance of water might correspond to the same event that increased the radius of Earth by a factor of two. Magnetism is important for life in TGD based quantum biology. A possible problem is posed by the very weak recent value of the magnetic field of Mars. The value of the dark magnetic field Bend of Earth, deduced from the findings of Blackman about the effects of ELF em fields on the vertebrate brain, has a strength which is 2/5 of the nominal value of BE.
Hence the dark MBs of living organisms, perhaps integrating to the dark MB of Earth, seem to be entities distinct from the MB of Earth. Could also Mars have dark magnetic fields? Schumann resonances might be important for the collective aspects of consciousness. In the simplest model for Schumann resonances the frequencies are determined solely by the radius of the planet and would for Mars be 2 times those of Earth now. The frequency of the lowest Schumann resonance would be 15.6 Hz. For background see the chapters Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life and More Precise TGD Based View about Quantum Biology and Prebiotic Evolution of "Genes and Memes".

Monday, February 06, 2017

Chemical qualia as number theoretical qualia?

Certain FB discussions led to a realization that the chemical senses (perception of odours and tastes) might actually be, or at least include, number theoretical sensory qualia providing information about the distribution of Planck constants heff/h=n, identifiable as the order of the Galois group for the extension of rationals characterizing the adeles. See the article Chemical qualia as number theoretical qualia?.

Thursday, February 02, 2017

Anomaly in neutron lifetime as evidence for the transformation of protons to dark protons

I found a popular article about a very interesting finding related to the neutron lifetime (see this). The neutron lifetime turns out to be about 8 seconds shorter when measured by looking at what fraction of neutrons disappears via decays in a box than when measured by counting the number of protons produced in beta decays for a neutron beam travelling through a given volume. The lifetime of the neutron is about 15 minutes, so the relative lifetime difference is about 8/(15×60) ≈ 0.9 per cent.
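The quoted fraction follows directly from the two numbers above; a one-line check:

```python
# Relative bottle-vs-beam neutron lifetime discrepancy, using the figures
# quoted above: about 8 s out of a roughly 15 min lifetime.
lifetime_s = 15 * 60          # neutron lifetime, seconds
discrepancy_s = 8             # bottle lifetime shorter by about 8 s

fraction = discrepancy_s / lifetime_s
print(f"{fraction:.2%}")      # 0.89%
```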
The statistical significance is 4 sigma; 5 sigma is accepted as the significance for a finding to qualify as a discovery. How could one explain the finding? The difference between the methods is that the beam experiment measures only the disappearances of neutrons via beta decays producing protons, whereas the box measurement detects the outcome of all possible decay modes. The experiment suggests two alternative explanations.

1. Neutron has some other decay mode or modes which are not detected in the box method, since one measures the number of neutrons in the initial and final state. For instance, in the TGD framework one could think that neutrons transform to dark neutrons at some rate. But it is extremely improbable that this rate would be just about 1 per cent of the decay rate. Why not one millionth? Beta decay must be involved in the process. Could some fraction of neutrons decay to a dark proton, electron and neutrino - a mode that would not be detected in the beam experiment? No, if one takes seriously the basic assumption that particles with different values of heff/h=n do not appear in the same vertex. The neutron should first transform to a dark neutron, but then the disappearance could also take place without the beta decay, and the discrepancy would be much larger.

2. The proton produced in the ordinary beta decay of the neutron can however transform to a dark proton not detected in the beam experiment! This would automatically predict that the rate is some reasonable fraction of the beta decay rate. About 1 per cent of the resulting protons would transform to dark protons. This makes sense!

What is so nice is that the transformation of protons to dark protons is indeed the basic mechanism of TGD inspired quantum biology! For instance, it would occur in the Pollack effect, in which irradiation of water bounded by a gel phase generates a so-called exclusion zone, which is negatively charged.
The TGD explanation is that some fraction of the protons transforms to dark protons at magnetic flux tubes outside the system. The negative charge of DNA and of the cell could be due to this mechanism. One also ends up with a model of the genetic code with the analogs of DNA, RNA, tRNA and amino-acids represented as triplets of dark protons. The model predicts correctly the numbers of DNA codons coding for a given amino-acid. Besides biology, the model has applications to cold fusion and various free energy phenomena. See the article Two different lifetimes for neutron as evidence for dark protons and the chapter New Particle Physics Predicted by TGD: Part I.

Why metabolism and what happens in bio-catalysis?

The TGD view about dark matter gives also a strong grasp of metabolism and bio-catalysis - the key elements of biology.

Why is metabolic energy needed?

The simplest and at the same time most difficult question that an innocent student can ask in biology class is: "Why must we eat?". Or, using more physics oriented language: "Why must we get metabolic energy?". The answer of the teacher might be that we do not eat to get energy but to get order. The stuff that we eat contains ordered energy: we eat order. But order in standard physics is lack of entropy, lack of disorder. The student could get nosy and argue that excretion produces the same outcome as eating but is not enough to survive. We could go to a deeper level and ask why metabolic energy is needed in biochemistry. Suppose we do this in the TGD Universe with dark matter identified as phases characterized by heff/h=n.

1. Why would metabolic energy be needed? The intuitive answer is that evolution requires it and that evolution corresponds to the increase of n=heff/h.
To see the answer to the question, notice that the energy scale for the bound states of an atom is proportional to 1/h^2, and for a dark atom to 1/heff^2 ∝ 1/n^2 (do not confuse this n with the integer n labelling the states of the hydrogen atom!).

2. Dark atoms have smaller binding energies, and their creation by a phase transition increasing the value of n demands a feed of energy - metabolic energy! If the metabolic energy feed stops, n is gradually reduced. The system gets tired, loses consciousness, and eventually dies. Also in the case of cyclotron energies the positive cyclotron energy is proportional to heff, so that metabolic energy is needed to generate larger heff and the prerequisites for negentropy. In this case one would have very long range negentropic entanglement (NE), whereas dark atoms would correspond to short range NE corresponding to a lower evolutionary level. These entanglements would correspond to gravitational and electromagnetic quantum criticality. What is remarkable is that the scale of atomic binding energies decreases with n only in dimension D=3. In other dimensions it increases, and in D=4 one cannot even speak of bound states! This can easily be found by a study of the Schrödinger equation for the analog of the hydrogen atom in various dimensions. Life based on metabolism seems to make sense only in spatial dimension D=3. Note however that there are also other quantum states than atomic states, with a different dependence of energy on heff.

3. The analogy of the weak form of NMP following from mere adelic physics makes it analogous to the second law. Could one consider the purely formal generalization of dE = TdS - ... to dE = -TdN - ..., where E refers to metabolic energy and N refers to entanglement negentropy? No: the situation is different. The system is not a closed system; N is not the negative of the thermodynamical entropy S; and E is the metabolic energy fed to the system, not the system's internal energy. dE = TdN - ...
might however make sense for a system to which metabolic energy is fed. Note that the identification of N is still open: N could be identified as N = ∑p Np - S, where one has a sum of p-adic entanglement negentropies and the real entanglement entropy S, or as N = ∑p Np. For the first option one would have N=0 for rational entanglement and N>0 for extensions of rationals. Could rational entanglement be interpreted as that associated with dead matter?

4. Bio-catalysis and the ATP → ADP process need not require metabolic energy. A transfer of negentropy from nutrients via ATP to the acceptor molecule would be in question. Metabolic energy would be needed to reload ADP with negentropy to give ATP, using ATP synthase as a mitochondrial power plant. Metabolites could be carriers of dark atoms of this kind, possibly carrying also NE. They could also carry NE associated with the dark cyclotron states as suggested earlier; in this case the value of heff=hgr would be much larger than in the case of dark atoms.

Conditions on bio-catalysis

Bio-catalysis is a key mechanism of biology, and its extreme efficacy remains to be understood. Enzymes are proteins and ribozymes RNA sequences acting as biocatalysts. What does catalysis demand?

1. Catalyst and reactants must find each other. How this could happen is very difficult to understand in standard biochemistry, in which living matter is seen as a soup of biomolecules. I have already considered the mechanisms making it possible for the reactants to find each other. For instance, in the translation of mRNA to protein, tRNA molecules must find their way to mRNA at the ribosome. The proposal is that reconnection allows U-shaped magnetic flux tubes to reconnect to a pair of flux tubes connecting the mRNA and tRNA molecules, and that a reduction of the value of heff=n×h, inducing a reduction of the length of the magnetic flux tube, takes care of this step. This applies also to DNA transcription, DNA replication, and bio-chemical reactions in general.

2.
Catalyst must provide energy for the reactants (their number is typically two) to overcome the potential wall making the reaction rate very slow for energies around thermal energy. The TGD based model for the hydrino atom - a state with larger binding energy than the hydrogen atom, claimed by Randell Mills - suggests a solution. Some hydrogen atom in the catalyst goes from the (dark) hydrogen atom state to a hydrino-like state (a state with smaller heff/h) and liberates the excess binding energy, kicking either reactant over the potential wall so that the reaction can proceed. After the reaction the catalyst returns to the normal state and absorbs the binding energy.

3. In the reaction volume, catalyst and reactants must be guided to the correct places. The simplest model of catalysis relies on the lock-and-key mechanism. The generalized Chladni mechanism forcing the reactants to a two-dimensional closed nodal surface is a natural candidate to consider. There are also additional conditions. For instance, the reactants must have the correct orientation, and this could be forced by the interaction with the em field of the ME involved with the Chladni mechanism.

4. One must also have coherence of chemical reactions, meaning that the reaction can occur in a large volume - say in different cell interiors - simultaneously. Here MB would induce the coherence by using MEs. The Chladni mechanism might explain this if there is interference of the forces caused by periodic standing waves, themselves represented as pairs of MEs.

Phase transition reducing the value of heff/h=n as a basic step in bio-catalysis

The hydrogen atom allows also large heff/h=n variants with n>6, with the scale of the energy spectrum behaving as (6/n)^2 if n=6 holds true for visible matter. The reduction of n as the flux tube contracts would liberate binding energy, which could be used to promote the catalysis. The notion of the high energy phosphate bond is a somewhat mysterious concept.
There are claims that there is no such bond. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to a state with a smaller value of heff/h and liberate the excess binding energy? Could the phosphorylation of the acceptor molecule transfer this dark atom, associated with the phosphate of ATP, to the acceptor molecule? Could the mysterious high energy phosphate bond correspond to the dark atom state? Metabolic energy would be needed to transform ADP to ATP and would generate the dark atom. Could solar light kick atoms into dark states and in this manner store metabolic energy? Could nutrients carry these dark atoms? Could this energy be liberated as the dark atoms return to ordinary states, and be used to drive protons against the potential gradient through ATP synthase - analogous to the turbine of a power plant - transforming ADP to ATP and reproducing the dark atom and thus the "high energy phosphate bond" in ATP? Can one see metabolism as a transfer of dark atoms? Could possible negentropic entanglement disappear and emerge again after ADP→ATP? Here it is essential that the energies of the hydrogen atom depend on hbar_eff = n×hbar as hbar_eff^m with m = -2 < 0. A hydrogen-like atom in dimension D has a Coulomb potential behaving as 1/r^(D-2) by the Gauss law, and the Schrödinger equation predicts for D ≠ 4 that the energies satisfy E_n ∝ (heff/h)^m with m = 2 + 4/(D-4). For D=4 the formula breaks down, since in this case the dependence on hbar is not given by a power law. m is negative only for D=3, where one has m=-2. Thus D=3 would be the unique dimension allowing the hydrino-like states making possible bio-catalysis and life in the proposed scenario. It is also essential that the flux tubes are radial flux tubes in the Coulomb field of the charged particle.
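The dimension dependence quoted above is easy to tabulate. A small sketch (the formula m = 2 + 4/(D-4) is taken from the text; D = 4 is excluded since the dependence is not a power law there):

```python
# Exponent m in E_n ~ (h_eff/h)^m for a hydrogen-like atom in D spatial
# dimensions: m = 2 + 4/(D - 4), valid for D != 4.
def exponent(D: int) -> float:
    if D == 4:
        raise ValueError("no power-law dependence for D = 4")
    return 2 + 4 / (D - 4)

for D in (1, 2, 3, 5, 6):
    print(D, exponent(D))
# Only D = 3 gives m < 0 (m = -2): a binding-energy scale decreasing with
# h_eff, the property the metabolism argument relies on.
```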
This makes sense in many-sheeted space-time: electrons would be associated with a pair formed by a flux tube and the 3-D atom, so that only part of the electric flux would interact with the electron touching both space-time sheets. This would give the analog of the Schrödinger equation in a Coulomb potential restricted to the interior of the flux tube. Dimensional analysis for the 1-D Schrödinger equation with Coulomb potential would give also in this case a 1/n^2 dependence. The same applies to states localized at 2-D sheets with a charged ion in the center. These kinds of states bring in mind Rydberg states of the ordinary atom with a large value of n. The condition that the dark binding energy is above the thermal energy gives a condition on the value of heff/h=n: n ≤ 32. The size scale of the largest allowed dark atom would be about 100 nm, 10 times the thickness of the cell membrane. For details see the chapter Quantum criticality and dark matter.

Wednesday, February 01, 2017

Further details related to the induction of twistor structure

The notion of the twistor lift of TGD (see this and this) has turned out to have powerful implications for the understanding of the relationship of TGD to general relativity. The meaning of the twistor lift has really remained somewhat obscure. There are several questions to be answered. What does one mean with twistor space? What does the induction of the twistor structure of H=M4×CP2 to that of the space-time surface, realized as its twistor space, mean? In TGD one replaces the imbedding space H=M4×CP2 with the product T=T(M4)×T(CP2) of their 6-D twistor spaces, and calls T(H) the twistor space of H. For CP2 the twistor space is the flag manifold T(CP2)=SU(3)/U(1)×U(1) consisting of all possible choices of quantization axes of color isospin and hypercharge.

1.
The basic idea is to generalize Penrose's twistor program by lifting the dynamics of space-time surfaces as preferred extremals of Kähler action to that of the 6-D Kähler action in the twistor space T(H). The conjecture is that the field equations reduce to the condition that the twistor structure of the space-time surface as a 4-manifold is the twistor structure induced from T(H). Induction requires that dimensional reduction occurs, effectively eliminating the twistor fiber S2(X4) from the dynamics. Space-time surfaces would be preferred extremals of the 4-D Kähler action plus a volume term having an interpretation in terms of a cosmological constant. The twistor lift would be more than a mere alternative formulation of TGD.

2. The reduction would take place as follows. The 6-D twistor space T(X4) has S2 as fiber and can be expressed locally as a Cartesian product of a 4-D region of space-time and of S2. The signature of the induced metric of S2 should be space-like or time-like depending on whether the space-time region is Euclidian or Minkowskian. This suggests that the twistor sphere of M4 is time-like, as also the standard picture suggests.

3. The twistor structure of the space-time surface is induced to the allowed 6-D surfaces of T(H), which as twistor spaces T(X4) must have a fiber space structure with S2 as fiber and the space-time surface X4 as base. The Kähler form of T(H), expressible as a direct sum J(T(H)) = J(T(M4)) ⊕ J(T(CP2)), induces as its projection the analog of a Kähler form in the region of T(X4) considered. There are physical motivations (CP breaking, matter-antimatter asymmetry, the well-definedness of em charge) to consider the possibility that also M4 has a non-trivial symplectic/Kähler form, obtained as a generalization of the ordinary symplectic/Kähler form (see this). This requires the decomposition M4=M2×E2 such that M2 has hypercomplex structure and E2 complex structure. This decomposition might even be local, with the tangent spaces M2(x) and E2(x) integrating to locally orthogonal 2-surfaces.
These decompositions would define what I have called a Hamilton-Jacobi structure (see this). This would give rise to a moduli space of M4 Kähler forms, allowing besides covariantly constant self-dual Kähler forms with the decomposition (m0,m3) and (m1,m2) also more general self-dual closed Kähler forms assignable to integrable local decompositions. One example is a spherically symmetric stationary self-dual Kähler form corresponding to the decomposition (m0,rM) and (θ,φ), suggested by the need to get spherically symmetric minimal surface solutions of the field equations. Also the decomposition of Robertson-Walker coordinates to (a,r) and (θ,φ), assignable to the light-cone M4+, can be considered. The moduli space giving rise to the decomposition of WCW into sectors would be finite-dimensional if the integrable 2-surfaces defined by the decompositions correspond to orbits of subgroups of the isometry group of M4 or CD. This would allow planes of M4, and radial half-planes and spheres of M4 in spherical Minkowski coordinates and of M4+ in Robertson-Walker coordinates. These decompositions could relate to the choices of measured quantum numbers inducing symmetry breaking to the subgroups in question. These choices would choose a sector of WCW (see this) and would define the quantum counterpart of a choice of quantization axes, as distinct from an ordinary state function reduction with chosen quantization axes.

4. The induced Kähler form of the S2 fiber of T(X4) is assumed to reduce to the sum of the Kähler forms induced from the S2 fibers of T(M4) and T(CP2). This requires that the projections of the Kähler forms of M4 and CP2 to S2(X4) are trivial. Also the induced metric is assumed to be a direct sum, and a similar condition holds true. These conditions are analogous to those occurring in dimensional reduction. Denote the radii of the spheres associated with M4 and CP2 by RP=klP and R, and the ratio RP/R by ε. Both the Kähler form and the metric are proportional to RP^2 resp.
R^2 and satisfy the defining condition J_kr g^rs J_sl = -g_kl. This condition is assumed to be true also for the induced Kähler form J(S2(X4)). This is the general description. How many solutions to these conditions are obtained? It seems that there are essentially 3 solutions: those for which the projection of the twistor space of the space-time surface to the twistor sphere of either M4 or of CP2 is trivial, and the solution in which the two twistor spheres correspond to each other by a one-to-one isometry (see this).

For details see the chapter How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD? or the article Some questions related to the twistor lift of TGD.
10 The Aharonov–Bohm effect

Consider once more the two-slit experiment with electrons. What happens if we place a thin, long solenoid right behind (or right in front of) the slit plate, between the two slits? If no current is flowing in the solenoid, we observe the usual interference pattern. But if a current is flowing, the interference pattern is shifted sideways (right or left, depending on the direction of the current). This effect could easily have been predicted in the late 1920's, by which time all the necessary physics was in place. Yet by many accounts it was predicted three decades later, in 1959, by Yakir Aharonov and David Bohm.[1] (Actually it was first predicted two decades later,[2] but it made a splash only after the publication of the paper by Aharonov and Bohm. This was followed by an actual demonstration of the effect in less than a year.) Why did it take that long for this effect to be predicted and taken note of? Take another look at the rectangle on the right side of Figure 2.17.1. The flux of the magnetic field through this rectangle, we observed, determines the difference between the actions of the paths A→B→C and A→D→C. Think of these paths as leading from the electron gun G through either L or R to a detector D at the screen: G→L→D and G→R→D. As we glean from Equation 2.17.1, the action for each path depends on the vector potential along the path. The vector potential A (as well as the magnetic field B) associated with a current-bearing solenoid is shown in Figure 3.10.1. If this solenoid is introduced into the setup of the two-slit experiment, the action for the left path (G→L→D) is increased, while the action for the right path (G→R→D) is reduced. Stepping up the current in the solenoid increases the difference between these actions. Since this difference determines the positions of the maxima and minima on the screen, the interference pattern is shifted to the left as a result.
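The size of the shift can be made quantitative. In the standard formulation (not spelled out in this text) the difference between the two path actions equals q·Φ, where Φ is the magnetic flux confined to the solenoid, so the phase difference is q·Φ/ħ and the pattern shifts by one full fringe per flux quantum h/q. A minimal sketch with illustrative field and solenoid values:

```python
# Aharonov-Bohm phase difference between the two electron paths:
# delta_phi = q * Phi / hbar, where Phi = B * pi * r^2 is the magnetic
# flux confined to the solenoid.  B and r below are illustrative.
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
q = 1.602176634e-19       # elementary charge, C

def ab_phase(B_tesla: float, radius_m: float) -> float:
    flux = B_tesla * math.pi * radius_m**2
    return q * flux / hbar

# Even a 1 mT field inside a 10-micron-radius solenoid shifts the
# pattern by many fringes:
print(ab_phase(1e-3, 10e-6) / (2 * math.pi), "fringes")
```

A shift of one fringe per flux quantum is why the effect is observable even though E and B vanish along the electron paths themselves.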
Figure 3.10.1: A solenoid, the current J flowing through it, and the resulting fields A and B.

There are several reasons why it took that long to predict this remarkable effect. For one, classical electrodynamics can be formulated exclusively in terms of the electric field E and the magnetic field B. For another, while the four components of V and A uniquely determine the six components of E and B, they themselves are not unique. E and B are invariant under the substitutions

(3.10.1)   V → V − ∂tα,   Ax → Ax + ∂xα,   Ay → Ay + ∂yα,   Az → Az + ∂zα,

where α is a function of the spacetime coordinates t, x, y, z. On the other hand, it was well known that the Schrödinger equation could accommodate electromagnetic effects only in terms of the potentials V and A, and that the actions associated with loops are also invariant under these substitutions. (Since these substitutions change the actions for G→L→D and G→R→D by equal amounts, the difference between the two actions, which equals the action associated with the loop G→L→D→R→G, remains unchanged.) Yet physicists were unable to disabuse themselves of the notion that E and B are physically real, while the potentials "have no physical meaning and are introduced solely for the purpose of mathematical simplification of the equations," as Fritz Rohrlich wrote.[3] The general idea at the time was that the electromagnetic field is a physical entity in its own right, that it is locally acted upon by charges, that it locally acts on charges, and that it mediates the action of charges on charges by locally acting on itself. At the heart of this notion was the so-called principle of local action, felicitously articulated by DeWitt and Graham[4] in an American Journal of Physics resource letter: physicists are, at bottom, a naive breed, forever trying to come to terms with the "world out there" by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick.
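The parenthetical remark above - that the loop action is invariant under the substitutions (3.10.1) - can be checked numerically: the gradient term added to A contributes nothing to a closed-loop integral, since it telescopes to a difference of α at coinciding endpoints. A small sketch with an arbitrary illustrative gauge function α:

```python
# Numerical check that the gauge term grad(alpha) added to A integrates
# to zero around a closed loop (here the unit square), so loop actions
# are gauge invariant.
import math

def alpha(x, y):
    # arbitrary smooth gauge function (illustrative choice)
    return math.sin(3 * x) * math.exp(y) + x * y

def grad_alpha(x, y, h=1e-6):
    # central finite differences
    dx = (alpha(x + h, y) - alpha(x - h, y)) / (2 * h)
    dy = (alpha(x, y + h) - alpha(x, y - h)) / (2 * h)
    return dx, dy

def loop_integral(n=10000):
    # midpoint-rule line integral of grad(alpha) counterclockwise
    # around the unit square
    total = 0.0
    for i in range(n):
        s = (i + 0.5) / n
        for x, y, tx, ty in ((s, 0.0, 1, 0), (1.0, s, 0, 1),
                             (1 - s, 1.0, -1, 0), (0.0, 1 - s, 0, -1)):
            gx, gy = grad_alpha(x, y)
            total += (gx * tx + gy * ty) / n
    return total

print(abs(loop_integral()) < 1e-6)  # True: the loop action is unchanged
```

The same cancellation is what makes the flux through the loop, rather than A itself, the observable quantity.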
With the notable exception of Roger Boscovich, a Croatian physicist and philosopher who flourished in the 18th century, it does not seem to have occurred to anyone that local action is as unintelligible as the apparent ability of material objects to act where they are not, which the principle of local action has supposedly done away with. (The impression that local action is intelligible rests on such familiar experiences as pulling a rope or pushing a stalled car. As soon as we take a microscopic look at what happens between the pushing hands and the car, we discover that this seemingly local action involves net interatomic or intermolecular repulsive forces acting at a distance.) If it is believed that electromagnetic effects on charges are produced via a continuous sequence of local cause-effect relations, then something like the Aharonov–Bohm effect cannot be foreseen, for the values of E and B along the alternative electron paths can be made arbitrarily small by making the solenoid sufficiently long. And even if A were a physical entity and as such responsible for the Aharonov–Bohm effect, it could not produce it locally, since the local values of A are arbitrary.

1. Aharonov, Y., and Bohm, D. (1959). Significance of electromagnetic potentials in quantum theory. Physical Review 115, 485–491.
2. Ehrenberg, W., and Siday, R.E. (1949). The refractive index in electron optics and the principles of dynamics. Proceedings of the Physical Society B 62, 8–21.
3. Rohrlich, F. (1965). Classical Charged Particles. Addison–Wesley, pp. 65–66.
4. DeWitt, B.S., and Graham, R.N. (1971). Resource letter IQM-1 on the interpretation of quantum mechanics. American Journal of Physics 39, 724–738.
Published in Foundations of Physics, 26 (1996) 1669-1691 Relativistic Quantum Events. Ph. Blanchard¹ and A. Jadczyk² ¹Faculty of Physics and BiBoS, University of Bielefeld, Universitätstr. 25, D-33615 Bielefeld ²Institute of Theoretical Physics, University of Wroclaw, Pl. Maxa Borna 9, PL-50 204 Wroclaw Standard Quantum Theory is inadequate to explain the mechanisms by which potential becomes actual, and is therefore unable to describe the generation of events. Niels Bohr emphasized long ago that the classical part of the world is necessary. John Bell stressed the same point: that ``measurement" cannot even be defined within the Standard Quantum Theory, and he sought a solution within hidden variable theories and his concept of ``beables." Today it is customary to try to explain the emergence of the classical world through a decoherence mechanism due to the ``environment". But, we believe, as it was with the concept of measurement, ``environment" itself cannot be defined within the Standard Quantum Theory. We have proposed a semi-phenomenological solution to this problem by introducing explicitly, from the very beginning, classical degrees of freedom, and by coupling these degrees of freedom, through a Lindblad-type coupling, to the quantum world. The resulting theory we call ``Event-Enhanced Quantum Theory" (EEQT). EEQT allows us to describe an event-generating mechanism for individual quantum systems under continuous observation. The objections of John Bell are met and precise definitions of an ``experiment" and of a ``measurement" have been given within EEQT. However, EEQT is, essentially, a non-relativistic theory. In the present paper we extend the ideas of L.P. Horwitz and C. Piron and we propose a relativistic version of EEQT, with an event-generating algorithm for spin one-half particle detectors. The algorithm is based on the proper time formulation of relativistic quantum theory. 
Although we use indefinite metric, all the probabilities controlling the random process of the detector clicks are non-negative. 1. Introduction. Quantum Theory has significantly changed our perspective on what Reality truly is. Prior to Bohr and Heisenberg, speculations about Reality were in the domain of philosophy and not of physics. For a physicist, it was clear that there was a Reality ``out there", and that physics was about making as precise a description of this reality as possible. This Reality had two kinds of concepts connected to it: static concepts and dynamic concepts. Static concepts were ``objects," ``properties" of these objects, and ``relations" between the objects. Dynamic concepts were those of ``events," defined as ``changes of property" or ``changes of relation." And ``change" was understood as ``change in time". With the advent of Quantum Theory the existence of this kind of simple reality was first questioned and later denied. Even if quantum physicists still used the concept of an ``object," it was no longer possible to speak about the actual properties and relations of these objects. A fortiori ``events" also disappeared from the vocabulary of quantum theory. Instead of objects and events, another concept emerged as the dominant idea - that of ``measurement" or ``observation". There were, however, physicists who were not dominated by the new idea. Some of them, the most prominent in this camp being Albert Einstein and David Bohm, believed that quantum theory is a temporary statistical description of some complex nonlinear substructure - yet to be identified. But opposition also appeared in Quantum Theory's own camp. John A. Wheeler stressed repeatedly [1]: ``No elementary quantum phenomenon is a phenomenon until it is a registered (``observed," ``indelibly recorded") phenomenon." But he did not give a definition of ``being recorded" - and we now understand why: 
Because such a definition could not be given within the orthodox quantum theory. John Bell [2,3] was the first to clearly realize that the concepts of ``measurement" and ``observation" were being used in Quantum Theory to brainwash physicists into believing that these concepts are a part of Quantum Theory, while in reality they belong to a metastructure. He had the courage to say [4]: ``Either the wave function is not everything or it is not right..." and he opted for an extended theory that would contain, in addition to the quantum wave function, classical variables, which he termed ``beables" - for ``being able to be" [5]. These theories were, however, inconsistent. The classical part was acted upon (by the wave function) but there was no back reaction. There was thus no way to falsify these theories, as no new results were predicted that would go beyond those predicted by the textbook recipes of the standard quantum theory. It is our opinion that ``events" are classical in nature. They obey the classical ``yes-no" decision logic. Even if the decisions are based on fuzzy criteria - they are always sharp ``yes-no" decisions. They are points at which choices are being made. Those who adhere to the Many Worlds Interpretation [6] would say: they are points at which the Universe splits into branches. Until we understand the true nature of quantum theory and the true nature of time, these choices must be considered as irreversible. What was done in the past cannot be undone later on. In a series of papers (see [7] and references therein) we have developed a semi-phenomenological theory of events - Event-Enhanced Quantum Theory (EEQT). The theory is based on the linear time of Galileo and Newton. It is a theory of a special kind of irreversible coupling between quantum and classical systems.² Events are defined in this theory as changes of state of the classical subsystem. They are accompanied by quantum jumps - discrete changes of the quantum state. 
In this theory, the Schrödinger equation is replaced by a piecewise deterministic algorithm that simulates the behavior of an individual quantum system coupled to a classical ``measuring device". We believe that the fundamental laws of Nature, that is the laws that Nature herself uses while producing the events of the world ``out there," are based on algorithms rather than on differential equations. These algorithms are discrete and probabilistic. The latter property mirrors the fact that we have to describe an infinitely complex universe with the finite means that are, at present, at our disposal. A brief description of EEQT will be given in the following Section. The simplest model that can be taken as an example for building more sophisticated ones is the cloud chamber model developed in [12,13]. It describes an irreversible coupling between a nonrelativistic quantum particle and an arbitrary number of detectors sensitive to the presence of the particle in localized areas of space. From the very moment EEQT was born, we were aware of the unavoidable difficulties that would arise when the nonrelativistic theory had to be replaced by one in agreement with Einstein's Relativity. The difficulties to be addressed in any attempt at building a relativistic theory of quantum measurement have long been known to the experts (see [14], and for a recent discussion [15]). In a recent paper [16] we anticipated that the solution to this problem would have to involve algorithms that are non-local not only in space but also in time: ``However, if you try to work out a relativistic cloud chamber model (...) the events must be also smeared out in the coordinate time. (...). Nevertheless they can still be sharp in a different ``time", called ``proper time" after Fock and Schwinger." The proper time model of the cloud chamber is presented in Section 3. 
We have chosen a relativistic spin ${1\over2}$ particle, as its proper-time quantum dynamics seems to cause more problems than the spin $0$ case - this is due to the lack of a Lorentz-invariant positive definite scalar product in the spin space. Our solution, presented in Section 3, consists of using positive detector coupling operators. In Section 2 we will recall the nonrelativistic detector model: its standard version, as discussed in Refs. [7,12,13,17], and also its ``proper time" version, so that the transition to the relativistic case is made easier. 2. Nonrelativistic Quantum Events. We reiterate, together with John A. Wheeler: No elementary quantum phenomenon is a phenomenon until it is a registered phenomenon. And physics is about phenomena. The goal of physics is to understand Nature's phenomena; to be able to construct an ``artificial Nature," much in the way the goal of biology is to be able to construct artificial life, and the ultimate goal of the computer sciences is to create artificial intelligence. But to construct artificial Nature we must be able to reproduce, in terms of mathematical symbols, the natural phenomena. In order to do this we first must be able to define within mathematics what a phenomenon is. Or, more precisely, what a ``registered phenomenon" is. For this purpose, we will call an elementary registered phenomenon an event. And now we must define what constitutes an event in terms of the mathematical structure. We believe that it is obviously impossible to do this within the standard mathematical framework of the orthodox quantum theory. The framework must be extended; the theory must be enhanced. And it so happens that the extension is, in fact, only a slight one. Significantly, nothing new is needed that has not already been discussed in the framework of a general algebraic quantum theory. 
The extension we have in mind consists of allowing algebraic quantities that are not in any kind of uncertainty relation to any other quantity, that is, allowing classical (in algebraic terms they are called ``central") quantities. In other words: a phenomenon can be registered, and an event can happen, only if there is a classical subsystem of the given quantum system. It could be said, strictly speaking, that in a pure quantum world nothing could or would ever ``happen." But we see that things do happen in the world out there. This creates an obvious contradiction. One possibility to get us out of the contradiction is to negate events and phenomena, to call them ``illusion," to admit only ``approximate events". This is the road that most quantum physicists of our day are prepared to take. The other possibility is to accept that events do happen and then to introduce explicitly a classical subsystem. In this way, an event is defined as a change of state of this subsystem. This then gives us two further options. One option consists of admitting only mental events. That is to say, events do exist, they do happen, they are not illusions, but they happen only in the mind. This is the option advocated by H. P. Stapp [18,19]. We are taking a less adventurous position by leaving the question ``what is truly and intrinsically classical?" open, or ``to be investigated." Therefore our theory, at the present stage, is semi-phenomenological. That is, we are building models that aim at reflecting some of the mechanisms of Nature, but which always can be improved by including more and more details in the description. A general scheme of EEQT has been described in detail in Ref. [7] (see also [17] for a short version). Here we will specify the scheme to the particular but important case of a particle position detector. We must note that in the literature it is common to encounter the opinion that any quantum measurement can, in the end, be reduced to position measurements. 
Although we do not share this opinion, we believe that modelling particle position detectors is important and allows us to understand the mechanism through which other types of quantum phenomena are being registered as events. 2.1 Particle detectors We consider a nonrelativistic particle on a line. When no detector is switched on, the quantum mechanical wave function $\Psi(x,t)$ representing the quantum state of the particle obeys the Schrödinger equation: \begin{displaymath} i\hbar {{\partial\Psi(x,t)}\over{\partial t}}= -{{\hbar^2}\over{2m}} {{\partial^2\Psi(x,t)}\over {\partial x^2}} + V(x,t)\Psi(x,t) \end{displaymath} (1) with a real potential $V(x,t)$. In particular the norm of the wave function is conserved. Now, according to our philosophy, the Schrödinger equation describes a continuous evolution of ``possibilities". Nothing happens. No phenomena. Now, let us add a particle detector that is coupled to the particle and can ``click" when the particle is ``nearby". How should we describe the coupled particle+detector pair? The detector itself is idealized as a ``yes-no" device. As such it is a two-state classical system. For simplicity we will assume that the detector is at rest with respect to the coordinate system we are working in. When a quantum particle is coupled to a classical detector we are dealing with a hybrid system. Its pure states are described by pairs $(\Psi,\alpha)$, where $\alpha=0,1$ describes the state of the detector. Statistical states of the total system are described by pairs $(\rho_0, \rho_1)$, where $\rho_\alpha$ is a positive operator on $L^2(R)$ such that $Tr (\rho_0)+ Tr(\rho_1)=1$. Now the coupling. We will consider here a detector that switches off after the first click and is no longer coupled to the particle. Other, more general types of particle detectors are described in the references quoted in [7]. The coupling is described by a positive operator $g$. 
For a particle detector that is sensitive to the particle position only, we will take for $g$ a function $g(x)\geq 0$ which describes its spatial sensitivity. One can think of $g(x)$ as a bell-like function localized at the detector position. Thus for a point-like detector the support of $g(x)$ would shrink to a single point. According to the general formalism of EEQT described briefly in Section 1.1 of Ref. [7],³ the Liouville equation describing the time evolution of the statistical state of the total system reads: $\displaystyle \left. \begin{array}{ll} {\dot \rho}_0&=-i[H,\rho_0]-{1\over2}\{\Lambda,\rho_0\}, \\ &\\ {\dot \rho}_1&=-i[H,\rho_1]+g\rho_0 g, \end{array} \right\}$     (2) where $\Lambda=g^2$. It follows from a general theorem proved in [20] that there is a unique Markov process on pure states of the total system that reproduces this evolution of ensembles. It is given by a piecewise deterministic algorithm (PDP of Ref. [7]) that governs the click time of the counter. In our case of a single counter the algorithm reads: PDP Algorithm 1   Suppose that at time $t=0$ the system is described by a (normalized) quantum state vector $\psi_0$ and the counter is off: $\alpha=0$. 
Then choose a uniform random number $p_0\in [0,1]$, and proceed with the continuous time evolution by solving the modified Schrödinger equation \begin{displaymath} {\dot \psi_t}=(-iH-{1\over2}\Lambda )\psi_t \end{displaymath} (3) with the initial wave function $\psi_0$ until $t=t_1$, where $t_1$ is determined by \begin{displaymath}\int_{0}^{t_1} (\psi_t,\Lambda \psi_t ) dt = p_0.\end{displaymath} At $t=t_1$ the counter clicks, that is, its state changes from $\alpha=0$ to $\alpha=1$ and, at the same time, the state vector jumps: \begin{displaymath}\psi_{t_1}\rightarrow\psi_1=g\psi_{t_1}/ \Vert g\psi_{t_1}\Vert.\end{displaymath} The evolution now starts again and obeys the standard unitary Schrödinger equation with the Hamiltonian $H$ - until the counter starts its monitoring again, in which case the continuous evolution is again described by Eq. (3), a new random number $p_1$ is selected, and so on. 2.2 ``Proper time" formulation In this subsection we will give a four-dimensional formulation of the nonrelativistic counter click model. It will essentially be just another formulation of the model above. We will see that in a certain limit it approximates the PDP algorithm of the previous subsection. The Hilbert space we consider is now $L^2(R^2,dx dt)$ and the dynamics will be given by a ``super-Hamiltonian" \begin{displaymath}{\cal H}=H-i{\partial\over{\partial t}}. \end{displaymath} (4) There is an extra time parameter associated with the counter; we will denote it by $\tau$ and call it a ``proper time". The coupling between the quantum particle and the counter is described by a positive operator $G$ on our Hilbert space. The Liouville equation and the PDP algorithm are much the same as above except that $x$ is replaced by $(x,t)$ and $t$ is replaced by $\tau$. Let us now see how this formalism includes the one of the previous subsection. With the notation as in Sec. 
1.1 let us take for the initial state $\Psi_0(x,t)$ a product: \begin{displaymath} \Psi_0(x,t)=\phi(t)\psi_0(x), \end{displaymath} (5) where $\phi(t)$ and $\psi_0(x)$ are square integrable (with respect to $dt$ and $dx$ respectively) and of norm one, and $\Psi_0$ stands for $\Psi_{\tau=0}$. Let us assume that $G$ depends on $x$ only: \begin{displaymath} G(x,t)=g(x). \end{displaymath} (6) The equation (3) for $\Psi(x,t)$ is now replaced by \begin{displaymath} {{\partial\Psi}\over{\partial\tau}}=(-iH-{\Lambda\over 2})\Psi -{{\partial\Psi}\over{\partial t}}, \end{displaymath} (7) which solves to: \begin{displaymath} \Psi_\tau(x,t)=\phi(t-\tau ) e^{(-iH-{\Lambda\over2})\tau }\psi_0 (x). \end{displaymath} (8) It is clear from this that, identifying the coordinate time $t$ with the ``proper time" parameter $\tau$, we have the same inhomogeneous Poisson process governing the detector click as in the previous subsection. Remark. We have chosen the function $g$ to depend only on $x$ and not on time for the reason that in this case the operator $\Lambda$ commutes with $\partial/\partial t$, and so Eq. (7) is easily solved. 3. Relativistic Quantum Events. Let us now consider the relativistic case. The proper time formulation of relativistic quantum mechanics has been considered by many authors. We shall cite here the classical paper by Horwitz and Piron [21]. There are also two review papers, one by Kyprianidis [22] and one by Fanchi [23] (cf. also [24]), where more references can be found. While the case of a spinless particle was relatively straightforward, the case of spin $1/2$ caused interpretational problems because of the lack of a Lorentz-invariant, positive definite scalar product. Several possible ways out of this difficulty have been considered. 
Evans [25] proposes to accept the indefinite scalar product, while in [26,27,28,29] the authors introduce an additional superselection (classical) variable ${\bf n}$ to parametrize a family of Hilbert spaces with positive definite scalar products. The latter approach was then discussed by Horwitz and Arshansky [30], who noticed that the Dirac operator was not Hermitian with respect to the positive definite products. Our model will use an indefinite metric. In fact, according to our philosophy, there is no reason at all why the scalar product should be positive definite. This is because we do not start with the standard quantum mechanical probabilistic interpretation. We derive the interpretation from the coupling. So, the only thing that we have to worry about is that the probability of the detector click be non-negative. And we will see that this is indeed the case. We will take the standard representation of the gamma matrices: \begin{displaymath} \gamma^0=\pmatrix{I&0\cr0&-I\cr},\; \gamma^i=\pmatrix{0&\sigma^i\cr -\sigma^i&0\cr} \end{displaymath} (9) and define an indefinite metric space by \begin{displaymath} <\Psi , \Phi >=\int {\bar \Psi}(x,t)\Phi(x,t) dx dt , \end{displaymath} (10) where ${\bar\Psi}=\Psi^\dagger\gamma^0$. The Dirac matrices are Hermitian with respect to this scalar product, and so is the Dirac operator: \begin{displaymath} {\cal D}= i\gamma^\mu (\partial_\mu+ieA_\mu)-m . \end{displaymath} (11) Let us now consider a particle position detector which, for simplicity, is at rest with respect to the coordinate system. We associate with it the operator $G$ defined by \begin{displaymath} (G\Psi)(x,t)={{I+\gamma_0}\over 2}g(x)\Psi(x,t), \end{displaymath} (12) where $g(x)$ is a positive, bell-like function centered over the detector position.⁴ It follows now that $G$ is positive and Hermitian with respect to the indefinite metric scalar product, and the same holds for $\Lambda=G^2$. 
We postulate the following relativistic version of the PDP algorithm: Relativistic PDP Algorithm 1   Suppose that at proper time $\tau=0$ the system is described by a quantum state vector $\Psi_0$ and the counter is off: $\alpha=0$. Then choose a uniform random number $p\in [0,1]$, and proceed with the continuous time evolution by solving the modified evolution equation \begin{displaymath} {\dot \Psi}_\tau =(-i{{{\cal D}^2}\over{2M}}-{1\over2}\Lambda )\Psi_\tau \end{displaymath} (13) with the initial wave function $\Psi_0$ until $\tau=\tau_1$, where $\tau_1$ is determined by \begin{displaymath}\int_{0}^{\tau_1} (\Psi_\tau ,\Lambda \Psi_\tau ) d\tau = p.\end{displaymath} At $\tau=\tau_1$ the counter clicks, that is, its state changes from $\alpha=0$ to $\alpha=1$ and, at the same time, the state vector jumps: \begin{displaymath}\Psi_{\tau_1}\rightarrow\Psi_1=G\Psi_{\tau_1}/ <\Psi_{\tau_1},G\Psi_{\tau_1}>.\end{displaymath} If, after the first click, the detector is deactivated, then after the click the evolution starts again and obeys the standard unitary Schrödinger equation with the Hamiltonian ${\cal H} = {{{\cal D}^2}\over{2M}}$. The algorithm contains the second-order, proper-time Dirac equation. This equation can be derived geometrically by the method of dimensional reduction along an isotropic Killing vector field in a six-dimensional space of signature $(++++,--)$, in exact analogy to the derivation of the Lévy-Leblond and Pauli equations from a five-dimensional space of signature $(++++,-)$ - cf. Ref. [31]. It is clear from the definition that the algorithm works well and can be repeated. Let us now assume that there are two detectors, both at rest with respect to the coordinate system. They are both coupled to the quantum particle, the coupling operators $G_i, i=1,2$ being given, as before, by: \begin{displaymath} (G_i\Psi)(x,t)={{I+\gamma_0}\over 2}g_i(x)\Psi(x,t), \end{displaymath} (14) where the functions $g_i$ are localized at the detectors. 
The operator $\Lambda$ is now a sum of two contributions: \begin{displaymath} \Lambda=G_1^2+G_2^2. \end{displaymath} (15) The algorithm proceeds as before but now, when the event happens at $\tau=\tau_1$, a decision must be made as to which of the two detectors reacts. The probability $p_i$ that the $i$-th detector is activated is given by the same formula as in the nonrelativistic case (cf. Ref. [7] for a general theory): \begin{displaymath} p_i={{<\Psi_{\tau_1},G_i\Psi_{\tau_1}>}\over{ <\Psi_{\tau_1},G_1\Psi_{\tau_1}>+<\Psi_{\tau_1},G_2\Psi_{\tau_1}>}}. \end{displaymath} (16) Generalization to a larger number of detectors, not necessarily at rest, is straightforward. 4. Final Remarks. We have proposed a relativistic PDP algorithm that allows one to model the behavior of a detector coupled to a relativistic spin ${1\over2}$ particle. Adding more detectors does not cause any difficulties. Our algorithm is repeatable, thus allowing for continuous monitoring of the particle position, as in the nonrelativistic cloud chamber model described in [12,13]. It would be interesting to see how our relativistic event-generating algorithm can be used to test the idea of interference in time as discussed by Arshansky, Horwitz and Lavie in [32]. In the nonrelativistic case there is a dual description: by a continuous-in-time Liouville equation that describes the time evolution of statistical states, and by the PDP Algorithm that simulates Nature's event generation for individual systems. In the relativistic case we have restricted ourselves to the individual description. In fact, at present, we do not know what would be the right mathematical formalism and physical interpretation for a relativistic analogue of the Liouville equation. Formally we can write an equation as in the nonrelativistic case, but now using an indefinite metric space. Some of the relevant mathematics has been developed in the past by one of us [33]. 
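As an aside not in the original paper, the event-generating algorithms above are straightforward to simulate. The following minimal Python sketch (all names and the toy model are our own assumptions) runs the single-detector nonrelativistic PDP of Sec. 2.1 for a two-component state $\psi=(a,b)$ with $H=0$ and detector coupling $g=\mathrm{diag}(0,\gamma)$, so $\Lambda=g^2$ and the click-time integral has the closed form $|b|^2(1-e^{-\gamma^2 t})$ against which the simulation can be checked.

```python
import math

# A minimal sketch of PDP Algorithm 1 (our own illustration, not the
# authors' code): two-component state psi = (a, b), Hamiltonian H = 0,
# detector coupling g = diag(0, gamma), hence Lambda = g^2.
# The damped evolution is psi' = (-iH - Lambda/2) psi, and the counter
# clicks when the accumulated integral of (psi, Lambda psi) reaches p0.

def pdp_click_time(a, b, gamma, p0, dt=1e-4, t_max=50.0):
    """Return the click time t1 with int_0^{t1} (psi, Lambda psi) dt = p0,
    or None if the detector never clicks (p0 exceeds the total click
    probability, which here equals |b|^2)."""
    lam = gamma * gamma                   # Lambda acts on the 2nd component
    psi1 = complex(b)                     # 1st component stays equal to a
    acc, t = 0.0, 0.0
    while t < t_max:
        acc += lam * abs(psi1) ** 2 * dt  # (psi, Lambda psi) = lam |psi_1|^2
        if acc >= p0:
            return t
        psi1 *= 1.0 - 0.5 * lam * dt      # Euler step of psi' = -(Lambda/2) psi
        t += dt
    return None

# With H = 0 the integral is |b|^2 (1 - exp(-gamma^2 t)), so for
# a = b = 1/sqrt(2), gamma = 1, p0 = 0.25 the click occurs at t1 = ln 2.
```

Note that the click probability saturates at $|b|^2$ as $t\to\infty$: the Born rule for this crude measurement emerges from the coupling alone, in line with the paper's claim that the probabilistic interpretation is derived rather than postulated.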
What is yet to be done is to study the nonrelativistic limit of the Relativistic PDP and to see that the Nonrelativistic PDP of EEQT is recovered this way. The ideas given in the papers by Horwitz [34] and Horwitz and Rotbart [35] can be applied to studying such a limit. Other problems that are yet to be investigated are: a quantum field theoretical generalization along the lines of Ref. [13], and an explicit formula for the time-of-click probability for a pointlike detector and a special initial wave packet - as in Ref. [16]. It is to be observed that a quantum field theoretical version will have to involve, as it did for one particle, an indefinite metric. It must be noted that the Relativistic PDP, in cases of more than one detector, involves a non-local decision-making algorithm. Thus, even if the detectors are treated as classical in our approach, there is no local explanation for their behavior. Deciding which of the two detectors will click at the event time involves a random choice based on probabilities that are computed non-locally. How Nature herself is doing this is a big puzzle. A solution to this puzzle must be postponed till a later time, when the very nature and origin of the Planck constant, of space and of time are better understood. One of us (A.J.) would like to thank the A. von Humboldt Foundation for support. He would also like to thank Larry Horwitz for encouragement and discussion, and Laura Knight for reading the manuscript. Wheeler, J.A.: ``Delayed-Choice Experiments and Bohr's Elementary Quantum Phenomenon", in Proc. Int. Symp. Found. of Quantum Mechanics, Tokyo 1983, pp. 140-152 Bell, J.: ``Towards an exact quantum mechanics", in Themes in Contemporary Physics II. Essays in honor of Julian Schwinger's 70th birthday, Deser, S., and Finkelstein, R. J., Ed., World Scientific, Singapore 1989 Bell, J.: ``Against measurement", in Sixty-Two Years of Uncertainty. 
Historical, Philosophical and Physical Inquiries into the Foundations of Quantum Mechanics, Proceedings of a NATO Advanced Study Institute, August 5-15, Erice, Ed. Arthur I. Miller, NATO ASI Series B vol. 226 , Plenum Press, New York 1990 Bell, J: ``Are there quantum jumps?" in Schrödinger, Centenary of a Polymath, Cambridge University Press (1987). Bell, J.S.: ``Beables for quantum field theory", Paper no. 19 in Speakable and unspeakable in quantum mechanics, Cambridge University Press, Cambridge 1987 DeWitt, B., Ed.: ``The many-worlds interpretation of quantum mechanics", Princeton Univ. Press, Princeton 1973 Blanchard, Ph., and Jadczyk, A.: ``Event-Enhanced-Quantum Theory and Piecewise Deterministic Dynamics", Ann. der Physik 4 (1995) 583-599 Primas, H. Chemistry, Quantum Mechanics and Reductionism: Perspectives in Theoretical Chemistry, Springer Verlag, Berlin 1981 Primas, H., The Measurement Process in the Individual Interpretation of Quantum Mechanics, in The Measurement Problem of Quantum Theory, Ed. M. Cini and J.M. Lévy-Leblond, IOP Publ. Ldt. Bristol, 1990 Amman, A.: ``Broken symmetries and the generation of classical observables in large systems", Helv. Phys. Acta 60 (1987), 384-393 Amman, A.: ``Chirality: A superselection rule generated by the molecular environment", J. Molec. Chem. 6 (1991), 1-15 Jadczyk, A.: ``Particle Tracks, Events and Quantum Theory", Progr. Theor. Phys. 93 (1995), 631-646 Jadczyk, A.: ``On Quantum Jumps, Events and Spontaneous Localization Models", Found. Phys. 25 (1995), 743-762 Aharonov, A., and Albert, D.Z.: ``States and observables in relativistic quantum field theories", Phys. Rev. D21 (1980), 3316-3324 Peres, A.: ``Relativistic Quantum Measurements", in: Fundamental Problems of Quantum Theory, Ann. NY. Acad. Sci. 755 (1995) Blanchard, Ph., and Jadczyk, A.: ``Time of Events in Quantum Theory", Preprint BiBoS 720/1/96, e-Print Archive: quant-ph/9602010, to appear in Helv. Phys. 
Acta Blanchard, Ph., and Jadczyk, A.: ``Events and Piecewise Deterministic Dynamics in Event-Enhanced Quantum Theory", Phys. Lett. A203 (1995), 260-266 Stapp, H.P.: Mind, Matter and Quantum Mechanics, Springer Verlag, Berlin 1993 Stapp, H.P.: ``The Integration of Mind into Physics", in opus cited under [15] Jadczyk, A., Kondrat, G., and Olkiewicz, R.: ``On uniqueness of the jump process in quantum measurement theory", Preprint BiBoS 711/12/95, quant-ph/9512002, submitted to J. Phys. A. Horwitz, L.P., and Piron, C.: ``Relativistic Dynamics", Helv. Phys. Acta 46 (1973), 316-326 Kyprianidis, A.: ``Scalar Time Parametrization of Relativistic Quantum Mechanics: the Covariant Schrödinger Formalism", Phys. Rep. 155 (1987), 1-27 Fanchi, J.R.: ``Review of Invariant Time Formulations of Relativistic Quantum Theories", Found. Phys. 23 (1993), 487-548 Fanchi, J.R.: ``Evaluating the Validity of Parametrized Relativistic Wave Equations", Found. Phys. 24 (1994), 543-562 Evans, A.B.: ``Four-Space Formulation of Dirac's Equation", Found. Phys. 20 (1990), 309-335 Horwitz, L.P., Piron, C., and Reuse, F.: ``Relativistic Dynamics for the Spin $1\over2$ Particle", Helv. Phys. Acta 48 (1975), 546-547 Piron, C., and Reuse, F.: ``Relativistic Dynamics for the Spin $1\over2$ Particle", Helv. Phys. Acta 51 (1978), 146-166 Reuse, F.: ``On Classical and Quantum Relativistic Dynamics", Found. Phys. 9 (1979), 865-882 Arensburg, A., and Horwitz, L.P.: ``A First-Order Equation for Spin in a Manifestly Relativistically Covariant Quantum Theory", Found. Phys. 22 (1992), 1025-1039 Horwitz, L.P., and Arshansky, R.: ``On relativistic quantum theory for particles with spin $1\over2$", J. Phys. A 15 (1982), L659-L662 Jadczyk, A.: ``Topics in Quantum Dynamics", in Proc. First Caribb. School of Math. and Theor. Phys., Saint-Francois-Guadeloupe 1993, Infinite Dimensional Geometry, Noncommutative Geometry, Operator Algebras and Fundamental Interactions, ed. 
R. Coquereaux et al., World Scientific, Singapore 1995, hep-th/9406204 Arshansky, L., Horwitz, L.P., and Lavie, Y.: ``Particles vs. Events: The Concatenated Structure of World Lines in Relativistic Quantum Mechanics", Found. Phys. 13 (1983), 1167-1194 Jadczyk, A.: ``Geometry of Indefinite Metric Spaces", Rep. Math. Phys. 2 (1971), 263-276 Horwitz, L.P.: ``On the Definition and Evolution of States in Relativistic Classical and Quantum Mechanics", Found. Phys. 22 (1992), 421-448 Horwitz, L.P., and Rotbart, F.C.: ``Nonrelativistic limit of relativistic quantum mechanics", Phys. Rev. D 24 (1981), 2127-2131 Footnotes: ² Necessity of such a coupling was envisaged already in the works by H. Primas [8,9] and A. Amman [10,11]. ³ Using the notation of Ref. [7] we put $\alpha,\beta=0,1,$ $g_{10}=g, g_{01}=0, \Lambda_0=g^2, \Lambda_1=0$. ⁴ Notice that here, as in the nonrelativistic case, we assume that $g$ depends only on $x$ and not on $t$ - in the coordinate system with respect to which the detector is at rest.
Ah, controversy!  Physics is of course not immune from it, and sometimes the participants in an argument can let anger get the better of them. An example of this began last week, when the following video clip appeared, featuring Professor Brian Cox explaining to a lay audience the Pauli exclusion principle: For reasons that I will try and elaborate on in this post, this short video was, to say the least, eyebrow-raising to me.  Tom over at Swans on Tea picked up on the same video, and wrote a critique of it with the not-quite-polite title, “Brian Cox is Full of **it“, in which he explained his initial critique of the video based on his own knowledge.  I piped in with a comment, Well put. I just saw this clip the other day and it was an eyebrow-raiser, to say the least. I thought I’d mull over the broader implications a bit before writing my own post on the subject, but you’ve addressed it well. A more technical way to put it, if I were to try, is that the Pauli principle applies to the *entire* quantum state of the wavefunction, not just the energy, as Cox seems to imply. This is why we can, to first approximation, have two electrons in the same energy level in an atom: they can have different “up/down” spin states. Since the position of the particle is part of the wavefunction as well, electrons whose spatial wavefunctions are widely separated are also different. Well, apparently being criticized was a bit upsetting for Professor Cox, because he fired off the following angry comment to both Tom and me: “Since the position of the particle is part of the wavefunction as well, electrons whose spatial wavefunctions are widely separated are also different.” What on earth does this mean? What does a wave packet look like for a particle of definite momentum? Come on, this is first year undergraduate stuff. 
I’m glad that you, Tom, don’t need to know about the fundamentals of quantum theory in order to maintain atomic clocks, otherwise we’d have problems with our global timekeeping! So, he basically insults both Tom and me in the course of several paragraphs, without addressing the comments at all, really.  It gets worse.  In addition to me later being referred to as “sensitive” by the obviously sensitive Dr. Cox (cough cough projection cough), he doubles down on his anger by referring on Twitter to the lot of those criticizing him (including Professor Sean Carroll of Cosmic Variance) as “armchair physicists”. Well, there have been a number of responses to Cox’s angry rant, including a response on the physics from Sean Carroll and a further elaboration by Tom on his own case at Swans on Tea.  I felt that I should respond myself, at the very least because I’ve been accused of not understanding “undergraduate physics” myself, but also because the “everything is connected” lecture in my opinion represents a really dangerous path for a physicist to go down. We’ll take a look at this from two points of view; first, I’d like to comment on the style of Cox’s response to criticism, and then on the more important substance of the discussion. First, on the style.  When your response to criticism from research physicists is that they don’t understand undergraduate physics and that they are “armchair physicists”, you’ve basically admitted that you’ve lost the argument*.  Though scientists certainly get into petty spats far too often, typically sparked by research disagreements, it is not considered a good thing.  It is especially bad form for someone who is representing the field in a very public way to whine and name call: it is a very poor showing of what science is supposed to be all about. Okay, let’s get to the substance!  In order to get into the meat of the issue, I should say a few words about the quantum theory, since I don’t discuss it very often on this blog.  Dr. 
Francis talks a bit about one of the issues — entanglement — over at Galileo’s Pendulum, also in reaction to this “controversy”. Up through the late 19th century, “classical” physics served very well in describing the universe.  As researchers started to investigate the behavior of matter on a smaller scale, they began to encounter phenomena that couldn’t be explained by the existing laws, such as the structure of the atom (more on this in my old post here). Many of these issues were spectacularly resolved by the hypothesis that subatomic particles such as the electron and proton are not in fact point-like objects but possess wave-like properties.  This idea was introduced by the French physicist Louis de Broglie in his 1924 doctoral thesis, and it naturally explained such phenomena as the discrete energy levels that electrons in atoms possess.  The wave properties of matter can be demonstrated dramatically by using electrons in a Young’s double slit experiment; the electrons exiting the pair of slits produce a wave-like interference pattern of bright and dark bands, just like light. But this explanation raised a natural and difficult question: what, exactly, is the nature of this electron wave?  An example of the difficulties is provided by the electron double slit experiment.  Individual electrons passing through the slits don’t produce waves; they arrive at discrete and seemingly random points on the detector, like particles.  However, if many, many electrons are sent through the same experiment, one finds that the collection of them forms the interference pattern.  This was shown quite spectacularly in 1976 by an Italian research group**: How do we explain that individual electrons act like particles but many electrons act like waves?  The conventional interpretation is known as the Copenhagen interpretation, and was developed in the mid-1920s.  
In short: the wavefunction of the electron represents the probability of the electron being “measured” with certain properties.  When a property of the electron is measured, such as its position, this wavefunction “collapses” to one of the possible outcomes contained within it.  In the double slit experiment, for instance, a single electron (or, more accurately, its wavefunction) passes through both slits and has a high probability of being detected at one of the “bright” spots of the interference pattern and a low probability of being detected at one of the “dark” spots.  It only takes on a definite position in space when we actually try and measure it. This interpretation is amazingly successful; coupled with the mathematics developed for the quantum theory (the Schrödinger equation, and so forth) it can reproduce and explain the behavior of most atomic and subatomic systems.  However, the wave hypothesis raises many more deep questions!  What, exactly, is a “measurement”?  How does a wavefunction “collapse” on measurement? If all particles are waves, why don’t we see their wave-like (or quantum) properties in our daily lives? Are the properties of a particle truly undetermined before measurement, or are they well-defined but somehow “hidden” from view? This latter question formed the basis of a famous counterargument to the quantum theory called the Einstein-Podolsky-Rosen paradox, published in 1935.  The paradox may be formulated in a number of ways; what follows is a simple model from optics.  By the use of a nonlinear optical material, a photon (light particle) of a given energy can be divided into two photons, each with half the energy of the original, propagating in different directions, by the process of spontaneous parametric down conversion. There is an important additional property of these half-energy photons, however; due to the physics of their creation, they have orthogonal polarizations.  
That is, if the electric field of one photon is oscillating horizontally the other must be oscillating vertically, and vice-versa.  However — and this is the important part — nothing distinguishes between the two photons on creation, and nothing chooses the polarization of one or another.  Just like the position of the electron in Young’s double slit experiment is genuinely undetermined until we measure it, the polarization of the photons is undetermined until we make a measurement.  Nevertheless, there is a connection between the two photons: we don’t know which one has which polarization, but we know for certain that the polarizations are perpendicular.  If we were to look at the photon polarization head-on, we might see something of the form shown below: The photons are said to be entangled; though their specific behavior is undetermined, the physics of their creation still forced a relationship between the two. Here’s where E, P & R felt there was a paradox: suppose we point our photons to opposite ends of the galaxy.  If undisturbed, they remain in this entangled state and can in principle travel arbitrarily far away from one another.  Now suppose we measure the polarization of one of the photons, and find the result is vertical; we’ve collapsed the wavefunction, and we now know with certainty that the other photon, at the other end of the galaxy, must be horizontally polarized.  By measuring the polarization of one photon, we’ve automatically determined the state of the other one; apparently this wavefunction collapse must happen instantaneously, faster than even the speed of light! This idea of entanglement and its “spooky action at a distance” was intended to demonstrate the ridiculousness of the Copenhagen interpretation of the quantum theory, but in fact it has been verified in countless laboratory experiments.  
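The measurement story told above is easy to caricature numerically. Here is a toy sketch of my own (it illustrates the Copenhagen narrative only, not any real experiment or Bell test): the first polarization measurement is a genuine coin flip, and the "collapse" then fixes the partner photon's result to be the orthogonal one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # number of entangled photon pairs

# Which photon is H and which is V is undetermined until measured.
# Measuring photon 1 gives a random result (0 = horizontal, 1 = vertical)...
first = rng.integers(0, 2, size=n)
# ...and the collapse forces photon 2 into the *other* polarization.
second = 1 - first

# Each observer alone sees pure coin flips:
print(first.mean())               # close to 0.5
# ...yet the joint record is perfectly anti-correlated:
print(bool(np.all(first != second)))  # True
```

This also shows, in a crude way, why no message can be sent this way: looking only at her own column of results, each observer sees indistinguishable 50/50 noise; the perfect correlation is visible only after the two records are brought together.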
Furthermore, E, P & R’s counter-explanation — that the polarizations of the photons are well-defined on creation, just “hidden” — has been demonstrated to not be true (though intriguing loopholes remain).  It has also been shown that entanglement is consistent with Einstein’s special relativity.  Although the collapse of the wavefunction can occur instantaneously, it is not possible to transmit any information this way, due (in short) to the random nature of the process. We’ll get to the relevance of entanglement in a moment; we still need one more piece of the puzzle before we can discuss the “everything is connected” video, namely Pauli’s exclusion principle.  As we have noted, the introduction of the quantum theory answered many questions, but raised many more.  Among other things, the quantum theory predicts that electrons exist only in particular special and discrete “orbits” around the nuclei of atoms.  This idea was first introduced in the Bohr model of the atom, as illustrated below: An electron in a hydrogen atom can only exist in certain discrete stable orbits, labeled in this picture by the index n.  Light is emitted from an atom when it drops from a higher energy (outer) orbit to a lower (inner) orbit.  The existence and nature of these discrete orbits is explained by the wave properties of matter: electrons form a “cloud” around the nucleus, rather than orbiting in a well-defined manner. But the wave nature of matter also raises a new problem: electrons are now somewhat “squishy”!  In larger atoms with multiple electrons orbiting the nucleus, it was readily found that only a finite number of electrons can fill each orbital position/energy level.  One is naturally led to wonder why all the electrons don’t just fill the lowest energy state of the atom, the “n=1” state; because the electrons are wavelike and “squishy”, there doesn’t seem to be anything prohibiting this. This was one problem that Wolfgang Pauli (1900-1958) concerned himself with.  
The answer he developed became known as the Pauli exclusion principle: no two identical fermions can occupy the same quantum state.  “Fermions” include electrons, protons and neutrons: the constituent parts of ordinary matter.  Under the Pauli principle, electrons cannot all pile into the ground state of an atom.  Because electrons possess intrinsic angular momentum (“spin”) which can either be “up” or “down”, and this is part of the electron’s quantum state, two electrons can fit in the ground state with the same energy but with different spins. Keep in mind that the Pauli principle applies to the complete state of an electron; this potentially includes its energy, its momentum, its spin, and its position in space.  Any property of a pair of electrons that can be used to distinguish them counts against the exclusion principle. Now we’ve hopefully got enough information to understand what Cox is trying to say in the video linked above.  Let’s dissect it one step at a time: For example, in this diamond, there are 3 million billion billion carbon atoms, so this is a diamond-sized box of carbon atoms. And here’s the thing, the pauli exclusion principle still applies, so all the energy levels in all the 3 million billion billion atoms have to be slightly different in order to ensure that none of the electrons sit in precisely the same energy level; Pauli’s principle holds fast. This is a well-known and accepted property of matter.  The electrons in a piece of bulk material are all “squashed together”, just like the multiple electrons in a complex atom are all squashed together.  In an individual atom, the electrons must stack up into the different quantum states (different energies, different spins) that are permitted by the electron/nucleus interaction.  
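This “stacking up” can be sketched with a toy calculation of my own (particle-in-a-box levels, with energies $E_n = n^2 E_1$, standing in for atomic orbitals; the numbers are illustrative only): each spatial level accepts at most one spin-up and one spin-down electron, so extra electrons are forced into higher levels.

```python
# Fill N electrons into 1D particle-in-a-box levels (E_n = n^2, in units of E_1),
# allowing at most two electrons per spatial level -- one per spin. This is
# the Pauli exclusion principle at work in its simplest setting.
def fill_levels(n_electrons):
    occupied = []  # list of (level n, spin) pairs
    n = 1
    remaining = n_electrons
    while remaining > 0:
        for spin in ("up", "down"):
            if remaining == 0:
                break
            occupied.append((n, spin))
            remaining -= 1
        n += 1
    return occupied

states = fill_levels(6)
total_energy = sum(n**2 for n, _ in states)
print(states)        # levels 1,1,2,2,3,3 -- the electrons cannot all sit in n=1
print(total_energy)  # 1+1+4+4+9+9 = 28, versus 6 if Pauli were ignored
```

Without the exclusion principle, all six electrons would pile into $n=1$ for a total energy of $6E_1$; Pauli forces the much higher total of $28E_1$, which is exactly the sort of “stacking” that gives atoms their shell structure.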
In a bulk piece of crystal, a similar argument applies: there are a large number of permissible quantum states allowed, in which electrons are “spread out” over the size of the crystal; Pauli’s principle indicates that each electron must be in a different state, and they end up filling a “band” of energies. But it doesn’t stop with the diamond, see, you can think that the whole universe is a vast box of atoms, that countless numbers of energy levels all filled by countless numbers of electrons. Here’s where things start to go off the rails for me, and it seems like a dirty trick is being pulled.  In a crystal, there are a large number of strongly-interacting electrons packed together, and it is natural — and demonstrable — that the wavefunctions of the electrons spread out over the entire bulk of the crystal, with the sides of the crystal forming a natural boundary.  But jumping to the cosmological scale, we don’t “see” electrons whose wavefunctions stretch over the extent of the universe — our experiments show electrons localized to relatively small regions.  Even if we treat the universe as a big box — and it’s unclear that this is even a reasonable argument to make — the behavior of electrons in the “universe box” is really, fundamentally different from the behavior of electrons in a “crystal box”.  I think that Sean Carroll over at Cosmic Variance was saying something very similar when he notes, “but in the real universe there are vastly more unoccupied states than occupied ones.”  That is: in a crystal, the electrons are “fighting” to find an unoccupied energy level to occupy, like a quantum-mechanical game of musical chairs.  Over the entire extent of the universe, however, there are plenty of open energy levels — much like finding chairs at a Mitt Romney event in Michigan. So here’s the amazing thing: the exclusion principle still applies, so none of the electrons in the universe can sit in precisely the same energy level. 
Now we’re really getting into trouble.  In a crystal, where the electrons are all essentially “smeared out” over the volume, the energy levels must necessarily split.  But electrons in the universe don’t seem to be smeared out in the same way.  It would seem to me that electrons separated widely in space — around different hydrogen atoms on opposite ends of the universe, for instance — would be perfectly well distinguished by their relative positions, and not need to have energy level splits.  More on this in a moment. But that must mean something very odd. See, let me take this diamond, and let me just heat it up a bit between my hands. Just gently warming it up, and put a bit of energy into it, so I’m shifting the electrons around. Some of the electrons are jumping into different energy levels. But this shift of the electron configuration inside the diamond has consequences, because the sum total of all the electrons in the universe must respect Pauli.  Therefore, every electron around every atom in the universe must be shifted as I heat the diamond up to make sure that none of them end up in the same energy level. When I heat this diamond up all the electrons across the universe instantly but imperceptibly change their energy levels. So everything is connected to everything else. Now the explanation has actually made the leap into being simply wrong!  We have noted that, with entangled quantum mechanical particles, it is possible to instantaneously modify the wavefunction of one of the entangled pair by manipulating (measuring) the properties of the other.  But, as we noted, nothing physical can be transmitted by this faster-than-light wavefunction collapse.  Cox specifically says in this lecture that heating of electrons in his piece of diamond instantly changes the energy levels, i.e. the energy, of the electrons across the universe!  
A change in energy is a physical change of a particle, and this is specifically forbidden by the laws of physics as we know them. Another thought came to me as I was reading this, and I found that it was already stated by Tom over at Swans on Tea.  If all the electrons in the universe necessarily have different energies, then they are always in different quantum states — the Pauli exclusion principle would become irrelevant!  It would seem to imply that we could pile an arbitrary number of electrons in the ground state of a hydrogen atom, although they would have slightly different, experimentally indistinguishable energies.  Obviously, we don’t see this.  There may be a problem with this argument, as well, but it illustrates (as Tom says) that broad-reaching statements about atomic energy levels end up having potentially more implications than one would at first think. As stated, Cox’s argument is really incorrect, and violates relativistic principles.  One can argue, in his defense, that this is a consequence of trying to simplify things for a popular audience, and that he really meant something a little more subtle.  However, on an undergraduate physics page he makes a similar argument, and linked to it in defense of his lecture. Imagine two electrons bound inside two hydrogen atoms that are far apart. The Pauli exclusion principle says that the two electrons cannot be in the same quantum state because electrons are indistinguishable particles. But the exclusion principle doesn’t seem at all relevant when we discuss the electron in a hydrogen atom, i.e. we don’t usually worry about any other electrons in the Universe: it is as if the electrons are distinguishable. Our intuition says they behave as if they are distinguishable if they are bound in different atoms but as we shall see this is a slippery road to follow. The complete system of two protons and two electrons is made up of indistinguishable particles so it isn’t really clear what it means to talk about two different atoms. 
For example, imagine bringing the atoms closer together – at some point there aren’t two atoms anymore. You might say that if the atoms are far apart, the two electrons are obviously in very different quantum states. But this is not as obvious as it looks. Imagine putting electron number 1 in atom number 1 and electron number 2 in atom number 2. Well after waiting a while it doesn’t anymore make sense to say that “electron number 1 is still in atom number 1”. It might be in atom number 2 now because the only way to truly confine particles is to make sure their wavefunction is always zero outside the region you want to confine them in and this is never attainable. We can try and explain this more elaborate and detailed argument pictorially for two electrons.  What Cox seems to be espousing here is essentially that two electrons naturally evolve into an entangled state after some period of time.  How this would work: we start with two electrons (labeled “red” and “blue” for clarity, though they are in reality completely indistinguishable particles) surrounding two different hydrogen nuclei.  Let us suppose they start spatially separated, as shown below: Because the red electron’s wavefunction stretches into the domain of the blue atom, and the blue electron’s wavefunction stretches into the domain of the red atom, as time goes on it becomes increasingly likely that the red and blue electrons have switched places.  The wavefunctions may evolve to something like this: Eventually, after sufficient time has passed, each electron is equally likely to be in either atom, which we crudely sketch as: That is, we expect the wavefunctions to be identical for the two electrons.  But they can’t be identical, according to the Pauli principle!  Therefore something else must shift in the wavefunctions to make them distinguishable — one ends up with a slightly higher energy, one ends up with a slightly lower energy. 
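The “slightly higher / slightly lower” pair of energies is just the symmetric and antisymmetric combination of a two-site (double-well) model. Here is a toy numerical sketch of my own — the exponential fall-off of the tunneling amplitude is a standard WKB-style estimate, and every number in it is invented for illustration:

```python
import numpy as np

E0 = -13.6   # unperturbed atomic level (eV); purely illustrative numbers

def hop(d, t0=1.0, a=0.5):
    """Toy tunneling amplitude between the two atoms: t ~ t0 * exp(-d/a)."""
    return t0 * np.exp(-d / a)

def two_site_levels(d):
    """Symmetric/antisymmetric levels of the two-site Hamiltonian."""
    t = hop(d)
    H = np.array([[E0, -t], [-t, E0]])
    return np.linalg.eigvalsh(H)   # ascending: E0 - t, then E0 + t

# Atoms close together: the shared level visibly splits in two.
lo, hi = two_site_levels(1.0)
print(hi - lo)           # 2*exp(-2), about 0.27 eV

# Atoms far apart: the splitting 2t is nonzero on paper...
print(2 * hop(100.0))    # about 2.8e-87 eV
# ...but it is so far below any measurable scale (or even below the rounding
# error of E0 itself in double precision) that the two levels are degenerate
# for all practical purposes.
```

This is the quantitative heart of my objection: the splitting is a real feature of overlapping wavefunctions, but it dies off exponentially with separation, so for atoms light-years apart it is not merely small but absurdly, unobservably small.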
What we’ve got here is what I imagine would be considered a form of entanglement: we know with certainty that there is only one electron around each nucleus, but the specific location of either is undetermined. This idea isn’t particularly controversial: this is essentially what happens in crystals, as we have discussed, and what happens to the electrons in molecules or otherwise interacting atoms.  But this is just a description for two atoms — can we make the same sort of arguments on a universal scale?  Here I have two problems.  The first is that there are too many questions that get raised when trying to extend this to this degree, among them: • Time.  How long does it take to get such an entanglement between two electrons?  For two electrons next to one another, I imagine it would be nearly instantaneous, but for two electrons separated by light-years?  I’m guessing the period of time is very, very large, which brings me to my next point… • Stability.  It is not easy to produce significantly entangled photons in the laboratory, and it is hard to maintain that entanglement.  Keeping two particles entangled for long periods of time is experimentally nontrivial, due to external interactions with other particles: in essence, our quantum system is being continually “measured” by outside influences.  Do widely separated electrons ever form an appreciable degree of entanglement?  Completely unclear, and rather doubtful. • Infinity.  Part of the argument for this universal entanglement is built on the idea that the spatial wavefunctions of electrons are of infinite extent, i.e. they are spread-out throughout all of space.  Indeed, stationary (definite energy) solutions of the Schrödinger equation are infinitely spread out, but I would use a lot of caution to make concrete conclusions from that observation.  In optics, classical states of “definite energy” are monochromatic waves, which are used all the time to make optics calculations convenient.  
It follows from the mathematics that monochromatic waves are always of infinite extent, just like wavefunctions, but here’s the thing: nobody with any sense in optics assumes that this infinite extent is a physical behavior that one should derive concrete physical conclusions from.  A monochromatic wave is just a convenient idealization of the real physics***. A natural question to ask at this point: isn’t physics all about deriving general conclusions from simple physical laws?  Why are you being more cautious with Pauli, and quantum mechanics, than you are with, say, gravity and electromagnetism?  Part of the difference is, as we have noted above, that we simply do not understand the quantum theory well enough to derive boldly such universe-wide conclusions.  An even more important difference, though, is that I can see the universal consequences of gravitation and electromagnetism experimentally, whereas it is not clear what consequences, if any, this “universal Pauli principle” provides.  Which brings me to my final observation; returning to Cox’s lecture notes: The initial wavefunction for one electron might be peaked in the region of one proton but after waiting for long enough the wavefunction will evolve to a wavefunction which is not localized at all. In short, the quantum state is completely specified by giving just the electron energies and then it is a puzzle why two electrons can have the same energy (we’re also ignoring things like electron spin here but again that is a detail which doesn’t affect the main line of the argument). A little thought and you may be able to convince yourself that the only way out of the problem is for there to be two energy levels whose energy difference is too small for us to have ever measured in an experiment. Emphasis mine.  HOLY FUCKING PHYSICS FAIL.  
Here Cox explicitly acknowledges that his “universal Pauli principle” consequences are something that not only cannot be measured today, but in principle can never be measured, by anyone. At its core, physics is all about experiment.  Experimental tests of scientific hypotheses are what distinguish physics (and all science, really) from general philosophy and, worse, mysticism and pseudoscience.  Consider the application of Cox’s conclusion to a few other situations: • Astrology is the influence of the stars upon human beings via quantum mechanical influences whose energy difference is too small for us to have ever measured in an experiment. • Homeopathy is the lingering effect of chemical forces on water via quantum mechanical changes to the water whose energy difference is too small for us to have ever measured in an experiment. • The human soul exists materially in the quantum wavefunction of a human being, manifesting itself in changes whose energy difference is too small for us to have ever measured in an experiment. In my eyes, there really is not much difference between various pseudoscientific shams being propagated in the world today and the logical argument of a “universal Pauli principle”. (When I mentioned this argument to a colleague, he said, “Ask him how many angels dance on the head of a pin.“) In a sense the whole discussion of this blog post has been a waste of time: my theoretical counterarguments may be reasonable or they may not; we can never draw any conclusion about the reality of this universal principle because it lies outside our ability to ever detect it. I tend to be rather forgiving of using simple, arguably misleading, models to introduce physical principles.  For instance, I’m a defender of the use of the Bohr model as a good tool to expose students to quantum ideas in a simple and historical way.  
My criterion, however, is this: a model or explanation must, as a whole, guide students in the right direction towards the greater “truth”, such as it is in science.  The “universal Pauli principle” fails this on two counts: it gives a false impression of the importance of completely unexperimental conclusions, and it opens the door to pseudoscientific nonsense.  Nevertheless, Cox doubled down on his statements in a Wall Street Journal article, somehow arguing that his original argument is a necessary evil in a world where the public needs to be excited about science. In a sense, though, we have ironically come full circle on Pauli.  It was none other than Wolfgang Pauli who coined the phrase “not even wrong” to describe theories that cannot be falsified or cannot be used to make predictions about the natural world.  It has been most recently used to describe string theory, with the argument that the predictions of string theory cannot be tested with any experimental apparatus that exists.  However, string theory can at least in principle be tested, albeit not today, whereas it seems that the “universal Pauli principle” described by Cox has no measurable consequences, in principle, and is immune to any test imaginable.  It serves no useful purpose in the world of physics, and as we have noted there are many objections to it actually working the way it is advertised to work. I was recently thinking of the many advantages to the explosion of science communicators on the internet, and one that struck me is that we no longer have to rely on a single or a small number of “authority figures” to tell us what is right and wrong in the scientific world.  This entire fiasco emphasizes how important this new abundance of voices will be in an ever more complex universe. With no hypothesis to test, and no measurable consequences for science, I conclude my thoughts on the “universal Pauli principle”. 
Requiescat in pace, “omnia conjuncta est”¹ * If someone wants to get in a pointless pissing match of who is more of an “armchair physicist” based on CVs, though, I’m your huckleberry. *** Curiously, in arguing against the use of the spatial distribution of a quantum wavefunction in providing “distinguishability” of electrons, Cox uses a “momentum eigenstate” — a particle of perfectly specified momentum and infinitely uncertain position.  This is pretty much the equivalent of a monochromatic plane wave in optics, which again nobody would use as a realistic example of how the world works. ¹Thanks to Twitter folks for suggesting the translation of the latter phrase.  Alternate translation: “omnes continent”. Postscript: A couple of friends (including @minutephysics) have pointed out that none of the discussions so far have included quantum field theory, which makes things even more complicated (non-conservation of particle number, for instance). This entry was posted in ... the Hell?, Physics. Bookmark the permalink. 55 Responses to Pauli, “armchair physicists”, and “not even wrong” 1. Also, I suppose that sign off makes you the Ezio Auditore of physics blogging? 2. This post rocks on so many levels…though, I remain skeptical as to the degree with which it rocks. 3. Phil says: Hmmmm. This is interesting stuff. I don’t buy Prof Cox’s argument, but I don’t buy your entire rebuttal above. It’s hard to argue against the infinite extent of the wavefunction of a particle. Forget about Pauli for a moment, and forget about crystals. Let’s talk about a massive bunch of Hydrogen atoms a billion light years away undergoing fusion and releasing a massive bunch of high-energy photons. Is the probability that one of these photons will interact with the wavefunction of the detector elements in one of our super duper telescopes (a billion years later) vanishingly small? No, we can see them with a big enough telescope. What about if there were half as many? A tenth? A hundredth? 
A millionth? A 1/6.0221415 × 10^23th? What if there were just two Hydrogen atoms? We are left with the conclusion that the wavefunction never truly deteriorates to zero. There is no magic cut-off point. Strap enough similar events together, and they will have a measurable effect (eventually) on the other side of the universe, proving the non-vanishing probability of the constituent events. As for Pauli’s exclusion principle, it’s my understanding (I’m no expert) that this applies to electrons only after they have become bound to the conglomeration of orbitals making up Prof Cox’s crystal. So an electron with a peak probability of being found on the other side of the Universe is not going to concern itself with taking on a unique energy level unless it becomes bound. I would maintain that it DOES have a finite chance of becoming bound though, no matter how far away it is. Of course, its ‘probability wave’ would first need to have travelled at the speed of light to reach the crystal in the first place. • Phil: thanks for the comment! Admittedly I’m a little unsure about the wavefunction extent, though it seems that, in a very simple sense, it must still be subject to relativistic effects. Otherwise, there is a non-zero probability that an electron in a definite position (after measurement) here is instantaneously on the other side of the universe. There are a lot of other arguments that could potentially be made, though, especially once one brings in the full relativistic field theory, so I can’t be sure that some other complications involving many-particle interactions muck things up. Your hydrogen atom argument has a problem, in that when you talk about the photons being emitted by a hydrogen atom, you’re talking about the “wavefunction” of the photon, not the wavefunction of the hydrogen atom! 
There is certainly the possibility of long-range interaction via photons and presumably gravitons, and presumably there is a long-range correlation between the atom and the photon, but this doesn’t imply the atom’s wavefunction itself has “gone the distance”. That seems to be part of the problem, provided you assume that the electrons were measured in a definite state at some point of their existence, as noted above! • Phil says: The point was to illustrate that current QFT does not impose any limit on the spatial extent of a free particle’s wavefunction. I am not sure whether being bound to atomic matter is supposed to make the wavefunction actually go to zero (as opposed to just really small) outside a certain zone… but if not, I still can’t fault that aspect of Prof Cox’s argument (I still don’t buy the non-local part though). 4. J Thomas says: You provided a general explanation about a whole lot of things, and I want to use my imagination from that. As I see it, classical physics stopped making sense around the turn of the 20th century, and over time it was replaced by newer stuff, notably quantum mechanics. The new view was a statistical one which was never intended to make sense. Statistical ideas are notoriously confusing, and people get confused about causality, the extent that statistical results apply to individual cases, etc. I’m curious whether an alternative classical view can make sense. QM would still apply to statistical experiments — probably all of them — but with a model that made sense it might be easier to apply QM. This looks like a good place to start. They argued since Newton whether light was made of particles or waves. Then we got the same results with electrons. Ideally we could get a model which explained everything which looks like particles as waves, and everything which looks like waves as particles. Then we could choose either approach and make it work. The electron detectors detect quanta. Either there is a detection or there is not. 
So even if the electrons were behaving exactly like waves, they would be detected as particles. Is there a way that particles could diffract like waves? I will describe one way that could happen. I don’t claim it could happen this way with electrons, but if there is one way that particles can do it, there might be another way that actually fits the data. So any way that gets particles to diffract is a start. First, I need particles that spin in unison. The experiment is set up so they all spin clockwise around their up axis. And they are somehow asymmetrical so it matters how far along they are in their rotation. A particle turned around to 90 degrees isn’t the same as one that has rotated to 270 degrees. And then we have limited detectors. I imagine a detector which can’t detect a single particle but only the sum of say 6 particles. If it gets 6 particles in a row with spin between 0 and 180 degrees, it registers a hit. But if any one of the 6 is between 180 and 360 degrees, the hit is lost and it then will register a hit if there are 5 more between 180 and 360 degrees. The particles go through a slit and on to the detectors. When they go through the slit their directions are randomized, but their spins are still synchronized; each one is at 0 degrees then. At some distance to the left side, particles that come near the right side of the slit will travel farther than particles that come near the left side of the slit, and they will rotate 180 degrees out of phase. All of them will be between 0 and 180 degrees. So there will be lots of detections. A bit farther to the left, half of them will be one kind and half the other. There will be very few detections. Etc. With the right kind of particles and the right kind of detectors, you can get diffraction. It does not matter that only one particle goes through the slit at a time, provided the detector state changes whenever the next particle arrives. Particles can appear to diffract.
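That run-counting detector can be put into a quick Monte Carlo sketch. To be clear, this is a toy implementation of the mechanism described above, with made-up numbers for the slit width, detector distance and rotation rate; it is not a faithful diffraction calculation:

```python
import math
import random

def phase_at(x, s, L=1.0, wavelength=1e-4):
    """Rotation angle (degrees) a particle has reached on arrival at
    detector position x, after entering the slit at offset s.
    All lengths are in the same (arbitrary, made-up) units."""
    path = math.sqrt(L**2 + (x - s)**2)
    return (path / wavelength * 360.0) % 360.0

def hits_at(x, n_particles=6000, group=6, slit_width=0.01, **kw):
    """The limited detector: it registers one hit only after `group`
    consecutive arrivals whose rotation lies in the same half-turn."""
    hits, run, side = 0, 0, None
    for _ in range(n_particles):
        s = random.uniform(-slit_width / 2, slit_width / 2)
        half = 0 if phase_at(x, s, **kw) < 180.0 else 1
        if half == side:
            run += 1
        else:
            side, run = half, 1
        if run == group:
            hits += 1
            run = 0
    return hits

# Scan the detector plane: counts are high where arrivals share a
# half-rotation and drop where the two halves are mixed.
for x in (0.0, 0.005, 0.01, 0.015, 0.02):
    print(f"x = {x:.3f}  hits = {hits_at(x)}")
```

The hit count varies with position exactly as the comment argues: it is large where all arrivals land in the same half-rotation and drops where the phases are mixed, even though every individual detection is a discrete "click".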
There are probably multiple ways to get diffracting particles. Perhaps one of them might fit the data for light or for electrons. • Thanks for the comment! Actually, the wave nature of particles seems pretty airtight at this point, especially after the theoretical work of John Stewart Bell on Bell’s inequalities and the experimental verification of this, which strongly demonstrates that no “local” theory of particle behavior can reproduce the observed properties of quantum mechanics. It’s interesting to note, however, that in the history of optics, physicists were even able to explain interference effects and polarization effects to their satisfaction using a particle theory. It was only after diffraction was successfully explained using waves, and led to verifiable predictions, that the wave theory really took off. • J Thomas says: I can explain diffraction myself using particles, given a fistful of assumptions. I’m not sure I have the distribution right but I can get the distribution right. If there are two hypotheses that both explain the phenomenon, how do you decide which to accept? I say, as long as both explain the phenomenon there is no need to decide which to accept. I have not studied the details of Bell’s theorem carefully enough to judge it, but I rather doubt it. I’ve seen this sort of thing a lot in probability theory. Start with a collection of assumptions, one of which is wrong. Reason from the assumptions to a conclusion that looks astounding. Argue that the astounding conclusion must be true. But usually one of the original assumptions was wrong instead. Typically people assume that their sampling is not biased, for example, and it almost always is. 5. csrster says: Peierls also discusses this in his “Surprises in Theoretical Physics” in the context of two electrons at the opposite end of a metre-long metal bar. 
His conclusion is the same as yours – that if the electron-states are spatially localised to opposite ends of the bar then, by definition, they are in distinct quantum states and the Pauli principle is automatically satisfied regardless of their energy. Iirc he takes the argument further to make some quantitative estimates, but I forget the details. I like the Peierls example better than the “whole universe” discussion because i) one can imagine scaling smoothly from a small metallic crystal to a macroscopic metal bar and ii) a one metre bar is effectively as big as the universe anyway, seen on the quantum scale. 6. J Thomas says: OK, I’ve seen the video, the blog response, and the comments on that blog. I have some conclusions. 1. If you want somebody to dispassionately discuss science and ways to improve his presentation of science, do not title your criticism “Brian Cox is full of **it”. That does not promote careful dispassionate thought, at least among primates. A primate that sees this will tend to interpret it as poo-flinging, and will tend to fling poo back. 2. Quantum Mechanics (QM) is not intuitive. It is similar to statistics and probability theory that way. There is a mixup between what’s true about the reality, versus what you know about it. We describe our knowledge and our ignorance and then look at what we still know when things change. The transformation rules are complex and unintuitive. It’s hard. Ideally we design our language to make it easy to think about stuff. The language does some of the thinking for you. Stupid things sound wrong. We’re far from that with QM. The truth sounds wrong. Stuff that sounds plausible pretty much has to be wrong — if it sounds right then it can’t be right. We try to design our mathematics so that the right answers just fall out easily. This has mostly not been done for QM yet. It’s hard to do the math right. Given the problems, doesn’t it make sense to display a whole lot of tolerance? 3. 
Various people say that the discussion is all first-year stuff. But they disagree. It looked like they chose sides quick enough, and they agree with other people on the same side. But I strongly suspect that for many simple first-year problems stated in simple english, 10 physicists would give 5 or more answers. This stuff is *hard*. Figuring out what the question is when it’s stated in English is even harder. More later. 7. J Thomas says: This is not really off topic. One time a friend told me that the Monty Hall solution was wrong. He argued it out. He said, suppose you come look at the Monty Hall problem at the last minute. There are two doors that aren’t open and one that is open. The right answer is one of the two doors. The probability is .5 that it’s either one. How is that different from the guy who saw the door get opened? For him it was 1/3 for each door, and then he saw that it wasn’t door A. He knows the same thing you know, it’s one of the two that are left. So it’s 50:50. It took me weeks to persuade him. I wrote a short computer program to model the problem, and showed him the answer came out 2:1. He said I must have done it wrong. I tried to tell him that the guy who saw the door opened does know something the other guy doesn’t know. That different people can come up with different statistics, and both be right as far as they know. “No, there’s a real probability and if somebody guesses wrong what it is that just means they’re wrong.” I guess he was right on that one, but I was right too. I finally showed him that Monty was not behaving at random. If Monty instead opened any of the three doors no matter where the prize was, and the game was over if he opened a door that had the prize, then it did come out 50:50 between the two that were left. But it wasn’t easy to show him how that mattered. Probability theory is *hard*. Professionals get it wrong sometimes. Tiny details can change the whole problem. And QM is inherently probabilistic. 
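The short program mentioned above is easy to reconstruct. Here is a sketch (a reconstruction for illustration, not the original code) that simulates both versions of the game: the standard one, where Monty knowingly opens a goat door, and the variant where Monty opens a random door and games in which he reveals the prize are thrown out:

```python
import random

def monty_hall(trials=100_000, monty_knows=True):
    """Fraction of (valid) games won by the contestant who switches."""
    wins = games = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        if monty_knows:
            # Monty deliberately opens a door that hides a goat.
            opened = random.choice(
                [d for d in range(3) if d != pick and d != prize])
        else:
            # Monty opens any other door; if he exposes the prize,
            # the game is discarded.
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == prize:
                continue
        switched = next(d for d in range(3) if d not in (pick, opened))
        games += 1
        wins += (switched == prize)
    return wins / games

print("Monty knows:  ", monty_hall())                  # close to 2/3
print("Monty random: ", monty_hall(monty_knows=False)) # close to 1/2
```

The first number comes out near 2/3 and the second near 1/2, which is exactly the distinction drawn above: the 2:1 advantage exists only because Monty is not behaving at random.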
If you have to argue about it, for gods sake don’t do it in English. You haven’t got a chance unless you show the math. And yet, that’s so very tedious…. Indeed, that’s perhaps the major issue with all of this discussion — there’s no way to mathematically model the wavefunction properties of the entire universe to draw the conclusions being made! Even worse, it was conceded in the original talk *and* in the undergraduate lecture that the conclusions being drawn have absolutely no observable physical consequences! I can almost — almost — understand making such assertions in a popular physics lecture, provided they’re couched in appropriate caveats (“it is possible to view this result of having the surprising consequences of…”), but telling undergraduate students that unmeasurable, unprovable effects of no consequence are important is really, really doing a disservice to people who want to be physicists. If I gave a talk at a physics meeting where I said, “the following hypothesis has no consequences for physics, explains no unanswered questions and cannot be detected ever”, I would rightly be tarred & feathered, at least metaphorically. • J Thomas says: I think I’ve seen this before, though I have no idea where to find links. A long time ago people believed that electric fields and gravitational fields were instantaneous. And somebody in a lecture to laymen said that this meant that everything you did would have some instantaneous effect, no matter how tiny, out to the farthest star. But then they decided that those fields take time to act. So somebody making a similar lecture said that everything you did would eventually have some effect, no matter how tiny, out to the farthest star. And now here’s somebody saying the same thing from QM. It doesn’t sound like it means anything beyond feel-good talk to laymen. Oh. Physics undergraduates. Hmm. Are they physics undergraduates who do the math? If so, it probably won’t hurt them much at all. 
Are they physics undergraduates who don’t do the math? Then they’re laymen who aren’t really learning much, and it won’t matter until they learn the math. I swear, when I squint a little and let the details fuzz out, this seems a whole lot like arguments I used to hear Baptists make. “Did you hear him? He said it was OK to try to follow Jesus Christ’s example!” “What’s wrong with that? Jesus said to follow him, didn’t he?” “But nobody can be like Jesus, Christ was God and sinful human beings can’t be God.” “Well, what’s wrong with trying?” “Wash your mouth out with soap! If you try to be like Christ you’re guilty of the sin of pride! You don’t understand the least little thing about theology and you have the nerve to ask questions! This idiot was telling people it was OK to be sinful! He told them to try to follow Jesus Christ’s example, and you don’t even see what’s wrong with that! You’re just ignorant, but he’s preaching evil and he hasn’t the right!” 8. yoron says: If you debate you will argue 🙂 No news there; the problem comes when one gets a little too enthusiastic in one’s arguments. I like your blog for several reasons: open-mindedness, an urge to try to present it right, and very good knowledge of what you discuss. And I don’t mind either of you guys getting it ‘wrong’ now and then. That’s what open minds do: they speculate and find connections and insights, sometimes wrong, but even when right forced to present it in a painstakingly sound mathematical notation, as stringently and clearly as possible. Einstein took a lot of ‘wrong turns’ in his hunt for relativity, but he got it right in the end, and also had to invent/learn new ways of presenting it mathematically, as I remember it. So ‘fight’, with a smile; life is too short to take it seriously. Isn’t it so that in a Big Bang it’s possible to assume an ‘entanglement’ of it all?
If I combine this with a later ‘instant’ inflation creating ‘distances’, then all particles ‘untouched’ by their isolation still could be entangled? Eh, maybe 🙂 Also it seems to me as if both a wave function and relativity question ‘distance’, although from different points of view naturally? Not that I doubt the concept, ‘distance’ is here to stay, but what does it mean? And now I’m weirder than any of you 🙂 I found it quite nice reading and I hope that you, as well as all the other guys involved, get something fruitful out of your discussion in the end. • I agree that there will always be some arguing! My PhD advisor is one who essentially told me, “When we discuss physics we will get into terrible fights. That’s okay, though, because we’ll all be friends in the end.” In fact, I had some knock-down, drag-out fights with him over physical principles (metaphorically speaking, of course), and we’re still good friends! There’s no doubt that Tom’s original post was rather tactlessly titled (as J Thomas also noted below); however, Tom’s post raised some valid questions, and certainly my comment was about as polite as one could be (“eyebrow-raiser” doesn’t seem to me to be a horrific insult). The petty sniping that Cox directed at me and Tom (accusing us of not understanding undergraduate physics) did not advance the discussion at all, and was really just mean-spirited. And that is the sort of thing that pisses me off. >:) 9. yoron says: Yeah, things happen. I looked at Swans on Tea’s blog too 🙂 What’s good with a discussion like this one is that people present their ideas and interpretations, and so put a lot of otherwise strange concepts into different ‘lights’, making me see how it is thought to work in new ways. Sort of a ‘holistic perspective’ reading the comment section there, at least for me. And both you and Swans belong among my favorite bloggers. Just keep on 🙂 10. yoron says: Thinking of it, the definition of entanglement is truly confusing; maybe you have discussed it?
Probably you have, and I seem to think I get what it should be at times, and then, some year later, I find myself wondering if I’ve understood the definition of an entanglement at all? You have the simple way by down-converting a photon into two. That one is easy to understand. But then you have thingies ‘bumping’ into each other for example, sending momentum into each other, and of course the ‘indistinguishable electrons’ etc. And to make my headache even worse you can also find those defining it as if you have an entangled ‘pair’ there can be no ‘wave function’ breaking down, until both are measured, if I now got that one right? Been some time since I discussed that. It’s worth going through, if you haven’t? • I need to write a more detailed post on quantum stuff, perhaps as a “basics” post or two. In entanglement, though, the idea is that the wavefunction breaks down as soon as one of the particles is measured. This results in the “instantaneous collapse of the wavefunction” that so upset Einstein, Podolsky and Rosen. I’ll go into it in more detail soon; in the meantime, you can check out the post on entanglement at Galileo’s Pendulum. 11. yoron says: This is my view, and I’m trying to keep it simple. “as I said a description I like was the one of ‘one particle’. I can go with a ‘wave function’ describing it too though, as long as we then assume it to be in a pristine ‘superposition’ prior to the measurement, with ‘both sides’ falling out in the interaction/measurement, no matter if the side not making that initial measuring, will measure it later, or not.” And here is DrChinese’s view: “Nope, generally this is not the case (although there are some complex exceptions that are really not relevant to this discussion). Once there is a measurement on an entangled particle, it ceases to act entangled! (At the very least, on that basis.)
So you might potentially get a new entangled pair [A plus its measuring apparatus] but that does not make [A plus its measuring apparatus plus B] become entangled. Instead, you terminate the entangled connection between A and B. You cannot EVER say specifically that you can do something to entangled A that changes B in any specific way. For all the evidence, you can just as easily say B changed A in EVERY case! This is regardless of the ordering, as I keep pointing out. There is NO sense in QM entanglement that ordering changes anything in the results of measurements. Again, this has been demonstrated experimentally. My last paragraph, if you accept it, should convince you that your hypothesis is untenable. Because you are thinking measuring A can impart momentum to the A+B system, when I say it is just as likely that it would be B’s later measurement doing the same thing. (Of course neither happens in this sense.) Because time ordering is irrelevant in QM but would need to matter to make your idea be feasible.” And if I get the idea right here? You might say that it’s a consequence of SR, and the possibility of different observers getting different ‘time stamps’ for ‘A’ relative ‘B’, so there might be no ‘universal order’. Instead it will be defined locally. Which is a very interesting thought, if correct. At least it’s the way I interpret it for now 🙂 I will follow that link. 12. J Thomas says: Imagine that you and your girlfriend make an agreement. You take the king of hearts and the queen of diamonds out of a deck of cards. You shuffle them around so nobody knows which is which, and you seal them into two envelopes. You each keep one of them, and you agree that in 30 years you’ll open the envelopes and look at them. It’s a romantic gesture. But 5 years later she dies and she asks that her envelope be buried with her. After 30 years you open your envelope and see the queen of diamonds. You immediately know that her envelope has the king of hearts. 
But how can you know that? You haven’t dug up her grave and opened her envelope. The difference between this and Bell’s theorem is that Bell’s theorem says that in the QM case, the decision which card was which could not have been made when the envelopes were sealed. That decision was made when one of the envelopes was opened. And at that time two things changed, two things that might be light years apart or buried in separate graves. Probability theory doesn’t distinguish between things that have been decided — but are unknown– versus things that have not been decided yet. Somebody flipped a coin yesterday and you don’t know which face came up. Somebody will flip a coin tomorrow. Either way, assuming a fair coin, your best guess is 50% either way. If you have reason to think it’s a false coin that comes up heads 55% of the time, then your best guess is 55% heads either way. Once you find out the truth, then it isn’t 50% or 55%, you know. It’s either heads or tails. The easy interpretation is that it doesn’t matter whether something is real but unknown versus not-decided-yet. Either way, you have your best guess now and when you find out the truth you’ll know. There’s no point arguing whether really the cards are separated and in their envelopes, or whether a ghost magically paints the cards just before you open the envelopes, because there’s no way to tell which it is. Just use what you do know, which is first the guess and then the reality. But Bell’s theorem proves that it cannot be true that the truth is real but unknown. It has to be true that the state does not exist until two random but correlated states are created when one of them is observed. Without that proof, there is nothing special going on. Somehow, quantum mechanics is arranged so that it is impossible for the truth to exist but be unknown. That’s the part that’s hard to understand. 
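The envelope story is easy to simulate both ways. In this sketch (purely illustrative), one model fixes the cards when the envelopes are sealed and the other decides nothing until the first envelope is opened; the point is that no statistics collected on the envelopes alone can distinguish them, which is why Bell's argument needs measurements with more than one setting per side:

```python
import random
from collections import Counter

CARDS = ("KH", "QD")  # king of hearts, queen of diamonds

def sealed_at_start(trials):
    """Model 1: the cards are fixed the moment the envelopes are sealed."""
    results = Counter()
    for _ in range(trials):
        mine, hers = random.sample(CARDS, 2)
        results[(mine, hers)] += 1
    return results

def decided_at_opening(trials):
    """Model 2: nothing is fixed until the first envelope is opened;
    the other card is then forced to be the complementary one."""
    results = Counter()
    for _ in range(trials):
        mine = random.choice(CARDS)
        hers = "QD" if mine == "KH" else "KH"
        results[(mine, hers)] += 1
    return results

# Both models give roughly 50/50 over the two possibilities and perfect
# anti-correlation, so looking at envelope statistics alone can never
# tell "real but unknown" apart from "not decided yet".
print(sealed_at_start(10_000))
print(decided_at_opening(10_000))
```

Both runs produce the same distribution, so classically the two stories are empirically equivalent; what is special about the quantum case is that the equivalence breaks down once each side can choose among several incompatible measurements.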
What we need is a good simple explanation why, for example, two photons that are created to have complementary polarization but we don’t know what polarization they have, are not actually polarized any way in particular until we measure the polarization on one of them. QM says it can’t be true that their polarization state was set when they were created, and we only discover it later. QM says that in reality they have only a probability of polarization until it becomes real when one of them is measured. Why does QM make it impossible for the truth to be real but unknown? 13. yoron says: A nice description J 🙂 And one that I agree too, and actually can understand. Your definition has been proven a lot of times, that even though you know that there will be a complementary ‘polarization’ for ‘B’, you can’t define it until measuring ‘A’, or vice versa. As you can’t know what the polarization will be for any of the space separated objects until measuring on one of them, only that they will be opposite. That if I got you right? And it’s there my headache begins, although physics is on the whole a headache, mostly nice though. If you have a wave function describing a ‘particle’ or an entanglement, it must be your observation that ‘sets’ it. And if what you observe is separated in space, then what you observe of it should set it all. But I’m getting an impression of that a ‘space like’ separation, as in this entanglement case, now allows me to state that no matter what I observe, ‘A’ or ‘B’ I could define it such as the observation I do has nothing to do with what ‘sets’ what. That’s why I like the ‘one particle’ definition better, because in that one it becomes meaningless to discuss ‘causality chains’ as in such a definition that ‘wave function’ is a whole object, in which you ‘instantaneously’ set a state for ‘both’ . But then I have SR of course, but it shouldn’t really matter there, should it? 
As no matter what ‘time stamp’ different observers will give, ‘A’ or ‘B’ first, it still has to be ‘one particle’s wave function’ getting set? Then again, I really need to look at it from first principles, and see if I really get it… 14. J Thomas says: Yoron, the following link (which Dr. Skull provided) is extremely unclear because as a Wikipedia page it incorporates lots of different ideas which disagree. It does include a quote from Jaynes. He was very good at statistics and probability theory, and had a lot to say about QM as a result. So, some random things have to happen late, near the time they are measured. But maybe others can happen early, at the time of the entanglement. Which is true in the cases people are interested in? How can we find out? 15. yoron says: What I mean is that in some ways the question from SR becomes slightly metaphysical. Because if ‘the arrow of time’ is a local definition, which I see it as, then that also will mean that any observation you do must be valid from your frame of reference, just as a Lorentz contraction should be for that speeding muon impacting on earth. That another frame of reference will define it differently doesn’t invalidate the muon’s frame. But that is from the assumption of ‘time’ always being a ‘local phenomenon’, so invalidating the assumption of a ‘same SpaceTime’ for us all, time and space wise. But the ‘arrow’ is always a local definition as far as I can see. Even though you can join any other frame of reference to find the time dilation you observed earlier ‘gone’, from your new ‘local perspective’ in SpaceTime, that only states that relative to your life span, your clock never changes. The thing joining SpaceTime is a constant: light’s speed in a vacuum. And that is also what gives us ‘time dilations’ and the complementary Lorentz-FitzGerald contractions, well, as I see it 🙂 16. yoron says: Hmm, sorry.. That was me explaining myself, not answering your post. I liked your citation on ‘deduction’.
SpaceTime as ‘whole experience’ I see as conceptual, described through diverse transformations between ‘frames of reference’, joining them into a ‘whole’, and so also becoming the exact same ‘deductions’ he described. If a time dilation is ‘true’ from your frame of reference, and differs from my frame’s observations, which we know to be true through experiments, then locality defines your ‘reality’. And it has to be real if one accepts relativity. And that should mean that radiation is what joins us. 17. yoron says: Would you have an example, or link, of that? On the other hand you write “Somehow, quantum mechanics is arranged so that it is impossible for the truth to exist but be unknown.” I like ‘indeterminacy’; as a principle I find it rather comforting, reminding me of ‘free will’, in/from some circumstances. But I also have faith that we will find a way to fit it into a model where that indeterminacy becomes a natural effect of something else. Wasn’t that what Einstein meant too? Or did he expect ‘linear’ causality chains to rule everything? I’m not sure how he thought of it there? I know he found entanglements uncomfortable in that they contained this mysterious ‘action at a distance’. But relativity splits into local definitions of reality as I think of it, brought together by Lorentz transformations, describing the ‘whole universe’ we observe through radiation’s constant. So from my point of view, ‘reality’ and the arrow both follow one constant, and that one will always be a local phenomenon. That simplifies a lot of things for me at last, although it makes ‘constants’ into something defining the rules of the game, and the question of a whole unified SpaceTime into something of a misnomer. • Regarding hidden variable theories, the major distinction is between local and nonlocal ones.
As I understand it (and I am admittedly not an expert on these controversies, so take this explanation with a grain of salt), EPR concerned itself with the idea of a local hidden variable theory (LHVT): that a particle’s properties such as momentum and spin are well-determined, and spatially localized to the particle. This is the concept that would be consistent with classical mechanics: localized and definite particle properties. Bell’s theorem suggests that physical experiments *can* tell the difference between a local hidden variable theory and conventional quantum mechanics. Experimental results are inconsistent with the LHVT and consistent with conventional quantum theory, suggesting that an LHVT cannot be correct. There is still a possibility of a *nonlocal* HVT, however, in which the properties of a particle are definite but “spread out” in space-time in some manner. This is typically done by imagining that a definite particle exists coupled to a “guiding wave” that controls its motion. No experiments have been done to conclusively rule out a nonlocal HVT. This puts physicists in a bit of a philosophical conundrum: they can either accept QM, which requires throwing away determinism, or they can accept NLHVT, which requires throwing away causality (the theory requires faster-than-light influences between particles). As I understand it, most physicists at this point act under the assumption that conventional QM is the better interpretation, though the question is by no means solved. It is these sorts of controversies, BTW, that make me dubious of any attempt to extend simple quantum postulates to a universal scale without qualification. • J Thomas says: “Regarding hidden variable theories, the major distinction is between local and nonlocal ones.” The big distinction is about causation that happens instantaneously at large distances. That’s spooky.
A hidden variable theory that gives you instantaneous causation at large distances is no improvement, and presumably some of those can easily be tuned to give the same result as QM. They may be taking that too far. What if some local variables are set, and others are not? Then you could have some variables set locally and the information travels at lightspeed or slower, and later is revealed. But other things could be strictly probabilistic and could be set later. Then it might turn out that nothing spooky happens, and at the same time it could definitely be shown that some events cannot be determined by local variables. Smith and Jones are physicists at the same university. They both own red Ferraris and it is impossible to tell the cars apart by satellite imagery. Smith lives to the north and Jones lives to the south. So it is predictable that whenever the satellite photos show a red Ferrari going south from the university, the other will go north. By satellite studies we cannot tell whether these Ferraris have hidden variables (namely Smith and Jones) or whether they are merely entangled. Maybe the choice which of them will go south is never made until one of them actually turns, and that information is then instantaneously transmitted to the other one so it knows to turn the opposite direction. (But *we* know that it’s really Smith and Jones, the hidden variables, and they don’t decide completely at random while on the road which will go home to Mrs. Smith and which to Mrs. Jones.) But as it turns out, there are nine red Ferraris at the university. Two homes are more or less to the northeast, two to the northwest, three to the southwest, and two to the southeast. So when one red Ferrari goes south you can’t be sure the next one will go north, though the physicists are somewhat likely to leave around the same time because of departmental meetings etc. In reality, it occasionally happens that Smith and Jones both go to Smith’s house. 
Sometimes a paper they are working on together is approaching deadline and they work late into the night. Occasionally they go bowling together. Just possibly they might occasionally swap wives — but only after near-instantaneous phone communication with both wives to get their approval. Then the red Ferraris go the opposite directions, but by satellite imagery you’ll never know…. These various links do not at all make it obvious why local hidden variables are impossible, or even that there is an example where the result has to be spooky because local hidden variables cannot apply to that example. They give some of the details of the arguments, but do not show how those arguments fit together to forbid the existence of any local hidden variables, though they claim there can never be any local hidden variables. • Phil says: Non-locality itself is widely accepted, non? 18. yoron says: Interesting, I will have to relearn this again. Actually entanglements are the ‘spookiest’ thing(s) I know of, and give me one of the biggest headaches too 🙂 I will need to reread it all. But if we take the simple definition, when you down-convert one ‘photon’ into two ‘entangled’ ones, we have already proved that they ‘know’ each other’s spin, instantaneously. Assume they are ‘the exact same’. How does that fit with ‘locality’? Both ‘the arrow’ and radiation are local phenomena to me, always the same locally. Maybe the arrow is another name for ‘motion’, meaning that if it is local it has to spring from something ‘jiggling’. But it doesn’t answer ‘time’, as a notion from where that ‘jiggling’ can come to exist, if you see what I mean? To have a ‘motion’ you need an arrow as I see it, as it is through the arrow that ‘motion’ finds its definition. Maybe entanglement is what the universe really is? ‘Motion’ becoming the way we observe it through? Hmm, and now I’m getting mystical again. Sorry, I will have to blame it on it being ‘the day after’, after Friday I mean 🙂 19.
yoron says: Another thing that’s confusing is this statement. “The violation of Bell’s inequalities in Nature implies that either locality fails, or realism in the sense of classical physics fails in Nature, or both. When one looks at other types of data, it becomes totally unequivocal that locality holds while classical realism fails in Nature.” by Luboš Motl I can see that locality holds, if I by that mean what we measure directly, it’s sort of obvious. But isn’t an entanglement a ‘space like’ separation, although a ‘instantaneous’ correlation, in a observation? And by that also becoming a ‘non local’ phenomena? The point being that you can’t know what the polarization will be for any of the space separated objects until measuring on one of them, only that they will be opposite. That is, there is no ‘standard’ to any entanglements polarization other than this ‘oppositeness’ we expect. That you can’t say which state/polarization ‘A’ have until measured? And as you can’t specify that you also can prove that two separated ‘particles’ in space then must ‘know’ each other, or is it something more I’m missing there? • J Thomas says: Yoron, we cover the same material repeatedly. When two entangled photons are created, we know that their polarization is related but we don’t know what the polarization will be. There are two obvious ways to look at that. Maybe the polarization is set when they are created, and we don’t know what it is — it is a “hidden variable”. Or maybe the polarization is not set until one of them is measured, and then the other one instantaneously gets its polarization set too in violation of lightspeed etc. At first sight there’s nothing that says one of these ideas is better than the other. But somehow physicists know that the first is impossible and the second must be true. 
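For what it's worth, the standard impossibility argument can be stated in a few lines of arithmetic. This sketch writes out the CHSH form of Bell's inequality (the measurement angles and the singlet correlation E(a, b) = -cos(a - b) are textbook values, not anything from this thread): every deterministic local assignment of outcomes caps the CHSH combination at 2, while quantum mechanics predicts 2*sqrt(2) for suitable angles:

```python
import itertools
import math

# A deterministic local hidden-variable state assigns definite +/-1
# outcomes to both of Alice's settings (A, A2) and both of Bob's (B, B2).
best = 0.0
for A, A2, B, B2 in itertools.product((+1, -1), repeat=4):
    S = A * B - A * B2 + A2 * B + A2 * B2
    best = max(best, abs(S))
print("local hidden-variable bound:", best)  # 2

# Quantum prediction for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2              # Alice's two analyzer angles
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two analyzer angles
S_qm = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("quantum prediction:", abs(S_qm))  # 2*sqrt(2), about 2.83
```

Averaging over any probability distribution of such local states cannot exceed the bound of 2 either, since an average of quantities each at most 2 is at most 2; the experimentally observed value near 2.83 is what rules the local "real but unknown" picture out.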
I have not yet seen any explanation of the argument for why the first is impossible, but a whole lot of physicists say they know it’s impossible according to quantum theory, and also there are experiments which show it. I don’t understand it yet. • Long term, I’m going to try to go through and understand in more detail the whole “hidden variable” argument and blog about it, so hold tight! I should note that, as said earlier, “hidden variables” are not completely excluded, but Bell’s theorem and the battery of experiments more or less confirming it strongly suggest that local hidden variable theories are inconsistent with observations. Physicists who really study this stuff carefully haven’t excluded the possibility of nonlocal hidden variable effects. 20. yoron says: Yes, I know. Wasn’t it John Bell who first proved statistically that there could be no classical ‘hidden variables’? (Bell’s theorem.) I don’t believe in any FTL communication myself, not macroscopically anyway. But neither am I sure what a ‘distance’ is, and that goes for both QM and ‘SpaceTime’. 21. yoron says: Yes, I’m afraid we do. It’s me trying to see it from the start, and keeping it as simple as possible. Bell’s theorem is what proves it statistically. It states that a classical solution isn’t possible, assuming local ‘hidden variable(s)’, although it still leaves open the possibility of non-local variables, i.e. FTL ‘communication’. But FTL would be a violation of causality, in which we would get improbable effects in some frames of reference, aka you answering me before I’ve even asked, according to relativity. So, macroscopically, FTL is a strict ‘no-no’ as far as I understand it. That leaves us with the question of how the geometry of the universe can change with relative motion and mass, relative to the observer? And that’s where I wonder, as I don’t expect FTL to be allowed macroscopically. But then again, I may all too easily be wrong 🙂 22.
yoron says: Sorry, my first reply didn’t show up until after I had posted the second? 23. yoron says: Eh, by ‘macroscopically’ I just meant ‘SpaceTime’ here, and relativity, nothing more. We have two views: one is QM, the other is Einstein’s relativity. Some physicists try to join them, most maybe? I’m not sure there. One discusses ‘superpositions’ etc, and statistics creating probabilities. The other discusses mostly linear functions, involving macroscopic as well as microscopic causality chains, following an ‘arrow of time’. You can use radiation’s speed and ‘motion’ as a microscopic example of our ‘classic’ causality chains, and planets’ orbits as an example of macroscopic causality chains. 24. yoron says: Oh, thanks, wish there was a way to edit and also remove a double post though. It looks silly with two posts stating the same thing 🙂 And, thinking of it, both QM and Relativity assume an arrow of time existing. Otherwise you can’t have statistics, as there would be no order on which you could base your expectations in quantum mechanics. 25. “Imagine putting electron number 1 in atom number 1 and electron number 2 in atom number 2. Well after waiting a while it doesn’t anymore make sense to say that “electron number 1 is still in atom number 1″. It might be in atom number 2 now […]” Seems like this guy needs to read his Hartree-Fock method for solving Schrödinger’s equation for a multielectron system again; not to mention how to build a Slater determinant for a 2-electron system. Then again, let’s just hope some kids listen to his lectures, become interested in science and eventually realise he is just (way) overextending some metaphors. Pseudoscience propagates very fast through media like the internet basically because it doesn’t require you to think/test/prove anything; it just requires you to believe in the premises and flow along the dodgy logic with which the conclusions are weaved.
Efforts like yours and other bloggers’, such as the ones you mentioned in your post, will counteract this propagation and proliferation, but only in time. I don’t know what is worse: a country like mine (Mexico), where science is disregarded and neglected, or the USA, where science is even outlawed (well, not exactly, but you know what I mean) as in the infamous Kansas School Board case! I mean, nowadays it seems like any politician’s stand on evolution should be part of his campaign platform! Ridiculous. Congratulations on yet another wonderful post. • Thank you for the comment, and the compliment! Indeed, pseudoscience flows far too quickly through the media and the political systems. It’s not even a new problem, really; the use of wordplay to justify a “scientific” conclusion reminds me of this comment from an article criticizing perpetual motion back in the late 1800s: The propounder of perpetual motion theories does not always confine himself to diagrams, but sometimes deludes himself in a cloud of verbiage. Here is a sample. “Let us,” says the theorist, “construct a wheel of immense dimensions. On one side of it, let there be hung a huge mass. On the opposite side suspend innumerable small weights. Then shall it be found that the wheel will continually revolve. For when the huge mass is at the top, its weight will cause it to descend. Why is this? The answer is obvious — because it is so heavy. In the meantime the innumerable small weights will reach the top, and thereupon they will descend. Why is this? The answer again is clear — because there are so many.” 26. Jason Buckley says: Thanks for this patient and layman-friendly rebuttal. Just seen the lecture for the first time in a 2013 repeat. Not a physicist, but thought the final claims were rather extravagant. Annoying that the BBC just repeats the programme as if it’s not controversial. • You’re very welcome! Yeah, it is rather irritating.
Part of the point of science is admitting when an argument isn’t quite right and revising it accordingly, something that hasn’t been done. 27. Yoron says: It’s nice rereading this one again. Reminds me of all the things I don’t understand 🙂 The reason why this simple transparent-mirror effect can’t simply be explained as a ‘set variable’ constructed by the mirror is that, by probability, that wave function can’t be set until measured, if I remember right? And it is experimentally proved (as I remember it) that there is no way to know, until measured, which polarization the measured particle will have. You can argue that for identical experiments, but with no way to determine how that mirror will ‘influence/polarize’ the photon you measure, must there be a hidden variable? Or you can define it such that the two photons are ‘linked’ in a ‘spooky action at a distance’. But to define that ‘hidden variable’ requires a clearer mind than mine, not that it ever is that clear 🙂 How would a hidden variable exist? Assuming ‘identical experiments’ giving you different polarizations from identical photons? If I now remember this correctly. 28. Yoron says: A crazy thought: how does a photon see the universe? Does it see it, or is it just us seeing? Then it comes down to the arrow. 29. Phil says: A photon “sees” the Universe as a flat 2D plane. Since it takes no time to travel anywhere, everything is at the same “depth”. • Yoron says: 1. Photons are timeless (as far as physics knows experimentally). 2. What should the Lorentz contraction, as observed in the direction of motion, reach in the case of a photon? Infinite contraction, or is there a limit where you can assume a point-like existence? 3. What would happen to a signal from a relativistically moving object, sent in the opposite direction from its motion? It would redshift (waves), and in the case of a photon? Would it warp? And the redshift itself then: is there a limit to how redshifted something can become relative to the moving observer?
I could assume that there must be a limit, as I can imagine a stationary (inertial) observer able to watch that ship’s signal, but I’m not sure, although it seems a contradiction in terms. If there are no limits to a redshift, what would that imply in the case of those two observers? 30. Yoron says: The redshift produced by the motion, seen from the moving object, will still be at ‘c’. And the stationary observer should see it redshifted too, at ‘c’ from his point of view too. This is assuming a reciprocal effect relative to ‘c’, different coordinate systems, and energy. But then you have the light quantum itself, which shouldn’t change intrinsically? 31. Yoron says: I know. Sometimes one just has to let go. But it is confusing 🙂 32. This post just earned a “follow”, though I’ll take issue by noting that physics in general rests on a number of theoretical assumptions, from standard cosmology to the “realness” of the wave function. To wit: “…the wave nature of particles seems pretty airtight at this point, especially after the theoretical work of John Stewart Bell on Bell’s inequalities and the experimental verification of this, which strongly demonstrates that no ‘local’ theory of particle behavior can reproduce the observed properties of quantum mechanics.” I’ll just note here that when pressed, Bell himself admitted that a deterministic universe negated this assumption. The press of academic compliance is a powerful influence, especially when it accounts for funding. But even back in 1956, when Chien-Shiung Wu experimentally demonstrated the asymmetry of nuclear electron emissions with regard to spin, the established scientific consensus of that time compelled Wolfgang Pauli (who had first proposed the idea of an electron’s “spin”) to call her work “…total nonsense.” It took two more years of experimental replications for the discovery to be accepted, resulting in a Nobel Prize (though not for Wu).
Compounding the problem nowadays are the pop-media personalities who seem to have no problem publicly conflating philosophy and metaphysics with actual science… Cox, Carroll, Neil deGrasse Tyson, and a seemingly endless stream of “discoveries” that threaten to bring down the established scientific paradigm by attempting to resuscitate yet again some long-dead theory of everything (Fermilab). It’s become profoundly refreshing to hear a genuine scientist have the courage to say, “That’s a good question. I don’t know the answer.”
Iterative Parallel Crank-Nicolson Method Published: 4 November 2020| Version 1 | DOI: 10.17632/6kch68byyn.1 Jordan Taylor, Michael Bromley We demonstrate the rapid convergence of an iterative approximation that enables the Crank-Nicolson method for solving partial differential equations to be parallelized with arbitrarily small error, so long as the temporal step size is kept sufficiently small compared to the square of the spatial step size. Unlike the original Crank-Nicolson method, our method can handle non-linear terms. We apply the new method specifically to linear and non-linear 1-D Schrödinger equations, but it can be applied to any parabolic partial differential equation, such as those used to describe heat flow, diffusion, or dynamics of financial markets. The extension of the method to handle cyclic boundary conditions or 3-D wavefunctions is straightforward. Example code is given in C++. Included is the [iterative_parallel_CN.cpp] code file, which can be compiled with $ g++ -fopenmp iterative_parallel_CN.cpp -std=c++11 -lm -Wall -O3 -o iterative_parallel_CN.exe The input file [iterative_parallel_CN.inp] contains the variables to be given to the program when run: $ ./iterative_parallel_CN.exe The results are written to multiple raw files [iterative_parallel_CN_Xiterations_XXXXXX.dat], and one summary file [iterative_parallel_CN_summary.dat]. The raw files can be turned into an animation with gnuplot using the file animate.gpi: $ gnuplot animate.gpi This will produce a GIF of whatever scenario you put into the .inp file (by default, a Gaussian oscillating in a harmonic trap). University of Queensland Physics, Partial Differential Equation, Numerical Algorithm, Parallelization, Iterative Method, Domain Decomposition Methods, Computer Simulation, Numerical Modeling, Crank's Equation, Bose-Einstein Condensate, Quantum Physics, Non-Linear Partial Differential Equations, Schrödinger Equation
From Physics(US): “Quantum Mechanics Must Be Complex” January 24, 2022 Alessio Avella, The National Institute of Metrological Research [Istituto Nazionale di Ricerca Metrologica](IT) Two independent studies demonstrate that a formulation of quantum mechanics involving “complex” rather than real numbers is necessary to reproduce experimental results. Credit: Carin Cain/American Physical Society(US) Figure 1: Conceptual sketch of the three-party game used by [Chen and colleagues] and [Li and colleagues] to demonstrate that a real quantum theory cannot describe certain measurements on small quantum networks. The game involves two sources distributing entangled qubits to three observers, who calculate a “score” from measurements performed on the qubits. In both experiments, the obtained score isn’t compatible with a real-valued, traditional formulation of quantum mechanics. “Complex” numbers are widely exploited in classical and relativistic physics. In electromagnetism, for instance, they tremendously simplify the description of wave-like phenomena. However, in these physical theories, “complex” numbers aren’t strictly needed, as all meaningful observables can be expressed in terms of real numbers. Thus, “complex” analysis is just a powerful computational tool. But are “complex” numbers essential in quantum physics—where the mathematics (the Schrödinger equation, the Hilbert space, etc.) is intrinsically “complex”-valued? This simple question has accompanied the development of quantum mechanics since its origins, when Schrödinger, Lorentz, and Planck debated it in their correspondence [1]. But early on, the pioneers of quantum mechanics abandoned the attempt to develop a quantum theory based on real numbers because they thought it impractical.
However, the possibility of using real numbers was never formally ruled out, and recent theoretical results suggested that a real-valued quantum theory could describe an unexpectedly broad range of quantum systems [2]. But this real-number approach has now been squashed by two independent experiments, performed by Ming-Cheng Chen of The University of Science and Technology [中国科学技术大学](CN) at Chinese Academy of Sciences [中国科学院](CN) [3] and by Zheng-Da Li of The Southern University of Science and Technology[南方科技大學](CN) [4]. The two teams show that within a standard formulation of quantum mechanics “complex” numbers are indispensable for describing experiments carried out on simple quantum networks. A basic starting point for quantum theory is to represent a particle state by a vector in a “complex”-valued space called a Hilbert space. However, for a single, isolated quantum system, finding a description based purely on real numbers is straightforward: It can simply be obtained by doubling the dimension of the Hilbert space, as the space of complex numbers is equivalent, or “isomorphic,” to a two-dimensional, real plane, with the two dimensions representing the real and imaginary part of “complex” numbers, respectively. The problem becomes less trivial when we consider the unique quantum correlations, such as entanglement, that arise in quantum mechanics. These correlations can violate the principle of local realism, as proven by so-called Bell inequality tests [5]. Violations of Bell tests may appear to require “complex” values for their description [6]. But in 2009, a theoretical work demonstrated that, using real numbers, it is possible to reproduce the statistics of any standard Bell experiment, even those involving multiple quantum systems [2]. The result reinforced the conjecture that “complex” numbers aren’t necessary, but the lack of a general proof left open some paths for refuting the equivalence between “complex” and “real” quantum theories. 
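(As an aside, the "doubling" used in such real-number constructions can be written down explicitly; the following is standard linear algebra rather than part of the article.) A complex amplitude and a complex state map to real objects via

$$ a + ib \;\longmapsto\; \begin{pmatrix} a & -b \\ b & a \end{pmatrix}, \qquad \psi = u + iv \in \mathbb{C}^n \;\longmapsto\; \begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^{2n}, $$

with the imaginary unit represented by the real matrix $J = \begin{pmatrix} 0 & -\mathbb{1}_n \\ \mathbb{1}_n & 0 \end{pmatrix}$, which satisfies $J^2 = -\mathbb{1}_{2n}$. Every complex-linear operation then has a real counterpart on the doubled space, which is why a single isolated system can always be described with real numbers; the network experiments discussed here probe whether such real-valued bookkeeping can survive the independent-source, tensor-product structure of the standard postulates.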
One such path was identified in 2021 through the brilliant theoretical work of Marc-Olivier Renou of The Institute of Photonic Sciences [Instituto de Ciencias Fotónicas](ES) and co-workers [7]. The researchers considered two theories that are both based on the postulates of quantum mechanics, but one uses a “complex” Hilbert space, as in the traditional formulation, while the other uses a real space. They then devised Bell-like experiments that could prove the inadequacy of the real theory. In their theorized experiments, two independent sources distribute entangled qubits in a quantum network configuration, while causally independent measurements on the nodes can reveal quantum correlations that do not admit any real quantum representation. Chen and colleagues and Li and colleagues now provide the experimental demonstration of Renou and co-workers’ proposal on two different physical platforms. The experiments are conceptually based on a “game” in which three parties (Alice, Bob, and Charlie) perform a Bell-like experiment (Fig. 1). In this game, two sources distribute entangled qubits between Alice and Bob and between Bob and Charlie, respectively. Each party independently chooses, from a set of possibilities, the measurements to perform on their qubit(s). Since the sources are independent, the qubits sent to Alice and Charlie are originally uncorrelated. Bob receives a qubit from both sources and, by performing a Bell-state measurement, he generates entanglement between Alice’s and Charlie’s qubits even though these qubits never interacted (a procedure called “entanglement swapping” [8]). Finally, a “score” is calculated from the statistical distribution of measurement outcomes. As demonstrated by Renou and co-workers, a “complex” quantum theory can produce a larger score than the one produced by a real quantum theory. The two groups follow different approaches to implement the quantum game.
Chen and colleagues use a superconducting quantum processor in which the qubits have individual control and readout. The main challenge of this approach is making the qubits, which sit on the same circuit, truly independent and decoupled—a stringent requirement for the Bell-like tests. Li and colleagues instead choose a photonic implementation that more easily achieves this independence. Specifically, they use polarization-entangled photons generated by parametric down-conversion and detected in superconducting nanowire single-photon detectors. The optical implementation comes, however, with a different challenge: The protocol proposed by Renou and co-workers requires a complete Bell-state measurement, which can be directly implemented using superconducting qubits but is not achievable exploiting linear optical phenomena. Therefore, Li and colleagues had to rely on a so-called “partial” Bell-state measurement. Despite the difficulties inherent in each implementation, both experiments deliver compelling results. Impressively, they beat the score of real theory by many standard deviations (by 43 σ and 4.5 σ for Chen’s and Li’s experiments, respectively), providing convincing proof that complex numbers are needed to describe the experiments. Interestingly, both experiments are based on a minimal quantum network scheme (two sources and three nodes), which is a promising building block for a future quantum internet. The results thus offer one more demonstration that the availability of new quantum technologies is closely linked to the possibility of testing foundational aspects of quantum mechanics. Conversely, these new fundamental insights on quantum mechanics could have unexpected implications on the development of new quantum information technologies. We must be careful, however, in assessing the implications of these results. One might be tempted to conclude that “complex” numbers are indispensable to describe the physical reality of the Universe. 
However, this conclusion is true only if we accept the standard framework of quantum mechanics, which is based on several postulates. As Renou and his co-workers point out, these results would not be applicable to alternative formulations of quantum mechanics, such as Bohmian mechanics, which are based on different postulates. Therefore, these results could stimulate attempts to go beyond the standard formalism of quantum mechanics, which, despite great successes in predicting experimental results, is often considered inadequate from an interpretative point of view [9]. C. N. Yang, “Square root of minus one, complex phases and Erwin Schrödinger,” Selected Papers II with Commentary (World Scientific, Hackensack, 2013)[Amazon][WorldCat]. M. McKague et al., “Simulating quantum systems using real Hilbert spaces,” Phys. Rev. Lett. 102, 020505 (2009). M.-C. Chen et al., “Ruling out real-valued standard formalism of quantum theory,” Phys. Rev. Lett. 128, 040403 (2022). Z.-D. Li et al., “Testing real quantum theory in an optical quantum network,” Phys. Rev. Lett. 128, 040402 (2022). A. Aspect, “Closing the door on Einstein and Bohr’s quantum debate,” Physics 8, 123 (2015). N. Gisin, “Bell Inequalities: Many Questions, a Few Answers,” in Quantum Reality, Relativistic Causality, and Closing the Epistemic Circle, edited by W. C. Myrvold et al. The Western Ontario Series in Philosophy of Science, Vol. 73 (Springer, Dordrecht, 2009)[Amazon][WorldCat]. M.-O. Renou et al., “Quantum theory based on real numbers can be experimentally falsified,” Nature 600, 625 (2021). J.-W. Pan et al., “Experimental entanglement swapping: Entangling photons that never interacted,” Phys. Rev. Lett. 80, 3891 (1998). T. Norsen, Foundations of Quantum Mechanics – An Exploration of the Physical Meaning of Quantum Theory, Undergraduate Lecture Notes in Physics (Springer, Cham, 2017)[Amazon][WorldCat]. See the full article here . Please help promote STEM in your local schools. 
Stem Education Coalition Physicists are drowning in a flood of research papers in their own fields and coping with an even larger deluge in other areas of physics. How can an active researcher stay informed about the most important developments in physics? Physics (US) highlights a selection of papers from the Physical Review journals. In consultation with expert scientists, the editors choose these papers for their importance and/or intrinsic interest. To highlight these papers, Physics features three kinds of articles: Viewpoints are commentaries written by active researchers, who are asked to explain the results to physicists in other subfields. Focus stories are written by professional science writers in a journalistic style and are intended to be accessible to students and non-experts. Synopses are brief editor-written summaries. Physics provides a much-needed guide to the best in physics, and we welcome your comments.
book review Quantum Mechanics textbooks Reviewed by: T.J. Nelson Okay. You haven't renormalized a Schrödinger equation since 1976. The last time you did an integration by parts, people were driving around in Ford Pintos and Jimmy Carter was being held hostage in the White House by Iranians. Suddenly your quantum breaks down and you need to have it repaired. Your local quantum mechanic professes ignorance about such things and, never having seen it, is skeptical that your quantum even exists, let alone whether it can be repaired. So you have to read a book. But which one? Introduction to Quantum Mechanics by David J. Griffiths Griffiths is an introductory textbook on quantum mechanics that is written with clarity and simplicity. The author provides anecdotal details that enhance the reader's intuitive understanding of the subject. However, there are no worked examples in the book, and the answers to the problems are available only to instructors. For most subjects this would not be a serious drawback, but physics is not one of those subjects. Physics is the study of phenomena that can be studied mathematically. The concepts in physics are relatively few in number and relatively simple, but the student must learn how to manipulate the equations to solve problems. This can only be learned by working through the exercises. Unfortunately, the absence of worked examples in Griffiths' book makes it impossible for readers to check their answers, making the book useless outside of a classroom setting. The book also is not particularly rigorous. However, in practice this should not pose too much of a problem, since a qualified instructor would be essential for this 394-page book to be useful as anything more than a doorstop. (Indeed, I have found that this book is the perfect size for keeping my door open.) Principles of Quantum Mechanics, 2nd ed. by R. Shankar Shankar starts out with the basic mathematical tools needed to understand quantum mechanics.
Shankar's book is well written, and is far friendlier than Griffiths for students who are learning the subject on their own, or who are returning to it later after moving on to some other field. Many elementary aspects of QM, such as Dirac's ugly 'ket' notation, create endless problems for students schooled in statistics or information theory, where the same notation is employed with quite different meanings. This 676-page book introduces ket notation from the very beginning. Bigger is not necessarily better, however, and this book starts at a lower level than the other books, making the pace slow. QM is not introduced until page 115. However, this book does contain solved problems, and covers Feynman path integrals more thoroughly than the other books. Quantum Theory: Concepts and Methods by A. Peres Neither the writing style nor the print quality of this book is in the same class as the previous books. Peres has an idiosyncratic approach to physics that is reflected in a more haphazard coverage of the field. The author expresses negative opinions about the more exotic aspects of QM (such as the many-worlds theory) throughout the book. This may tend to demoralize some readers. However, the book does cover topics like Bell's Inequality, information theory, and the Kochen-Specker theorem. Problems in Quantum Mechanics by I.I. Gol'dman and V.D. Krivchenkov Speaking of old-fashioned, this one is a reprint of an old 1963 book that focuses on perturbation matrices. The print quality is significantly poorer than that of the other books, but if you're stuck on this topic (which is an important and difficult one), it's very cheap. Quantum Mechanics: Concepts and Applications by Nouredine Zettili This book is almost as big as Shankar (648 vs 676 pages), but is crammed full of solved problems. Most of the problems are more than just of the "Prove this equation" type and, like Sakurai's, relate to the experimental basis of the subject, and do so without sacrificing rigor.
The only drawback to this book, aside from being a paperback, is that, like the other books reviewed here, it doesn't cover more exotic topics like hidden variables or the role of the observer. However, this book covers both the theory and problem solving in an integrated way. The solutions to the problems are mostly easy to follow, even if your math is rusty. However, the author sometimes gives the wrong starting point for solving the problems; a book like Handbook of Mathematics by Bronshtein and Semendyayev is highly recommended to avoid hours of head-scratching. Because familiarity with basic formulas from physics is also assumed, it also helps to have a copy of Handbook of Physics or a regular physics textbook like Fundamentals of Physics by Halliday, Resnick, and Walker on hand. Because the examples tend to break up the text, the writing style is less engaging than that in Shankar or Griffiths. But, of course, none of these books is intended to be Shakespeare; and to be fair, Shakespeare never gave any worked examples for calculating eigenfunctions of orbital angular momentum, and most of his plays barely even mention the Wentzel-Kramers-Brillouin method for calculating wave functions of a particle. Modern Quantum Mechanics by J.J. Sakurai This nicely printed and well-written book is distinguished by a greater emphasis on actual experimental phenomena than most other books. Unlike the other books described here, Sakurai's book touches on important questions like Bell's Inequality. The material is also introduced at a higher level than Griffiths and Shankar, with lots of mathematics, but suffers from the same problem: lots of problems, precious few answers. Sakurai often gives concise verbal explanations of what each thing actually means. This is counterbalanced by an annoying tendency to pull equations out of a hat and skip steps in his derivations.
This book, while much better than Griffiths, would still be useful only in a classroom setting or in conjunction with some other book that contains worked examples and derivations whose steps are explained better. Schaum's Outlines of Theory and Problems in Quantum Mechanics This is an ugly book printed on cheap newsprint-like paper, like that found in SAT booklets, and is aimed at struggling undergraduate students practicing for exams. Very little theory; mostly solved problems with a few badly drawn diagrams. It also has a short chapter on numerical methods that includes snippets of Fortran code. Another textbook is the two-volume work by Cohen-Tannoudji et al. Whichever QM textbook you use, you will probably need to bounce from one to the other many times before finding one that describes any given topic with any degree of clarity. It is obvious that each of these books sucks in its own unique and wonderful way. Proof of this is left as an exercise for the reader.
Why did Nature Invent Spin? I think this issue receives too little attention. Usually, it is said that spin is a consequence of the Dirac equation and thus something that follows necessarily from relativity and quantum mechanics. Let us have a brief look at the argument. Schrödinger’s non-relativistic equation is $latex -\frac{\hbar^2}{2m}\Delta\psi = i\hbar\,\frac{\partial\psi}{\partial t}$. The momentum operator (classically, p = mv) is $latex \hat{p} = -i\hbar\nabla$, and thus the term on the left-hand side of Schrödinger’s equation is derived simply from the kinetic energy ½mv² = p²/2m. It is interesting that none of the successful predictions of Schrödinger’s equation for the hydrogen atom make specific reference to the nature of the electron (for which the wave function gives a probability that it will be found in a certain state). They refer only to the kinetic energy, irrespective of the type of wave-natured particle that orbits the nucleus (in fact, it also works for muonic atoms). Dirac used the correct special relativistic term for the energy, $latex E = \sqrt{m^2c^4 + c^2p^2}$, and replaced Schrödinger’s term. However, there was no explicit justification for switching from the kinetic energy to the total energy of the particle. This conceptual problem was somehow overshadowed by the mathematical problem arising from the Laplace operator under the square root, to which Dirac found an ingenious solution using the matrices named after him. The algebra of the Dirac matrices turned out to be a description of spin. Subsequently, the opinion spread that spin was a consequence of putting relativistic energy into the basic equations of quantum mechanics. However, the initial problem of the missing equivalence of kinetic and total energy persisted. Dirac was also disappointed that he could not deduce any concrete properties of the electron from his equation. The retrospective narrative is that the positron, undiscovered in 1928, was a ‘prediction’ of Dirac’s theory, but Dirac had rather sought to explain the huge mass ratio of the proton and electron, which is 1836.15.
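The step just described can be stated compactly (standard textbook material, added here for reference). Dirac sought a Hamiltonian linear in the momentum, $latex H = c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m c^2$, whose square reproduces $latex H^2 = p^2c^2 + m^2c^4$. Expanding the square forces the conditions $latex \alpha_i\alpha_j + \alpha_j\alpha_i = 2\delta_{ij}$, $latex \alpha_i\beta + \beta\alpha_i = 0$, $latex \beta^2 = 1$, which no ordinary numbers can satisfy; the smallest solutions are 4×4 matrices, and their algebra is what encodes spin (together with the particle-antiparticle degree of freedom).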
In fact, in his later days, Dirac distanced himself a little from his earlier findings and, according to his biographer Helge Kragh, was “disposed to give up everything for what he had become famous”. Let us adopt another perspective on the nature of spin, one related to the properties of three-dimensional space, the world we perceive (those who perceive more dimensions should see a doctor). The group of rotations SO(3) obviously must have some significance, but its topology is a little intricate. It lacks a property called ‘simple connectedness’, because some closed paths in SO(3) cannot be contracted to a point. An object connected to a fixed point by a ribbon must rotate through 720 degrees, not just 360 degrees, in order to return to a configuration in which the ribbon is untwisted (see the visualization here). It seems that nature has a predilection for the generalized rotations called SU(2), which are mathematically simpler and have a surprising feature: they represent precisely the electron’s spin – you need to perform a double twist of 720 degrees, rather than just 360 degrees, to get back to the original position. However, this picture contains no Dirac matrices, and thus I think there is an open problem. It seems that the properties of three-dimensional space alone are sufficient to cause spin to emerge – no relativity or quantum mechanics is needed. To put it another way, a direct understanding of quantum mechanics from the geometrical properties of space, if there is one, is still missing. 13 thoughts on “Why did Nature Invent Spin?” 1. There are papers on this sort of thing, Alexander, but they struggle to get into journals, and then they struggle to get any publicity. See for example http://www.cybsoc.org/electron.pdf and look at the picture on page 6. Note the dark line. It’s essentially the same as Qiu-Hong Hu’s helix at http://arxiv.org/abs/physics/0512265.
Also look at gamma-gamma pair production, electron diffraction, the Einstein-de Haas effect, magnetic moment, the wiki atomic orbitals article where you can read that “electrons exist as standing waves”, and of course annihilation. The electron is a 511keV photon perpetually displacing its own path into a closed Dirac’s-belt path. This only works at 511keV because that wavelength “fits” with h. See the spindle-sphere torus at http://www.antiprism.com/album/860_tori/imagelist.html and try to imagine it without a surface. It has as much surface as a subterranean seismic wave. The electron’s “intrinsic” spin is something like the intrinsic spin of a cyclone. Take that away using an anticyclone, and all you’ve got is wind. Take the electron’s spin away with a positron, and all you’ve got is light. A medical doctor called Andrew Worsley told me about something else that looks interesting: Planck length is l=√(ћG/c³). Replace √(ћG) with 4πn where n is a suitable value with the correct dimensionality. You’ve still got your Planck length. But now set n to 1, and work out 4πn/√(c³). There’s a binding energy adjustment, but it’s small, like 2.002319 compared to 2. Look at the Watt balance section of the Wikipedia kilogram article. That refers to g rather than G, but if you can define the kilogram using h and c and not much else, surely you can do the same for the mass of the electron. Photon momentum is resistance to change-in-motion for a wave propagating linearly. Electron mass is resistance to change-in-motion for a wave going round and round. See http://www.tardyon.de/mirror/hooft/hooft.htm but note that the ‘t Hooft here is not the Nobel ‘t Hooft. 2. Rotations in 3D in our theory are not physical rotations, but recalculation formulas from one reference frame to another one oriented differently. That is why, for example, there is no angular velocity of rotations in such formulas.
And in 3D space there are only $2\pi$ different angles (reference frames), speaking figuratively. 3. Lol, this is so wrong I don’t know where to begin – what a target-rich environment.
1) You forgot to include the potential energy term in Schrodinger’s equation. Without it you will have no prediction for the hydrogen atom.
2) You can’t derive Schrodinger’s equation from the relationship between energy and momentum. At best, it acts as a motivation.
3) The ribbon twisting of 720 degrees doesn’t have any connection to the fact that you need a 720-degree rotation for an electron to get back to its original state. It’s just a coincidence that SU(2) is the universal cover of SO(3).
4) Spin is a quantum mechanical property, period. You simply can’t have spin, i.e. an intrinsic angular momentum, without actually moving parts in classical physics. For example, trying to ascribe the electron’s spin to something like axial rotation will immediately lead to faster than light speeds.
Apart from these glaring mistakes this post is a collection of inchoate sentences and charming ignorance. Rather than dissing particle physicists make an honest-to-god effort to understand what they have accomplished. You’ll never be able to repeat what they did but at least you’ll be able to appreciate what a supreme edifice to human intellect particle physics is. • Would be nice if you attempted to answer Ray’s legitimate questions, Alexander Unzicker. Why, for example, do you show the Schrödinger equation (SE) for a free particle when writing about the hydrogen atom? You claim that the non-relativistic SE had made successful predictions for the hydrogen atom (H) without explicit reference to the features of the electron. Hold on! Even in the free SE shown by you, a feature of the electron is clearly seen on the left side: its mass. The simplest SE for H has only a term for the Coulomb potential; there’s no spin.
Nevertheless, the mass and charges of the electron and the proton need to be entered for concrete predictions. Four values falling from the heavens. Historically, the simple SE for H soon turned out not to describe the H sufficiently. For example, a term for the spin needed to be added, another feature of the electron not being an intrinsic part of the SE, but added manually (reminds me of adding another epicycle :) That disappointment was a strong motivation to look for a more general description. By the way, although the general formulas for the muonic hydrogen are similar, actual solutions are different and a muonic H is easily distinguished from a “normal” one. 4. Since the end of the 19th century, there have been many hard-to-drill problems encountered in physics that have often been bypassed in favour of making rapid progress in other areas. These range from failed attempts to create a viable electromagnetic worldview, failed attempts to determine a finite extended structure for the electron, interpretation of Planck’s Blackbody radiation equation, and wave-particle dualism to infinity removal methods in QED. The concept of ‘spin’ has also been a controversial matter. Mathematical notation has been one of the controversial areas and some physicists such as Heisenberg have even advocated abandoning the connection of mathematics to physical concepts. There is also a methodology in mathematical physics that is blind to the physics and has only the achievement of a known value as its goal. Dirac leaned more towards the mathematical side in his formulation of relativistic QM. “I must say that I am very dissatisfied with the situation, because this so-called good theory does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics.
Sensible mathematics involves neglecting a quantity when it turns out to be small – not neglecting it just because it is infinitely great and you do not want it!” (Dirac, On Quantum Mechanics and Mathematics, 1937) http://www.spaceandmotion.com/physics-paul-dirac.htm There were disputes regarding the use of quaternions vs. vectors in EM theory and general physics that have resurfaced in Quantum Mechanics (see Mendel Sachs and David Hestenes). It should be noted that a notation system can hamper efforts at understanding if it doesn’t fully reflect or make explicit the underlying physical reality. “The physicist cannot simply surrender to the philosopher the critical contemplation of the theoretical foundations; for he himself knows best and feels most surely where the shoe pinches…. he must try to make clear in his own mind just how far the concepts which he uses are justified… The whole of science is nothing more than a refinement of everyday thinking.” – Albert Einstein I would add that physicists should not surrender critical contemplation of the theoretical foundations to mathematicians either, even if the mathematicians obtain equations that work extremely well. E.g. QED, Matrix Mechanics, String Theory. So here is something interesting from Dr. David Hestenes regarding the employment of mathematics in physics: “My purpose is to lay bare some serious misconceptions that complicate quantum mechanics and obscure its relation to classical mechanics. The most basic of these misconceptions is that the Pauli matrices are intrinsically related to spin. On the contrary, I claim that their physical significance is derived solely from their correspondence with orthogonal directions in space. The representation of σi by 2×2 matrices is irrelevant to physics.” – Dr. David Hestenes. • This sounds… like Noether’s Theorem for dummies. Please learn to understand physics. 5.
I came up with the idea of a rotating wave for an electron back in 1987 while finishing a grad project in architecture and studying physics, which I did in my undergrad. Architecture problem solving taught me how to step outside the box, and that does not mean disregarding math and laws of physics. It means looking down a different road and wondering. Last year I got back into it earnestly and finally properly derived the Gkl using the affine connection. The electron wave traces a helical path in constant forward motion. The Rotor position vector is the eigenvector. Gravity slows down the rotation and the helical path spirals outward. The cylinder becomes a flute with intrinsic curvature. So now I am relearning the Weyl, Dirac, Schrodinger, Pauli equations and comparing with the wavefunction of the Rotating Wave. It is all such beautiful poetry. As a young boy I learned how to hoe the vineyard and that led me to wonder what is gravity. The soil is fertile and it needs be turned. Google Christie Wavicle to see the model. Disregard the attempt at the long derivation of Gkl. It has been corrected by the more recent and simple derivation just last year. My email address is billchristiearchitect@gmail.com Bill Christie
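Returning to the post’s central SO(3)/SU(2) claim: the double-cover relation that the ribbon argument illustrates can be checked directly with matrices. A minimal numerical sketch (rotation about the z axis chosen for convenience): a 360-degree rotation is the identity in SO(3), but in SU(2) it gives minus the identity, and only a 720-degree rotation returns to the identity.

```python
import numpy as np

def so3_z(theta):
    """Ordinary 3D rotation by angle theta about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def su2_z(theta):
    """SU(2) element exp(-i*theta*sigma_z/2) covering that rotation."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# A 360-degree rotation is the identity in SO(3) ...
assert np.allclose(so3_z(2 * np.pi), np.eye(3))
# ... but in SU(2) it is minus the identity: the "720 degree" effect
assert np.allclose(su2_z(2 * np.pi), -np.eye(2))
assert np.allclose(su2_z(4 * np.pi), np.eye(2))
```

The sign flip at 2π is exactly the property spin-½ wavefunctions exhibit; whether one reads this as geometry “causing” spin is, as the post says, open to debate.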
Tables for Volume C Mathematical, physical and chemical tables Edited by E. Prince International Tables for Crystallography (2006). Vol. C, ch. 4.1, pp. 186-187 Section 4.1.2. Electromagnetic waves and particles V. Valvoda, Department of Physics of Semiconductors, Faculty of Mathematics and Physics, Charles University, Ke Karlovu 5, 121 16 Praha 2, Czech Republic

4.1.2. Electromagnetic waves and particles

Both electromagnetic waves and particles can be described by the wavefunction ψ(r), as a complex function of spatial coordinates, by the wavelength λ, the wavevector k, which indicates the direction of propagation and is of magnitude 2π/λ, the frequency ν or angular frequency ω in rad s⁻¹, and the phase velocity v (and the group velocity). Intensity in r is given by |ψ(r)|². These wavefunctions are solutions of the same type of differential equation [see, for example, Cowley (1975)]:

$\nabla^2\psi + k^2\psi = 0.$

For electromagnetic waves,

$k^2 = \varepsilon\mu\omega^2 = \omega^2/v^2,$

where k is the wavenumber, $\varepsilon$ is the permittivity or dielectric constant and μ is the magnetic permeability of the medium; $\mu \approx 1$ for most cases. The velocity of the waves in free space is $c = 1/(\varepsilon_0\mu_0)^{1/2}$; otherwise $v = c/n$, where $n = (\varepsilon/\varepsilon_0)^{1/2}$ is the refraction index. For particles of mass m and charge q with kinetic energy $E_k$ in field-free space, the wave equation above is the time-independent Schrödinger equation and

$k^2 = \frac{8\pi^2 m}{h^2}\{E_k + q\,\mathscr{S}(\mathbf{r})\},$

where $\mathscr{S}(\mathbf{r})$ is the electrostatic potential function and the bracket gives the sum of the kinetic and potential energies of the particles.
Important nontrivial solutions of the wave equation are (after adding the time dependence) the plane wavefunctions

$\psi = \psi_0\exp\{i(\omega t - \mathbf{k}\cdot\mathbf{r})\}$

or the spherical wavefunctions

$\psi = \psi_0\,\frac{\exp\{i(\omega t - kr)\}}{r}.$

Thus, relatively simple semi-classical wave mechanics, rather than full quantum mechanics, is needed for interactions with no appreciable loss of energy. The interaction of the waves with matter depends on the spatial variation of the refractive index given by the spatial variations of the electron density or the electrostatic potential functions. Electromagnetic waves can also be described in terms of energy quanta, photons, with energy given by Planck's law

$E = h\nu.$

The values of E, ν, and λ of the electromagnetic waves used in general crystallography are scaled in the accompanying figure. It should be noted that there are several types of electromagnetic waves in the most important wavelength range near 1 Å, which are called X-rays (when generated in X-ray tubes), γ-rays (when emitted by radioactive isotopes) or synchrotron radiation (emitted by electrons moving in a circular orbit).

Figure: Comparison of the energy, frequency, and wavelength of the electromagnetic waves used in crystallography (logarithmic scale).

On the other hand, the beam of particles of mass m, moving with velocity v, behaves like waves with wavelength given by de Broglie's law

$\lambda = \frac{h}{mv}$

or, using $E_k = \frac{1}{2}mv^2$ for the kinetic energy of particles,

$\lambda = \frac{h}{(2mE_k)^{1/2}}.$

When relativistic effects are taken into account,

$\lambda = \lambda_0\left\{1 + \frac{E_k}{2m_0c^2}\right\}^{-1/2},$

where $m_0$ is the rest mass and $\lambda_0$ the non-relativistic wavelength. High-energy electrons ($E_k \approx 10^5$ eV, $\lambda \approx 10^{-2}$ Å) and neutrons ($E_k \approx 10^{-2}$ eV, $\lambda \approx 10^0$ Å) belong to the most prominent particles used in diffraction crystallography (see the table below).
However, low-energy electrons ($E_k \approx 10^2$ eV, $\lambda \approx 10^0$ Å), protons or ions of elements with quite high atomic number and energy ($E_k \approx$ 10³–10⁶ eV) are also used in scattering, channelling or shadowing experiments (see Section 4.1.5).

Table: Average diffraction properties of X-rays, electrons, and neutrons

Property                               X-rays           Electrons          Neutrons
(1) Charge                             0                −1 e               0
(2) Rest mass                          0                9.11 × 10⁻³¹ kg    1.67 × 10⁻²⁷ kg
(3) Energy                             10 keV           100 keV            0.03 eV
(4) Wavelength                         1.5 Å            0.04 Å             1.2 Å
(5) Bragg angles                       Large            Small              Large
(6) Extinction length                  10 µm            0.03 µm            100 µm
(7) Absorption length                  100 µm           1 µm               5 cm
(8) Width of rocking curve             5″               0.6°               0.5″
(9) Refractive index                   n < 1            n > 1              n ≶ 1
    (n = 1 + δ)                        δ ≈ −1 × 10⁻⁵    δ ≈ +1 × 10⁻⁴      δ ≈ ∓1 × 10⁻⁶
(10) Atomic scattering amplitudes f    10⁻³ Å           10 Å               10⁻⁴ Å
(11) Dependence of f on atomic
     number Z                          ~Z               ~Z²ᐟ³              Nonmonotonic
(12) Anomalous dispersion              Common           —                  Rare
(13) Spectral breadth                  1 eV             3 eV               500 eV
     (Δλ/λ)                            ≈ 10⁻⁴           ≈ 10⁻⁵             ≈ 2

Cowley, J. M. (1975). Diffraction physics, Chap. 1. Amsterdam: North-Holland.
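As a quick numerical cross-check of the de Broglie formulas above (the `de_broglie` helper is an illustrative name, and the constants are rounded CODATA values): a 100 keV electron comes out near the table's 0.04 Å after the relativistic correction, and a 0.03 eV thermal neutron lands at the ångström scale of the table's neutron entry.

```python
import numpy as np

h = 6.62607e-34    # Planck constant, J s
me = 9.109e-31     # electron rest mass, kg
mn = 1.675e-27     # neutron rest mass, kg
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

def de_broglie(m, Ek, relativistic=False):
    """Wavelength in metres from kinetic energy Ek in joules."""
    lam = h / np.sqrt(2.0 * m * Ek)
    if relativistic:
        # lambda = lambda_0 * {1 + Ek/(2 m c^2)}^(-1/2)
        lam *= (1.0 + Ek / (2.0 * m * c**2)) ** -0.5
    return lam

# 100 keV electron: the relativistic correction is already noticeable
lam_e = de_broglie(me, 1e5 * eV, relativistic=True)
# 0.03 eV thermal neutron: the non-relativistic formula suffices
lam_n = de_broglie(mn, 0.03 * eV)
print(lam_e * 1e10, lam_n * 1e10)   # wavelengths in angstroms
```

The electron value is about 0.037 Å; the neutron value is of order 1 Å, matching the orders of magnitude listed in the table.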
Tuesday 31 January 2017 The End of CO2 Alarmism • Climate sensitivity to CO2 emission vastly exaggerated. • Climate industrial complex a very dangerous special interest. And read about this historic press conference: Radiation as Superposition or Jumping? This is a continuation of this post on understanding of atomic radiation of frequency $E_2-E_1$ as resonance of "superposition of two eigenstates" of different frequencies $E_2>E_1$ according to realQM. In the standard view of the Copenhagen Interpretation by Bohr as stdQM, radiation is instead connected to the "jumping" of electrons between two energies/frequencies $E_2>E_1$. Which is more convincing: Superposition or jumping? Superposition connects to linearity, and realQM, while not linear (for more than one electron), may still show features of "near linearity" and thus allow understanding in the form of "superposition", even though realQM carries the full non-linear dynamics. On the other hand, "jumping" of electrons in stdQM either requires new physics, which is missing, or has no meaning at all. This connects to the non-physical nature of the atom of stdQM discussed in a previous post, which presents a contradiction in particular in the case of atomic radiation: atoms are observed to interact with the physics of electro-magnetics and thus must be physical, because interaction between non-physics and physics is telekinesis or psychokinesis, which is viewed as pseudo-science: String theory and multiversa are spin-offs of stdQM with the non-physical aspects driven to an extreme, and accordingly by many physicists viewed as pseudo-science. PS In Quantum Theory at the Crossroads Reconsidering the Solvay Conference 1927 we read (p 132): • In 1926, with the development of wave mechanics, Schrödinger saw a new possibility of conceiving a mechanism for radiation: the superposition of two waves would involve two frequencies and emitted radiation could be understood as some kind of "difference tone" (or beat).
• In his first paper on quantisation, Schrödinger states that this picture would be "much more pleasing than the one of quantum jump". • This idea is still the basis of today's semi-classical radiation theory (often used in quantum optics), that is, the determination of classical electromagnetic radiation from the current associated with a charge density proportional to $\vert\psi\vert^2$. • The second paper refers to radiation only in passing. Clearly, Schrödinger was heading in a fruitful direction, but he was stopped by Born, Bohr and Heisenberg. Monday 30 January 2017 Towards a Model of Atoms In my search for a realistic atom model I have found the following pieces: 1. Atom in ground state as harmonic oscillator: 3d free boundary Schrödinger equation: realQM. 2. Radiating atom as harmonic oscillator with small Abraham-Lorentz damping: previous post and Mathematical Physics of Black Body Radiation. 3. Radiating atoms in collective resonance with exterior electromagnetic field with acoustic analog: Piano Secret, which I hope to assemble into a model which can describe: • ground states and excited states as solutions of a 3d free boundary Schrödinger equation • emission and absorption of light by collections of atoms in collective in-phase resonance with an exterior electromagnetic field generated by oscillating atomic electric charge and associated Abraham-Lorentz damping. The key concepts entering into such a model, describing in particular matter-light interaction, are: • physical deterministic computable 3d continuum model of atom as kernel + electrons • electrons as clouds of charge subject to Coulomb and compression forces • no conceptual difference between micro and macro • no probability, no multi-d • generalised harmonic oscillator • small damping from Abraham-Lorentz force from oscillating electric charge • near-resonant forcing with half-period phase shift • collective phase coordination by resonance between many atoms and one exterior field.
Note that matter-light interaction is the scope of Quantum Electro Dynamics or Quantum Field Theory, which are very difficult to understand and use. What I seek is something which can be understood and which is useful. A model in the spirit of Schrödinger as a deterministic 3d multi-species continuum mechanical wave model of microscopic atoms interacting with macroscopic electromagnetics. I don't see that anything like that is available in the literature within the Copenhagen Interpretation of Bohr or any of its clones... Schrödinger passed away in 1961 after a life in opposition to Bohr since 1926 when his equation was hijacked, but his spirit lives... ...compare with the following trivial textbook picture of atomic radiation in the spirit of Bohr: Sunday 29 January 2017 The Radiating Atom In the analysis on Computational Blackbody Radiation I used the following model of a harmonic oscillator of frequency $\omega$ with small damping $\gamma >0$ subject to near-resonant forcing $f(t)$: • $\ddot u+\omega^2u-\gamma\dddot u=f(t)$ with the following characteristic energy balance between outgoing and incoming energy: • $\gamma\int\ddot u^2dt =\int f^2dt$ with integration over a time period and the dot signifying differentiation with respect to time $t$. An extension to Schrödinger's equation written as a system of real-valued wave functions $\phi$ and $\psi$ may take the form • $\dot\phi +H\psi -\gamma\dddot \psi = f(t)$ (1) • $-\dot\psi +H\phi -\gamma\dddot \phi = g(t)$ (2) where $H$ is a Hamiltonian, $f(t)$ and $g(t)$ represent near-resonant forcing, and $\gamma =\gamma (\dot \rho )\ge 0$ with $\gamma (0)=0$ and $\rho =\phi^2 +\psi^2$ is charge density. This model carries the characteristics of the model $\ddot\phi+H^2\phi =0$ as the 2nd-order-in-time model obtained after eliminating $\psi$ in the case $\gamma =0$, as displayed in a previous post.
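The scalar oscillator model $\ddot u+\omega^2u-\gamma\dddot u=f(t)$ above can be integrated numerically. The sketch below makes the standard small-damping assumption $\dddot u\approx -\omega^2\dot u$ for near-resonant motion (an approximation introduced here, not part of the original model), which turns the equation into $\ddot u+\gamma\omega^2\dot u+\omega^2 u=f(t)$; under resonant forcing $f=A\cos\omega t$ this has steady-state amplitude $A/(\gamma\omega^3)$.

```python
import numpy as np

# u'' + w^2 u - g u''' = f(t); for small damping we substitute
# u''' ~ -w^2 u' (near-resonance assumption), giving
# u'' + g w^2 u' + w^2 u = f(t)
w, g, A = 1.0, 0.1, 0.1
f = lambda t: A * np.cos(w * t)          # forcing exactly at resonance

def rhs(t, y):
    u, v = y
    return np.array([v, f(t) - w**2 * u - g * w**2 * v])

# classical RK4 integration from rest
dt, T = 0.01, 300.0
y = np.array([0.0, 0.0])
t = 0.0
tail = []                                # |u| samples over the last period
while t < T:
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2, y + dt/2 * k1)
    k3 = rhs(t + dt/2, y + dt/2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += dt
    if t > T - 2*np.pi / w:
        tail.append(abs(y[0]))

amp = max(tail)          # steady-state amplitude, approaches A/(g*w**3) = 1
print(amp)
```

After the transient decays (time constant $2/(\gamma\omega^2)$), the computed amplitude settles at the analytic value, illustrating how a small damping term fixes the response of a resonantly forced oscillator.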
In particular, multiplication of (1) by $\phi$ and (2) by $-\psi$ and addition gives conservation of charge if $f(t)\phi -g(t)\psi =0$ as a natural phase shift condition. Further, multiplication of (1) by $\dot\psi$ and (2) by $\dot\phi$ and addition gives a balance of total energy as inner energy plus radiated energy • $\int (\phi H\phi +\psi H\psi)dt +\gamma\int (\ddot\phi^2 +\ddot\psi^2)dt$ in terms of work of forcing. Saturday 28 January 2017 Physical Interpretation of Quantum Mechanics Needed Thursday 26 January 2017 Why Atomic Emission at Beat Frequencies Only? An atom can emit radiation of frequency $\nu =E_2-E_1$ (with Planck's constant $h$ normalized to unity, allowing energy to be replaced by frequency), where $E_2>E_1$ are two frequencies as eigenvalues $E$ of a Hamiltonian $H$ with corresponding eigenfunction $\psi (x)$ depending on a space coordinate $x$ satisfying $H\psi =E\psi$ and corresponding wave function $\Psi (x,t)=\exp(iEt)\psi (x)$ satisfying Schrödinger's wave equation • $i\frac{\partial\Psi}{\partial t}+H\Psi =0$ with $t$ a time variable. Why is the emission spectrum generated by differences $E_2-E_1$ of frequencies of the Hamiltonian as "beat frequencies" and not the frequencies $E_2$ and $E_1$ themselves? Why does an atom interact/resonate with an electromagnetic field of beat frequency $E_2-E_1$, but not $E_2$ or $E_1$? In particular, why is the ground state of smallest frequency stable by refusing electromagnetic resonance? This was the question confronting Bohr in 1913 when trying to build a model of the atom in classical mechanics terms. Bohr's answer was that "for some reason" only certain "electron orbits" with certain frequencies "are allowed" and that "for some reason" these electron orbits cannot resonate with an electromagnetic field, and he then suggested that observed resonances at beat frequencies came from "electrons jumping between energy levels".
This was not convincing and prepared the revolution into quantum mechanics in 1926. Real Quantum Mechanics realQM gives the following answer: The charge density $\vert\Psi (t,x)\vert^2=\psi^2(x)$ of a wave function $\Psi (x,t)=\exp(iEt)\psi (x)$ with $\psi (x)$ satisfying $H\psi =E\psi$, does not vary with time and as such does not radiate. On the other hand, the difference $\Psi =\Psi_2-\Psi_1$ between two wave functions $\Psi_1(x,t)=\exp(iE_1t)\psi_1(x)$ and $\Psi_2(x,t)=\exp(iE_2t)\psi_2(x)$ with $H\psi_1=E_1\psi_1$ and $H\psi_2=E_2\psi_2$, is a solution to Schrödinger's equation and can be written • $\Psi (x,t)=\exp(iE_1t)(\exp(i(E_2-E_1)t)\psi_2(x)-\psi_1(x))$ with corresponding charge density • $\vert\Psi (t,x)\vert^2 = \vert\exp(i(E_2-E_1)t)\psi_2(x)-\psi_1(x)\vert^2$ with a visible time variation in space scaling with $(E_2-E_1)$ and associated radiation of frequency $E_2-E_1$ as a beat frequency. Superposition of two eigenstates thus may radiate because the corresponding charge density varies in space with time, while pure eigenstates have charge densities which do not vary with time and thus do not radiate. In realQM electrons are thought of as "clouds of charge" of density $\vert\Psi\vert^2$ with physical presence, which is not changing with time in pure eigenstates and thus does not radiate, while superpositions of eigenstates do vary with time and thus may radiate, because a charge oscillating at a certain frequency generates an electric field oscillating at the same frequency. In standard quantum mechanics stdQM $\vert\Psi\vert^2$ is instead interpreted as probability of configuration of electrons as particles, which lacks physical meaning and as such does not appear to allow an explanation of the non-radiation/resonance of pure eigenstates and radiation/resonance at beat frequencies.
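The beat-frequency computation above can be verified numerically at a single spatial point: with real values $\psi_1(x)=a$ and $\psi_2(x)=b$ (illustrative numbers), the density of the superposition oscillates at exactly $E_2-E_1$, while a pure eigenstate's density is constant in time.

```python
import numpy as np

# Two eigenstates evaluated at one point x: psi1(x) = a, psi2(x) = b
a, b = 0.8, 0.5                  # illustrative real values
E1, E2 = 3.0, 7.0                # eigenfrequencies (hbar = 1)
t = np.linspace(0.0, 20.0, 4000)

# Superposition Psi = exp(i E2 t) psi2 - exp(i E1 t) psi1
Psi = b * np.exp(1j * E2 * t) - a * np.exp(1j * E1 * t)
rho = np.abs(Psi)**2

# The density oscillates at the beat frequency E2 - E1, not at E1 or E2
expected = a**2 + b**2 - 2 * a * b * np.cos((E2 - E1) * t)
assert np.allclose(rho, expected)

# A pure eigenstate's density is constant in time: no radiation
rho_pure = np.abs(a * np.exp(1j * E1 * t))**2
assert np.allclose(rho_pure, a**2)
```

The common phase $\exp(iE_1t)$ drops out of $\vert\Psi\vert^2$, which is why only the difference of the two eigenfrequencies survives in the density.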
In stdQM electrons are nowhere and everywhere at the same time, and it is declared that speaking of electron (or charge) motion is nonsensical, and then atom radiation remains as inexplicable as to Bohr in 1913. So the revolution of classical mechanics into quantum mechanics, driven by Bohr's question and unsuccessful answer, does not seem to present any real answer. Or does it? PS I have already written about The Radiating Atom in a sequence of posts 1-11 with in particular 3: Resolution of Schrödinger's Enigma connecting to this post. Wednesday 25 January 2017 New Curriculum with Programming on the Government's Table SVT Nyheter in Gävleborg reports that the new curriculum, with programming as a new school subject, now lies on the Government's table for decision, and that several schools in Gävle and Sandviken have already made a flying start and introduced the subject. Soon the remaining schools must follow. My contribution to meeting the need for new teaching materials is Matematik-IT, ready to be tried! Tuesday 24 January 2017 Is the Quantum World Really Inexplicable in Classical Terms? Peter Holland describes in the opening statement of The Quantum Theory of Motion the state of the art of modern physics in the form of quantum mechanics, as follows: • The quantum world is inexplicable in classical terms. • The predictions pertaining to the interaction of matter and light embodied in Newton's laws of motion and Maxwell's equations governing the propagation of electromagnetic fields, are in flat contradiction with the experimental facts at the microscopic scale. • A key feature of quantum effects is their apparent indeterminism, that individual atomic events are unpredictable, uncontrollable and literally seem to have no cause. • Regularities emerge only when one considers a large ensemble of such events. • This indeed is generally considered to constitute the heart of the conceptual problems posed by quantum phenomena, necessitating a fundamental revision of the deterministic classical world view.
No doubt this describes the predicament of modern physics and it is a sad story: It is nothing but a total collapse of rationality, and as far as I can understand, there are no compelling reasons to give up the core principles of classical continuum physics so well expressed in Maxwell's equations. If classical continuum physics is modified just a little by adding a new element of finite precision computation, then the apparent contradiction of the ultra-violet catastrophe of black-body radiation as the root of "quantization" can be circumvented and rationality maintained. You can find my arguments by browsing the labels to this post and the web sites Computational Black Body Radiation and The World as Computation, with further development in the book Real Quantum Mechanics. And so no, it may not be necessary to give up the deterministic classical world view when doing atom physics, the view which gave us Maxwell's equations and opened a new world of electro-magnetics connecting to atoms. It may suffice to modify the deterministic classical view just a little bit, without losing anything, to make it work also for atom physics. After all, what can be more deterministic than the ground state of a Hydrogen atom? Of course, this is not a message that is welcomed by physicists, who have been locked for 90 years into finding evidence that quantum mechanics is inexplicable, by inventing contradictions of concepts without physical reality. The root of such contradictions (like wave-particle duality) is the linear multi-d Schrödinger equation, which is picked from the air as a formality without physics content, but just because of that being inexplicable. To advance, it seems that a new Schrödinger equation with physical meaning should be derived...
The question is how to generalise Schrödinger's equation for the Hydrogen atom with one electron, which works fine and can be understood, to Helium with two electrons and so on... The question is then how the two electrons of Helium find co-existence around the kernel. In Real Quantum Mechanics they split 3d space without overlap... like East and West of global politics or Germany… Quantum Mechanics as Retreat to (German) Romantic Irrational Ideal Quantum theory is widely held to resist any realist interpretation and to mark the advent of a ‘postmodern’ science characterised by paradox, uncertainty, and the limits of precise measurement. Keeping his own realist position in check, Christopher Norris provides a remarkably detailed and incisive account of the positions adopted by parties on both sides of this complex debate. James Cushing gives in Bohmian Mechanics and Quantum Theory (1996): An Appraisal, an account of the rise to domination of the Born-Heisenberg-Bohr Copenhagen Interpretation of quantum mechanics: • Today it is generally assumed that the success of quantum mechanics demands that we accept a world view in which physical processes at the most fundamental level are seen as being irreducibly and ineliminably indeterministic. • That is, one of the great watersheds in twentieth-century scientific thought is the "Copenhagen" insight that empirical evidence and logic are seen as necessarily implying an indeterministic picture of nature. • This is in marked contrast to any classical representation of a clockwork universe. • A causal program would have been a far less radical departure from the then-accepted framework of classical physics than was the so-called Copenhagen version of quantum mechanics that rapidly gained ascendancy by the late 1920s and has been all-but universally accepted ever since. • How could this happen?
• It has been over twenty years now since the dramatic and controversial "Forman thesis" was advanced that acausality was embraced by German quantum physicists in the Weimar era as a reaction to the hostile intellectual and cultural environment that existed there prior to and during the formulation of modern quantum mechanics. • The goal was to establish a causal connection between this social-intellectual milieu and the content of science, in this case quantum mechanics. • The general structure of this argument is the following. Causality for physicists in the early twentieth century "meant complete lawfulness of Nature, determinism [(i.e., event-by-event causality)]". • Such lawfulness was seen by scientists as absolutely essential for science to be a coherent enterprise. A scientific approach was also taken to be necessarily a rational one. • When, in the aftermath of the German defeat in World War I, science was held responsible (not only by its failure, but even more because of its spirit) for the sorry state of society, there was a reaction against rationalism and a return to a romantic, "irrational" ideal. Yes, quantum mechanics (in its Copenhagen Interpretation forcefully advocated by Bohr under influence from the anti-realist positivist philosopher Höffding) was a product of German physics in the Weimar republic of the 1920s, by Heisenberg and Born. It seems reasonable to think that if the defeat of Germany in World War I was blamed on a failure of "rationality" and "realism", then a resort to "irrationality" and "anti-realism" would be rational in particular in Germany... and so quantum mechanics in its anti-realist form took over the scene as Germany rebuilt its power... But maybe today Germany is less idealistic and anti-realistic (although the Energiewende is romantic anti-realism) and so maybe also a more realistic quantum mechanics can be allowed to develop... without the standard "shut-up and calculate" suppression of discussion...
Monday 23 January 2017 Quantum Mechanics as Classical Continuum Physics and Not Particle Mechanics As you understand, this is just nonsense: Saturday 21 January 2017 Deconstruction of CO2 Alarmism Started Directly after inauguration, the White House web site changed to a new Energy Plan, where all of Obama's CO2 alarmism has been completely eliminated: Nothing about dangerous CO2! No limits on emission! Trump has listened to science! CO2 alarmism will be defunded, and why not then also other forms of fake physics... This is the first step to the Fall of IPCC and the Paris agreement and liberation of resources for the benefit of humanity, see phys.org. The defunding of CO2 alarmism will now start, and then why not other forms of fake science? PS1 Skepticism to CO2 alarmism expressed by Klimatrealisterna is now getting published in media in Norway, while in Sweden it is fully censored. I have recently accepted an invitation to become a member of the scientific committee of this organisation (not yet visible on the web site). PS2 Read Roy Spencer's analysis of the Trump Dump: Bottom line: With plenty of energy, poverty can be eliminated. Unstopped CO2 alarmism will massively increase poverty with no gain whatsoever. Trump is the first state leader to understand that the Emperor of CO2 Alarmism is naked, and other leaders will now open their eyes to see the same thing... and skeptics may soon say mission complete... See also The Beginning of the End of EPA. The Origin of Fake Physics Peter Woit gives on Not Even Wrong a list of fake physics, most of which can be traced back to the fake physics character of Schrödinger's linear multi-dimensional equation, as exposed in recent posts. Woit's list of fake physics thus includes different fantasies of multiversa, all originating from the multi-dimensional form of Schrödinger's equation giving each electron its own separate 3d space/universe to dwell in.
But the linear multi-d Schrödinger equation is a postulate of modern physics picked out of the blue as a ready-made, and as such like a religious dogma beyond human understanding and rationality. Why modern physics has been driven into such an unscientific approach remains to be understood and exposed, and discussed...

The standard view is presented by David Gross as follows:
• Quantum mechanics emerged in 1900, when Planck first quantized the energy of radiating oscillators.
• Quantum mechanics is the most successful of all the frameworks that we have discovered to describe physical reality. It works, it makes sense, and it is hard to modify.
• Quantum mechanics does make sense, although the transition, a hundred years ago, from classical to quantum reality was not easy.
• The freedom one has to choose among different, incompatible, frameworks does not influence reality—one gets the same answers for the same questions, no matter which framework one uses.
• That is why one can simply “shut up and calculate.” Most of us do that most of the time.
• By now...we have a completely coherent and consistent formulation of quantum mechanics that corresponds to what we actually do in predicting and describing experiments and observations in the real world.
• For most of us there are no problems.
• Nonetheless, there are dissenting views.

So, the message is that quantum mechanics works if you simply shut up and calculate and don't ask if it makes sense, as physicists are being taught to do, but here are dissenting views...

Note that the standard idea ventilated by Gross is that quantum mechanics somehow emerged from Planck's desperate trick of "quantisation" of blackbody radiation in 1900, when taking on the mission of explaining the physics of radiation while avoiding the "ultra-violet catastrophe" believed to torpedo classical wave mechanics.
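The ultraviolet catastrophe referred to above is easy to display numerically: the classical Rayleigh-Jeans spectral density grows like $\nu^2$ without bound, while Planck's law agrees with it at low frequency and cuts it off at high frequency. A minimal sketch with standard SI constants; the temperature and frequency range are arbitrary choices for illustration:

```python
import numpy as np

# Spectral energy density of blackbody radiation (SI units).
h, c, kB, T = 6.626e-34, 2.998e8, 1.381e-23, 5000.0
nu = np.logspace(12, 16, 400)  # frequencies in Hz

# Classical Rayleigh-Jeans law: grows like nu^2 without bound
# (the "ultraviolet catastrophe").
rj = 8 * np.pi * nu**2 * kB * T / c**3

# Planck's law: matches Rayleigh-Jeans at low frequency,
# but is suppressed exponentially at high frequency.
planck = (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (kB * T))
```

At the low end of this range the two laws agree to within a percent, while at $\nu = 10^{16}$ Hz the classical density exceeds Planck's by dozens of orders of magnitude, so the total classical energy integral diverges.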
Planck never believed that his trick had a physical meaning, and in fact the trick is not needed, because an explanation can be given within classical wave mechanics in the form of computational blackbody radiation, with the ultraviolet catastrophe not showing up.

This is what Anthony Leggett, Nobel Laureate and speaker at the 90 Years of Quantum Mechanics Conference, Jan 23-26, 2017, says (in 1987):
• If one wishes to provoke a group of normally phlegmatic physicists into a state of high animation—indeed, in some cases strong emotion—there are few tactics better guaranteed to succeed than to introduce into the conversation the topic of the foundations of quantum mechanics, and more specifically the quantum measurement problem.
• I do not myself feel that any of the so-called solutions of the quantum measurement paradox currently on offer is in any way satisfactory.
• I am personally convinced that the problem of making a consistent and philosophically acceptable 'join' between the quantum formalism which has been so spectacularly successful at the atomic and subatomic level and the 'realistic' classical concepts we employ in everyday life can have no solution within our current conceptual framework;
• We are still, after three hundred years, only at the beginning of a long journey along a path whose twists and turns promise to reveal vistas which at present are beyond our wildest imagination.
• Personally, I see this as not a pessimistic, but a highly optimistic, conclusion. In intellectual endeavour, if nowhere else, it is surely better to travel hopefully than to arrive, and I would like to think that the generation of students now embarking on a career in physics, and their children and their children's children, will grapple with questions at least as intriguing and fundamental as those which fascinate us today—questions which, in all probability, their twentieth-century predecessors did not even have the language to pose.
The need of a revision of the very foundations of quantum mechanics is, now 30 years later, even more clear, 90 years after conception. The starting point must be the wave mechanics of Schrödinger without particles, probabilities, multiversa, measurement paradox, particle-wave duality, complementarity and quantum jumps, with atom microscopics described by the same continuum mathematics as the macroscopic world.

PS Is quantum computing fake physics or possible physics? Nobody knows, since no quantum computer has yet been constructed. But the hype/hope is inflated: perhaps by the end of the year...

fredag 20 januari 2017

Shaky Basis of Quantum Mechanics

Schrödinger's equation! Where did we get that equation from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrödinger. (Richard P. Feynman)

In the final analysis, the quantum mechanical wave equation will be obtained by a postulate, whose justification is not that it has been deduced entirely from information already known experimentally. (Eisberg and Resnick in Quantum Physics)

Schrödinger's equation as the basic mathematical model of quantum mechanics is obtained as follows: Start with classical mechanics with a Hamiltonian of the following form for a system of $N$ interacting point particles of unit mass with positions $x_n(t)$ and momenta $p_n=\frac{dx_n}{dt}$ varying with time $t$ for $n=1,...,N$:
• $H(x_1,...,x_N,p_1,...,p_N)=\frac{1}{2}\sum_{n=1}^Np_n^2+V(x_1,...,x_N)$

where $V$ is a potential depending on the particle positions $x_n$, with the corresponding equations of motion
• $\frac{dp_n}{dt}=-\frac{\partial V}{\partial x_n}$ for $n=1,...,N$.          (1)

Proceed by formally replacing momentum $p_n$ by the differential operator $-i\nabla_n$, where $\nabla_n$ is the gradient operator acting with respect to $x_n$ now viewed as the coordinates of three-dimensional space (and $i$ is the imaginary unit), to get the Hamiltonian
• $H(x_1,...,x_N)=-\frac{1}{2}\sum_{n=1}^N\Delta_n +V(x_1,...,x_N)$

supposed to be acting on a wave function $\psi (x_1,...,x_N)$ depending on $N$ 3d coordinates $x_1,...,x_N$, where $\Delta_n$ is the Laplacian with respect to coordinate $x_n$. Then postulate Schrödinger's equation with a vague reference to (1) as a linear multi-d equation of the form:
• $i\frac{\partial \psi}{\partial t}=H\psi$.         (2)

Schrödinger's equation thus results from inflating single points to full 3d spaces in a purely formal twist of classical mechanics, by brutally changing the meaning of $x_n$ from point to full 3d space and then twisting (1) as well. The inflation gives a wave function which depends on $3N$ space coordinates and as such has no physicality and is way beyond computability. The inflation corresponds to a shift from actual position, which may be of interest, to possible position (which can be anywhere), which has no interest.

The inflation from point to full 3d space has become the trade mark of modern physics as expressed in Schrödinger's multi-d linear equation, with endless speculation without conclusion about the possible physics of the inflation and the meaning of (2).

The formality and lack of physicality of the inflation of course should have sent Schrödinger's multi-d linear equation (2) to the waste-bin from start, but that didn't happen, with the argument that even if the physics of the equation was beyond rationale, predictions from the equation always (yes, always!!) agree with observation. The lack of scientific logic was thus acknowledged from start, but it was taken for granted that anyway the equation describes physics very accurately.
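The phrase "way beyond computability" can be made concrete with back-of-the-envelope arithmetic: storing $\psi(x_1,...,x_N)$ on even a modest grid grows exponentially with the number of electrons. A sketch, where the resolution $M = 100$ points per axis is an assumed (and quite coarse) choice:

```python
# Bytes needed to store psi(x_1, ..., x_N) on a uniform grid:
# M points per spatial axis, 3N axes in total, complex128 = 16 bytes/value.
def psi_bytes(N, M=100):
    return 16 * M**(3 * N)

print(psi_bytes(1))   # hydrogen (N=1): 1.6e7 bytes, ~16 MB
print(psi_bytes(2))   # helium (N=2): 1.6e13 bytes, ~16 TB
print(psi_bytes(10))  # N=10: 1.6e61 bytes, beyond any conceivable memory
```

Already helium overwhelms a workstation, and ten electrons exceed the estimated number of atoms on Earth, which is why direct numerical solution of (2) is never attempted beyond the smallest systems.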
If a prediction from computation with Schrödinger's equation does not compare well with observation, there must be something wrong with the computation or the comparison, never with the equation itself... But solutions of Schrödinger's multi-d equation cannot be computed in any generality, and thus claims of general validity have no real ground. It is simply a postulate/axiom, and as such true by assumption, as a tautology which can only be true.

The main attempts to give the inflation of classical mechanics into Schrödinger's multi-d linear equation a meaning are:
• Copenhagen Interpretation CI (probabilistic)
• Many World Interpretation MWI (infinitely many parallel universa in certain contact)
• Pilot-Wave (Bohm)

with no one explanation gathering clear acceptance. In particular, Schrödinger did not like these interpretations of his equation and dreamed of a different version in 3d with physical "anschaulich" meaning, but did not find it...

In the CI the possibilities become actualities by observation, while in MWI all possibilities are viewed as actualities, and in Bohmian mechanics the pilot wave represents the possibilities, with a particle somehow carried by the wave representing actuality...all very strange...

onsdag 18 januari 2017

Many Worlds Interpretation vs Double Slit Experiment

When I ask David Deutsch what his basic motivation is to believe that the Many Worlds Interpretation MWI of the multi-d linear Schrödinger equation describes real physics, I get the response that it is in particular the single electron double slit experiment, which he claims is difficult to explain otherwise. But is this so difficult to explain assuming that electrons are always waves and never particles? I don't think so.
Here is my argument: In the single electron double slit experiment a screen displays an interference pattern created by a signal passing through a double slit, even with the input so weak that the interference pattern is created dot by dot, as if being hit by a stream of single electron particles.

This is presented as a mystery, by arguing that an electron particle must choose one of the slits to pass through, and doing so cannot create an interference pattern, because that can only arise if the single electron is a wave freely passing through both slits. So the experiment cannot be explained, which gives evidence that quantum mechanics is a mystery, and since it is a mystery anything is possible, like MWI.

But there is no mystery if, following Schrödinger, we understand that electrons are always waves and never particles, and that the effect on the screen of an incoming wave may be a dot somewhere on the screen triggered by local perturbations. A dot as effect does not require the cause to be dot-like.

It is thus possible to understand the single electron double slit experiment under the assumption that electrons are always wave-like and always pass through both slits and thus can create an interference pattern, in accordance with the original objective of Schrödinger to describe electrons as waves, and then physical waves, not probability waves as in the Copenhagen Interpretation as another form of MWI.

The trouble with quantum mechanics is the multi-d linear Schrödinger equation, which describes probability waves or many worlds waves, which are not physical waves. The challenge is to formulate a Schrödinger equation which describes physical waves, that is to reach the objective of Schrödinger, which may possibly be done with something like realQM...
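The wave picture of the double slit is easy to simulate: two coherent point sources standing in for the slits produce the familiar fringes on a distant screen. A toy calculation; the wavelength, slit separation and screen distance are assumed values in arbitrary units:

```python
import numpy as np

# Two coherent point sources standing in for the slits.
lam, d, L = 1.0, 10.0, 200.0          # wavelength, slit separation, screen distance
k = 2 * np.pi / lam
y = np.linspace(-50, 50, 4001)        # positions along the screen

r1 = np.sqrt(L**2 + (y - d / 2)**2)   # distance from slit 1
r2 = np.sqrt(L**2 + (y + d / 2)**2)   # distance from slit 2

# Superpose the two spherical waves and take the intensity:
psi = np.exp(1j * k * r1) / r1 + np.exp(1j * k * r2) / r2
I = np.abs(psi)**2                    # fringes with spacing ~ lam*L/d = 20
```

The central maximum sits at $y=0$ with near-zero minima at $y \approx \pm\lambda L/(2d) = \pm 10$; nothing in the computation needs a particle, only the superposition of the two waves.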
Ironically, Schrödinger's equation for just one electron is a physical wave equation, and so if anything can be explained by that equation it is the single electron double slit experiment, and its mystery then evaporates...

PS The fact that putting a detector at one of the slits destroys the interference pattern is also understandable with the electron as wave, since a detector may affect a wave and thus may destroy the subtle interference behind the pattern.

tisdag 17 januari 2017

David Deutsch on Quantum Reality

David Deutsch is a proponent of Everett's Many Worlds Interpretation MWI of quantum mechanics, under a strong conviction that (from Many Worlds? Everett, Quantum Theory and Reality, Oxford Press 2010):
• Science can only be explanation: asserting what is there in reality.
• The only purpose of formalism, predictions, and interpretation is to express explanatory theories about what is there in reality, not merely predictions about human perceptions.
• Restricting science to the latter would be arbitrary and intolerably parochial.

These convictions force Deutsch into claiming that the multiverse of MWI is reality, which many physicists find hard to believe, including me. But I share the view of Deutsch that science is explanation of what is there in reality (in opposition to the Copenhagen Interpretation disregarding reality), and this is the starting point of realQM.

Concerning the development and practice of quantum mechanics Deutsch says:
• It is assumed that in order to discover the true quantum-dynamical equations of the world, you have to enact a certain ritual.
• First you have to invent a theory that you know to be false, using a traditional formalism and laws that were refuted a century ago.
• Then you subject this theory to a formal process known as quantization (which for these purposes includes renormalization).
• And that's supposed to be your quantum theory: a classical ghost in a tacked-on quantum shell.
• In other words, the true explanation of the world is supposed to be obtained by the mechanical transformation of a false theory, without any new explanation being added.
• This is almost magical thinking.
• How far could Newtonian physics have been developed if everyone had taken for granted that there had to be a ghost of Kepler in every Newtonian theory—that the only valid solutions of Newtonian equations were those based on conic sections, because Kepler's Laws had those, and because the early successes of Newtonian theory had them too?

Yes, quantum mechanics (based on Schrödinger's linear multi-d equation) is ritual, formality and magical thinking, and that is not what science is supposed to be. The logic about Schrödinger's linear multi-d equation then is:
1. Interpretations must be made to give the equation a meaning.
2. All interpretations are basically equivalent.
3. One interpretation is MWI.
4. MWI is absurd non-physics.
5. The linear multi-d Schrödinger equation does not describe physics.

måndag 16 januari 2017

Is Quantum Computing Possible?

• .....may or may not be mystery as to what the world view that quantum mechanics represents. At least I do, because I'm an old enough man that I haven't got to the point that this stuff is obvious to me. Okay, I still get nervous with it. And therefore, some of the younger students ... you know how it always is, every new idea, it takes a generation or two until it becomes obvious that there's no real problem. It has not yet become obvious to me that there's no real problem. I cannot define the real problem, therefore I suspect there's no real problem, but I'm not sure there's no real problem.
• So that's why I like to investigate things. So I know that quantum mechanics seem to involve probability--and I therefore want to talk about simulating probability.
(Feynman asking himself about the possibility of quantum computing in 1982)

The idea of quantum computing originates from a 1982 speculation by Feynman, followed up by Deutsch, on the possibility of designing a quantum computer supposedly making use of the quantum states of subatomic particles to process and store information. The hope was that quantum computing would allow certain computations, such as factoring a large natural number into prime factors, which is believed to be infeasible on a classical digital computer. A quantum computer would be able to crack encryption based on prime factorisation and thus upset the banking system and the world. In the hands of terrorists it would be a dangerous weapon...and so do we have to be afraid of quantum computing?

Not yet in any case! Quantum computing is still a speculation, and nothing like a real quantum computer cracking encryption has been constructed to date, 35 years later. But the hopes are still high...although so far the top result is factorisation of 15 into 3 x 5...(...in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with Shor's algorithm...)

But what is the reason behind the hopes? The origin is the special form of Schrödinger's equation as the basic mathematical model of the atomic world, viewed as a quantum world fundamentally different from the macroscopic world of our lives and the classical computer, in terms of a wave function
• $\psi (x_1,...,x_N,t)$
depending on $N$ three-dimensional spatial coordinates $x_1$,...,$x_N$ (and time $t$) for a system of $N$ quantum particles, such as an atom with $N$ electrons. Such a wave function thus depends on $3N$ spatial variables of $N$ different versions of $R^3$ as three-dimensional Euclidean space. The multi-dimensional wave function $\psi (x_1,...,x_N,t)$ is to be compared with a classical field variable like density $\rho (x,t)$ depending on a single 3d spatial variable $x\in R^3$.
The wave function $\psi (x_1,...,x_N,t)$ depends on $N$ different copies of $R^3$, while for $\rho (x,t)$ there is only one copy, and that is the copy we are living in. In the Many Worlds Interpretation MWI of Schrödinger's equation the $N$ different copies of $R^3$ are given existence as parallel universes or multiversa, while our experience still must be restricted to just one of them, with the others as distant shadows.

The wave function $\psi (x_1,...,x_N,t)$ thus has an immense richness through its contact with multiversa, and the idea of quantum computing is to somehow use this immense richness by sending a computational task to multiversa for processing and then bringing back the result to our single universe for inspection. It would be like sending a piece of information to an immense cloud for complex computational processing and then bringing it back for inspection. But for this to work the cloud must exist in some form and be accessible. Quantum computing is thus closely related to MWI, and the reality of a quantum computer would seem to depend on a reality of multiversa. The alternative to MWI and multiversa is the probabilistic Copenhagen Interpretation CI, but that does not make things more clear or hopeful.

But what is the reason behind MWI and multiversa? The only reason is the multi-dimensional aspect of Schrödinger's equation, but Schrödinger's equation is a man-made ad hoc variation of the equations of motion of classical mechanics, obtained by a purely formal procedure of representing momentum $p$ by a multi-dimensional gradient differential operator as $p=-i\nabla$, thus formally replacing $p^2$ by the action on $\psi$ of a multi-dimensional Laplacian $-\Delta =-\sum_j\Delta_j$, with $\Delta_j$ the Laplacian with respect to $x_j$, thus acting with respect to all the $x_j$ for $j=1,...,N$.
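As for the factoring benchmark mentioned earlier (15 = 3 x 5): the step a quantum computer would accelerate is period finding, which classically is brute-force trial. A toy sketch of the classical reduction used in Shor's algorithm:

```python
from math import gcd

# Classical reduction behind Shor's algorithm. The period finding in
# find_order is the step a quantum computer would speed up; here it is
# done by brute force.
def find_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    r = find_order(a, N)                       # quantum speedup would go here
    assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical(15, 7))  # (3, 5): the record-setting factorization
```

For 15 this loop is instant; for RSA-sized numbers the order $r$ is astronomically expensive to find classically, and the entire hope of quantum factoring is that the multi-d wave function would find it in parallel.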
But replacing $p$ by $-i\nabla$ is just a formality without physical reason, and it is from this formality that MWI and multiversa arise, and then also the hopes of quantum computing. Is there then reason to believe that the multi-dimensional $-\Delta\psi$ has a physical meaning, or does it rather represent some form of Kabbalism or scripture interpretation?

My view is that multiversa and quantum computing based on a multi-dimensional Schrödinger equation resting on a formality is far-fetched irrational dreaming, that Feynman's feeling of a real problem sensed something important, and this is my reason for exploration of realQM based on a new version of Schrödinger's equation in physical three-dimensional space.

PS1 One may argue that if MWI is absurd, which many think, then CI is also absurd, which many think, since both are interpretations of one and the same multi-dimensional Schrödinger equation, and the conclusion would then be that if all interpretations are absurd, then so is what is being interpreted, right? Even more reason for realQM and less hope for quantum computing...

PS2 MWI was formulated by Hugh Everett III in his 1956 thesis with Wheeler. Many years later, Everett laughingly recounted to Misner, in a tape-recorded conversation at a cocktail party in May 1977, that he came up with his many-worlds idea in 1954 "after a slosh or two of sherry", when he, Misner, and Aage Petersen (Bohr's assistant) were thinking up "ridiculous things about the implications of quantum mechanics". (see Many Worlds? Everett, Quantum Theory and Reality, Oxford University Press)

PS3 To get a glimpse of the mind-boggling complexity of $3N$-dimensional space, think of the big leaps from 1d to 2d and from 2d to 3d, and then imagine the leap to the 6d of the two electrons of Helium with $N=2$, the simplest of all atoms beyond Hydrogen with $N=1$.
In this perspective a single Helium atom as quantum computer could be imagined to have the computational power of a laptop. Yes, many dimensions and many worlds are mind-boggling, and as such maybe just fantasy.

lördag 14 januari 2017

The Quantum Manifesto Contradiction

The scientific basis of the Manifesto is: The idea is that superposition and entanglement will open capabilities beyond imagination:
• Quantum computers are expected to be able to solve, in a few minutes, problems that are unsolvable by the supercomputers of today and tomorrow.

But from where comes the idea that the quantum world is a world of superposition and entanglement? Is it based on observation? No, it is not, because the quantum world is not open to such inspection. Instead it comes from theory in the form of a mathematical model named Schrödinger's equation, which is linear and thus allows superposition, and which includes Coulombic forces of attraction and repulsion as forms of instant (spooky) action at distance, thus expressing entanglement.

But Schrödinger's equation is an ad hoc man-made theoretical mathematical model resulting from a purely formal twist of classical mechanics, for which a deeper scientific rationale is lacking. Even worse, Schrödinger's equation for an atom with $N$ electrons involves $3N$ space dimensions, which makes computational solution impossible even with $N$ very small. Accordingly, the Manifesto does not allocate a single penny for solution of Schrödinger's equation, which is nowhere mentioned in the Manifesto.

Note that the quantum simulators of the grand plan shown above are not digital solvers of Schrödinger's equation:
• Several platforms for quantum simulators are under development, including ultracold atoms in optical lattices, trapped ions, arrays of superconducting qubits or of quantum dots and photons.
• In fact, the first prototypes have already been able to perform simulations beyond what is possible with current supercomputers, although only for some particular problems.

The Quantum Manifesto is thus based on a mathematical model in the form of a multi-dimensional Schrödinger equation suggesting superposition and entanglement, from which the inventive physicist is able to imagine yet unimagined capabilities, while the model itself is considered to be useless for real exploration of possibilities, because not even a quantum computer can be imagined to solve the equation. This is yet another expression of quantum contradiction.

Recall that the objective of RealQM is to find a new version of Schrödinger's equation which is computable and can be used for endless digital exploration of the analog quantum world. See also Quantum Europe May 2017.

onsdag 4 januari 2017

Update of realQM and The Trouble of Quantum Mechanics

I have made an update of realQM as a start for the New Year! More updates will follow... The update contains more computational results (and citations) and includes corrections of some misprints.

The recent book by Bricmont, Making Sense of Quantum Mechanics, reviews the confusion concerning the meaning of quantum mechanics, which is still, after 100 years, deeply troubling the prime achievement of modern physics. As only salvation Bricmont brings out the pilot-wave of Bohm from the wardrobe of dismissed theories, seemingly forgetting that it once was put there for good reasons. The net result of the book is thus that quantum mechanics in its present shape does not make sense...which gives me motivation to pursue realQM...and maybe someone else sharing the understanding that science must make sense...see earlier post on Bricmont's book ...
Yes, the trouble of making sense of quantum mechanics is of concern to physicists today, as expressed in the article The Trouble with Quantum Mechanics in the January 2017 issue of The New York Review of Books by Steven Weinberg, sending the following message to the world of science ultimately based on quantum mechanics:
• On the other hand, the problems of understanding measurement in the present form of quantum mechanics may be warning us that the theory needs modification.
• The goal in inventing a new theory is to make this happen not by giving measurement any special status in the laws of physics, but as part of what in the post-quantum theory would be the ordinary processes of physics.
• Unfortunately, these ideas about modifications of quantum mechanics are not only speculative but also vague, and we have no idea how big we should expect the corrections to quantum mechanics to be. Regarding not only this issue, but more generally the future of quantum mechanics, I have to echo Viola in Twelfth Night: “O time, thou must untangle this, not I.”

Weinberg thus gives little hope that fixing the trouble with quantum mechanics will be possible by human intervention, and so the very origin of the trouble, the multi-dimensional linear Schrödinger equation invented by Schrödinger, must be questioned, and then questioned seriously (as was done by Schrödinger, propelling him away from the paradigm of quantum mechanics), and not as now simply be accepted as a God-given fact beyond question. This is the starting point of realQM.

Of course Lubos Motl, as an ardent believer in the Copenhagen Interpretation, whatever it may be, does not understand the crackpot troubles/worries of Weinberg.

As an expression of the interest in quantum mechanics still today, you may want to browse the upcoming Conference on 90 Years of Quantum Mechanics, presented as:
• This conference celebrates this magnificent journey that started 90 years ago.
• Quantum mechanics has during this period developed in leaps and bounds and this conference will be devoted to the progress of quantum mechanics since then. It aims to show how universal quantum mechanics is penetrating all of basic physics. Another aim of the conference is to highlight how quantum mechanics is at the heart of most modern science applications and technology.

Note the "leaps and bounds" which may be the troubles Weinberg is referring to...
Many-worlds interpretation

Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views historical reality as a many-branched tree, wherein every possible quantum outcome is realised.[12] Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics.

In Dublin in 1952 Erwin Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that when the equation that won him a Nobel prize seems to be describing several different histories, they are "not alternatives but all really happen simultaneously". This is the earliest known reference to the many-worlds.[15][16]

The many-worlds interpretation shares many similarities with later, other "post-Everett" interpretations of quantum mechanics which also use decoherence to explain the process of measurement or wavefunction collapse. MWI treats the other histories or worlds as real since it regards the universal wavefunction as the "basic physical entity"[20] or "the fundamental entity, obeying at all times a deterministic wave equation".[21] The other decoherent interpretations, such as consistent histories, the Existential Interpretation etc., either regard the extra quantum worlds as metaphorical in some sense, or are agnostic about their reality; it is sometimes hard to distinguish between the different varieties. MWI is distinguished by two qualities: it assumes realism,[20][21] which it assigns to the wavefunction, and it has the minimal formal structure possible, rejecting any hidden variables, quantum potential, any form of a collapse postulate (i.e., Copenhagenism) or mental postulates (such as the many-minds interpretation makes).
Interpreting wavefunction collapse

The unreal/real interpretation

According to Martin Gardner, the "other" worlds of MWI have two different interpretations: real or unreal; he claims that Stephen Hawking and Steven Weinberg both favour the unreal interpretation.[26] Gardner also claims that the nonreal interpretation is favoured by the majority of physicists, whereas the "realist" view is only supported by MWI experts such as Deutsch and Bryce DeWitt. Hawking has said that "according to Feynman's idea", all the other histories are as "equally real" as our own,[27] and Martin Gardner reports Hawking saying that MWI is "trivially true".[28] In a 1983 interview, Hawking also said he regarded the MWI as "self-evidently correct" but was dismissive towards questions about the interpretation of quantum mechanics, saying, "When I hear of Schrödinger's cat, I reach for my gun." In the same interview, he also said, "But, look: All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities."[29] Elsewhere Hawking contrasted his attitude towards the "reality" of physical theories with that of his colleague Roger Penrose, saying, "He's a Platonist and I'm a positivist. He's worried that Schrödinger's cat is in a quantum state, where it is half alive and half dead. He feels that can't correspond to reality. But that doesn't bother me. I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements.
Quantum theory does this very successfully."[30] For his own part, Penrose agrees with Hawking that QM applied to the universe implies MW, although he considers the current lack of a successful theory of quantum gravity negates the claimed universality of conventional QM.[31]

Similarities with the de Broglie–Bohm interpretation

Kim Joris Boström has proposed a non-relativistic quantum mechanical theory that combines elements of de Broglie–Bohm mechanics and of Everett's many-'worlds'. In particular, the unreal MW interpretation of Hawking and Weinberg is similar to the Bohmian concept of unreal empty branch 'worlds':

The second issue with Bohmian mechanics may at first sight appear rather harmless, but which on a closer look develops considerable destructive power: the issue of empty branches. These are the components of the post-measurement state that do not guide any particles because they do not have the actual configuration q in their support. At first sight, the empty branches do not appear problematic but on the contrary very helpful as they enable the theory to explain unique outcomes of measurements. Also, they seem to explain why there is an effective "collapse of the wavefunction", as in ordinary quantum mechanics. On a closer view, though, one must admit that these empty branches do not actually disappear. As the wavefunction is taken to describe a really existing field, all their branches really exist and will evolve forever by the Schrödinger dynamics, no matter how many of them will become empty in the course of the evolution. Every branch of the global wavefunction potentially describes a complete world which is, according to Bohm's ontology, only a possible world that would be the actual world if only it were filled with particles, and which is in every respect identical to a corresponding world in Everett's theory.
Only one branch at a time is occupied by particles, thereby representing the actual world, while all other branches, though really existing as part of a really existing wavefunction, are empty and thus contain some sort of “zombie worlds” with planets, oceans, trees, cities, cars and people who talk like us and behave like us, but who do not actually exist. Now, if the Everettian theory may be accused of ontological extravagance, then Bohmian mechanics could be accused of ontological wastefulness. On top of the ontology of empty branches comes the additional ontology of particle positions that are, on account of the quantum equilibrium hypothesis, forever unknown to the observer. Yet, the actual configuration is never needed for the calculation of the statistical predictions in experimental reality, for these can be obtained by mere wavefunction algebra. From this perspective, Bohmian mechanics may appear as a wasteful and redundant theory. I think it is considerations like these that are the biggest obstacle in the way of a general acceptance of Bohmian mechanics.[32]

Frequency-based approaches

Everett (1957) briefly derived the Born rule by showing that it was the only possible rule, and that its derivation was as justified as the procedure for defining probability in classical mechanics. Everett stopped doing research in theoretical physics shortly after obtaining his Ph.D., but his work on probability has been extended by a number of people. Andrew Gleason (1957) and James Hartle (1965) independently reproduced Everett's work,[36] which was later extended.[37][38] These results are closely related to Gleason's theorem, a mathematical result according to which the Born probability measure is the only one on Hilbert space that can be constructed purely from the quantum state vector.[39] Bryce DeWitt and his doctoral student R.
Neill Graham later provided alternative (and longer) derivations to Everett's derivation of the Born rule.[7] They demonstrated that the norm of the worlds where the usual statistical rules of quantum theory broke down vanished, in the limit where the number of measurements went to infinity.

Decision theory

A decision-theoretic derivation of the Born rule from Everettian assumptions was produced by David Deutsch (1999)[40] and refined by Wallace (2002–2009)[41][42][43][44] and Saunders (2004).[45][46] Some reviews have been positive, although the status of these arguments remains highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes.[47] In the New Scientist article, reviewing their presentation at a September 2007 conference,[48][49] Andy Albrecht, a physicist at the University of California at Davis, is quoted as saying "This work will go down as one of the most important developments in the history of science."[47] The Born rule and the collapse of the wave function have been obtained in the framework of the relative-state formulation of quantum mechanics by Armando V. D. B. Assis. He has proved that the Born rule and the collapse of the wave function follow from a game-theoretical strategy, namely the Nash equilibrium within a von Neumann zero-sum game between nature and observer.[50]

Symmetries and invariance

Wojciech H. Zurek (2005)[51] has produced a derivation of the Born rule, where decoherence has replaced Deutsch's informatic assumptions.[52] Lutz Polley (2000) has produced Born rule derivations where the informatic assumptions are replaced by symmetry arguments.[53][54] Charles Sebens and Sean M. Carroll, building on work by Lev Vaidman,[55] proposed a similar approach based on self-locating uncertainty.[56] In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule.
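Operationally, the Born rule that all of these derivations aim to justify is simple to state: the probability of a measurement outcome is the squared modulus of its amplitude in the state vector. A minimal numerical sketch (the qubit state below is invented purely for illustration; NumPy is assumed to be available):

```python
import numpy as np

# A hypothetical (unnormalised) qubit state |psi> = a|0> + b|1>.
psi = np.array([1.0 + 0.0j, 1.0 + 1.0j])
psi = psi / np.linalg.norm(psi)  # normalise so the probabilities sum to 1

# Born rule: P(i) = |<i|psi>|^2 for each basis outcome i.
probabilities = np.abs(psi) ** 2
print(probabilities)  # → [0.33333333 0.66666667]
```

The derivations discussed in this section (frequency-based, decision-theoretic, envariance, self-locating uncertainty) differ only in why this squared-modulus rule should hold; the rule itself is the short computation above.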
MWI overview

[Figure: schematic illustration of splitting as a result of a repeated measurement.]

Relative state

In his 1957 doctoral dissertation, Everett proposed that rather than modeling an isolated quantum system subject to external observation, one could mathematically model an object as well as its observers as purely physical systems within the mathematical framework developed by Paul Dirac, von Neumann and others, discarding altogether the ad hoc mechanism of wave function collapse. Since Everett's original work, there have appeared a number of similar formalisms in the literature. One such is the relative state formulation. It makes two assumptions: first, that the wavefunction is not simply a description of the object's state, but is actually entirely equivalent to the object, a claim it has in common with some other interpretations. Second, that observation or measurement has no special laws or mechanics, unlike in the Copenhagen interpretation, which considers the wavefunction collapse a special kind of event that occurs as a result of observation. Instead, measurement in the relative state formulation is the consequence of a configuration change in the memory of an observer described by the same basic wave physics as the object being modeled. The many-worlds interpretation is DeWitt's popularisation of Everett's work; Everett had referred to the combined observer–object system as being split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a possible tree, as shown in the graphic below. Subsequently, DeWitt introduced the term "world" to describe a complete measurement history of an observer, which corresponds roughly to a single branch of that tree. Note that "splitting" in this sense is hardly new or even quantum mechanical.
The idea of a space of complete alternative histories had already been used in the theory of probability since the mid-1930s, for instance to model Brownian motion.

[Figure: successive measurements with successive splittings.]

Under the many-worlds interpretation, the Schrödinger equation, or relativistic analog, holds all the time everywhere. An observation or measurement is modeled by applying the wave equation to the entire system comprising the observer and the object. One consequence is that every observation can be thought of as causing the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches, or split into many "worlds". Since many observation-like events have happened and are constantly happening, there are an enormous and growing number of simultaneously existing states. If a system is composed of two or more subsystems, the system's state will be a superposition of products of the subsystems' states. Each product of subsystem states in the overall superposition evolves over time independently of other products. Once the subsystems interact, their states have become correlated or entangled and it is no longer possible to consider them independent of one another. In Everett's terminology each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted.

Properties of the theory

MWI removes the observer-dependent role in the quantum measurement process by replacing wavefunction collapse with quantum decoherence. Since the role of the observer lies at the heart of most if not all "quantum paradoxes," this automatically resolves a number of problems; see for example Schrödinger's cat thought experiment, the EPR paradox, von Neumann's "boundary problem" and even wave-particle duality.
Quantum cosmology also becomes intelligible, since there is no need anymore for an observer outside of the universe.[citation needed] MWI is a realist, deterministic, arguably local theory, akin to classical physics (including the theory of relativity), at the expense of losing counterfactual definiteness. MWI achieves this by removing wavefunction collapse, which is indeterministic and non-local, from the deterministic and local equations of quantum theory.[57] MWI (or other, broader multiverse considerations) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe.[58][59] MWI, being a decoherent formulation, is axiomatically more streamlined than the Copenhagen and other collapse interpretations, and is thus favoured under certain interpretations of Occam's razor.[60][unreliable source?] Of course, there are other decoherent interpretations that also possess this advantage with respect to the collapse interpretations.

Comparative properties and possible experimental tests

In 1985, David Deutsch published three related thought experiments which could test the theory against the Copenhagen interpretation.[62] The experiments require macroscopic quantum state preparation and quantum erasure by a hypothetical quantum computer, which is currently outside experimental possibility. Since then Lockwood (1989), Vaidman and others have made similar proposals.[61] These proposals also require an advanced technology which is able to place a macroscopic object in a coherent superposition, another task which it is uncertain will ever be possible to perform.
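The relative-state picture described earlier rests on the observation that a composite system's state is a superposition of products of subsystem states, and that interaction entangles the subsystems. This can be sketched numerically with a standard two-qubit illustration (the states and the helper function are my own, not drawn from any source cited in this article; NumPy assumed):

```python
import numpy as np

# Basis states for a single two-level subsystem (a qubit).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Independent subsystems: the joint state is a single product (tensor) state.
product_state = np.kron(ket0, ket1)  # |0>|1>

# After an interaction: a superposition of products that cannot be written
# as any single product of one-qubit states (a Bell state).
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# A two-qubit pure state is a product state iff its 2x2 amplitude matrix
# has rank 1 (Schmidt rank 1); rank 2 means the subsystems are entangled.
def is_product(state):
    return np.linalg.matrix_rank(state.reshape(2, 2)) == 1

print(is_product(product_state))  # True  — subsystems independent
print(is_product(bell))           # False — subsystems entangled
```

The rank test is just the Schmidt decomposition in miniature: once the joint amplitude matrix has rank greater than one, each subsystem's state can only be given relative to the state of the other, which is Everett's "relative state".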
Many other controversial ideas have been put forward though, such as a recent claim that cosmological observations could test the theory,[63] and another claim by Rainer Plaga (1997), published in Foundations of Physics, that communication might be possible between worlds.[64]

Copenhagen interpretation

In the Copenhagen interpretation, the mathematics of quantum mechanics allows one to predict probabilities for the occurrence of various events. When an event occurs, it becomes part of the definite reality, and alternative possibilities do not. There is no necessity to say anything definite about what is not observed.

The universe decaying to a new vacuum state

Any event that changes the number of observers in the universe may have experimental consequences.[65] Quantum tunnelling to a new vacuum state would reduce the number of observers to zero (i.e., kill all life).[citation needed] Some cosmologists[citation needed] argue that the universe is in a false vacuum state and that consequently the universe should have already experienced quantum tunnelling to a true vacuum state. This has not happened and is cited as evidence in favor of many-worlds: in some worlds, quantum tunnelling to a true vacuum state has happened, but most other worlds escape this tunnelling and remain viable. This can be thought of as a variation on quantum suicide.

Popular comments

It is a common misconception to think that branches are completely separate. In Everett's formulation, they may in principle quantum interfere (i.e., "merge" instead of "splitting") with each other in the future,[68] although this requires all "memory" of the earlier branching event to be lost, so no observer ever sees two branches of reality.[69][70]

MWI claims that there is no special role for, or need of a precise definition of, measurement, yet Everett uses the word "measurement" repeatedly throughout his exposition.
MWI response: Everett's treatment of observations/measurements covers both idealised good measurements and the more general bad or approximate cases.[74] Thus it is legitimate to analyse probability in terms of measurement; no circularity is present.

We cannot be sure that the universe is a quantum multiverse until we have a theory of everything and, in particular, a successful theory of quantum gravity.[31] If the final theory of everything is non-linear with respect to wavefunctions, then many-worlds would be invalid.[1][4][5][6][7]

MWI response: Occam's razor actually is a constraint on the complexity of physical theory, not on the number of universes. MWI is a simpler theory since it has fewer postulates.[60][unreliable source?] Occam's razor is often cited by MWI adherents as an advantage of MWI.

Unphysical universes: If a state is a superposition of two states ψA and ψB, i.e., ψ = aψA + bψB, weighted by coefficients a and b, then if |b| ≪ |a|, what principle allows a universe with vanishingly small probability b to be instantiated on an equal footing with the much more probable one with probability a? This seems to throw away the information in the probability amplitudes.

There is a wide range of claims that are considered "many-worlds" interpretations. It was often claimed by those who do not believe in MWI[82] that Everett himself was not entirely clear[83] as to what he believed; however, MWI adherents (such as DeWitt, Tegmark, Deutsch and others) believe they fully understand Everett's meaning as implying the literal existence of the other worlds. Additionally, recent biographical sources make it clear that Everett believed in the literal reality of the other quantum worlds.[24] Everett's son reported that Hugh Everett "never wavered in his belief over his many-worlds theory".[84] Also Everett was reported to believe "his many-worlds theory guaranteed him immortality".[85]

Asher Peres was an outspoken critic of MWI.
For example, a section in his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres questioned not only whether MWI is really an "interpretation", but whether any interpretation of quantum mechanics is needed at all. An interpretation can be regarded as a purely formal transformation, which adds nothing to the rules of quantum mechanics.[citation needed] Peres seems to suggest[according to whom?] that positing the existence of an infinite number of non-communicating parallel universes is highly suspect per those[who?] who interpret it as a violation of Occam's razor, i.e., that it does not minimize the number of hypothesized entities. However, it is understood[by whom?] that the number of elementary particles is not a gross violation of Occam's razor: one counts the types, not the tokens. Max Tegmark remarks[where?] that the alternative to many-worlds is "many words", an allusion to the complexity of von Neumann's collapse postulate. On the other hand, the same derogatory qualification "many words" is often applied to MWI by its critics[who?] who see it as a word game which obfuscates rather than clarifies by confounding the von Neumann branching of possible worlds with the Schrödinger parallelism of many worlds in superposition.[citation needed] MWI is considered by some[who?] to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[69] claim MWI is directly testable.
Everett regarded MWI as falsifiable, since any test that falsifies conventional quantum theory would also falsify MWI.[23] Advocates of MWI often cite a poll of 72 "leading cosmologists and other quantum field theorists"[88] conducted by the American political scientist David Raub in 1995 showing 58% agreement with "Yes, I think MWI is true".[89] Max Tegmark also reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop.[91] According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." Such polls have been taken at other conferences: for example, in response to Sean Carroll's observation, "As crazy as it sounds, most working physicists buy into the many-worlds theory",[92] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." However, Nielsen notes that it seemed most attendees found it to be a waste of time: Asher Peres "got a huge and sustained round of applause… when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[93] A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[95] the authors remark that the results are similar to those of Tegmark's earlier poll.
Speculative implications

Quantum suicide thought experiment

Quantum suicide, as a thought experiment, was published independently by Hans Moravec in 1987[96][97] and Bruno Marchal in 1988[98][99] and was independently developed further by Max Tegmark in 1998.[100] It attempts to distinguish between the Copenhagen interpretation of quantum mechanics and the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide regardless of the odds.[101]

Weak coupling

A 1991 article by J. Polchinski also supports the view that inter-world communication is a theoretical possibility.[105] Other authors in a 1994 preprint article also contemplated similar ideas.[106] The reason inter-world communication seems like a possibility is that the decoherence which separates the parallel worlds is never fully complete,[107][108] therefore weak influences from one parallel world to another can still pass between them,[107][109] and these should be measurable with advanced technology. Deutsch proposed such an experiment in a 1985 International Journal of Theoretical Physics article,[110] but the technology it requires involves human-level artificial intelligence.[64]

Absurd or highly improbable timelines

Many MWI proponents assert that every physically possible event has to be represented in the multiversal stack, and by definition this would include highly unlikely scenarios and timelines. Bryce Seligman DeWitt has stated that "Everett/Wheeler/Graham do not in the end exclude any element of the superposition. All the worlds are there, even those in which everything goes wrong and all the statistical laws break down."[111] Murray Gell-Mann has stated that "Everything not forbidden is compulsory." (a quote reappropriated from T.H.
White to describe the implications of the Totalitarian principle)[112] Max Tegmark has affirmed in numerous statements that absurd or highly unlikely events are inevitable under the MWI. To quote Tegmark, "Things inconsistent with the laws of physics will never happen - everything else will... it's important to keep track of the statistics, since even if everything conceivable happens somewhere, really freak events happen only exponentially rarely".[113] Frank J. Tipler, although a strong advocate of the many-worlds interpretation, has expressed some skepticism regarding this aspect of the theory. In a 2015 interview he stated, "We simply don't know... [it] might be that the modulus of the wavefunction of that possibility [i.e. an extremely absurd yet physically possible event] is zero, in which case there is no such world... There are universes out there, which you could imagine which... would not be actualized."[114]

Similarity to modal realism

Time travel

Many-worlds in literature and science fiction

Some of these stories or films violate fundamental principles of causality and relativity, since the information-theoretic structure of the path space of multiple universes (that is, information flow between different paths) is very likely complex. Star Trek uses many-worlds in many stories. In the Original Series, Spock and Kirk make a crossover into a mirror universe and encounter versions of themselves from the other universe. In an episode of Star Trek: The Next Generation, Worf crosses over into a parallel universe while piloting a shuttlecraft and manages to encounter several other universes. The TNG finale "All Good Things" uses the concept heavily as Picard jumps between times.
This is continued in Star Trek: Deep Space 9 with the episodes arcing between the Terran Empire and the Alliance, where Sisko and Kira also find mirror versions of themselves and of other characters who are currently dead in the central universe, or dead in the parallel universe. Michael Crichton's 1999 novel, Timeline, is about time travel into the past. The technology used in the book is based upon the existence of the MWI's multiverse as described by Everett.[115][116] The author Neal Stephenson drew on the many-worlds theory for some aspects of his 2008 novel Anathem. A more recent iteration, Rick and Morty on the channel Adult Swim, uses the many-worlds interpretation as a basis for the occurrences in the show. The cartoon also makes an allusion to Schrödinger's cat in an episode in which characters split their existence into two hypothetical, equally possible existences. In episode 5 of the Netflix series Stranger Things, the protagonists' middle school teacher Scott Clarke specifically mentions the many-worlds theory when asked about the possibility of "theoretical" alternate dimensions. The many-worlds interpretation was also used in The Time Ships by Stephen Baxter.[citation needed]

See also

References

4. ^ a b c d e Everett, Hugh (1957). "Relative State Formulation of Quantum Mechanics". Reviews of Modern Physics. 29 (3): 454–462. Bibcode:1957RvMP...29..454E. doi:10.1103/RevModPhys.29.454.  7. ^ a b c d e f g h Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X. Contains Everett's thesis: The Theory of the Universal Wavefunction, pp 3–140. 8. ^ H. Dieter Zeh, On the Interpretation of Measurement in Quantum Theory, Foundations of Physics, vol. 1, pp. 69–76, (1970). 11. ^ The Many Worlds Interpretation of Quantum Mechanics[permanent dead link] 15. ^ David Deutsch. The Beginning of Infinity. Page 310. 16. ^ 20.
^ a b Everett 1957, section 3, 2nd paragraph, 1st sentence 22. ^ Zurek, Wojciech (March 2009). "Quantum Darwinism". Nature Physics. 5 (3): 181–188. arXiv:0903.5082 . Bibcode:2009NatPh...5..181Z. doi:10.1038/nphys1202.  23. ^ a b Everett 26. ^ A response to Bryce DeWitt, Martin Gardner, May 2002 27. ^ Award winning 1995 Channel 4 documentary "Reality on the rocks: Beyond our Ken" "Archived copy". Archived from the original on 2007-10-22. Retrieved 2007-10-20.  where, in response to Ken Campbell's question "all these trillions of Universes of the Multiverse, are they as real as this one seems to be to me?" Hawking states, "Yes.... According to Feynman's idea, every possible history (of Ken) is equally real." 31. ^ a b Penrose, Roger (August 1991). "Roger Penrose Looks Beyond the Classic-Quantum Dichotomy". Sciencewatch. Archived from the original on 2007-10-23. Retrieved 2007-10-21.  32. ^ Kim Joris Boström (2012). "Combining Bohm and Everett: Axiomatics for a Standalone Quantum Mechanics". arXiv:1208.5632  [quant-ph].  34. ^ Kent, Adrian (2010). "One world versus many: The inadequacy of Everettian accounts of evolution, probability, and scientific confirmation". In S. Saunders, J. Barrett, A. Kent and D. Wallace (eds). Many Worlds? Everett, Quantum Theory and Reality. Oxford University Press. arXiv:0905.0624 . Bibcode:2009arXiv0905.0624K.  35. ^ Kent, Adrian (1990). "Against Many-Worlds Interpretations". Int. J. Mod. Phys. A. 5 (9): 1745–1762. arXiv:gr-qc/9703089 . Bibcode:1990IJMPA...5.1745K. doi:10.1142/S0217751X90000805.  38. ^ Pitowsky, I. (2005). "Quantum mechanics as a theory of probability". arXiv:quant-ph/0510095 .  39. ^ Gleason, A. M. (1957). "Measures on the closed subspaces of a Hilbert space". Journal of Mathematics and Mechanics. 6 (4): 885–893. doi:10.1512/iumj.1957.6.56050. MR 0096113.  40. ^ Deutsch, David (1999). "Quantum Theory of Probability and Decisions". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 
455 (1988): 3129. arXiv:quant-ph/9906015 . Bibcode:1999RSPSA.455.3129D. doi:10.1098/rspa.1999.0443.  41. ^ Wallace, David (2002). "Quantum Probability and Decision Theory, Revisited". arXiv:quant-ph/0211104 .  42. ^ Wallace, David (2003). "Everettian Rationality: defending Deutsch's approach to probability in the Everett interpretation". Stud. Hist. Phil. Mod. Phys. 34 (3): 415–438. arXiv:quant-ph/0303050 . Bibcode:2003SHPMP..34..415W. doi:10.1016/S1355-2198(03)00036-4.  43. ^ Wallace, David (2003). "Quantum Probability from Subjective Likelihood: Improving on Deutsch's proof of the probability rule". arXiv:quant-ph/0312157 .  44. ^ Wallace, David (2009). "A formal proof of the Born rule from decision-theoretic assumptions". arXiv:0906.2718  [quant-ph].  45. ^ Saunders, Simon (2004). "Derivation of the Born rule from operational assumptions". Proc. Roy. Soc. Lond. A. 460 (2046): 1771–1788. arXiv:quant-ph/0211138 . Bibcode:2004RSPSA.460.1771S. doi:10.1098/rspa.2003.1230.  46. ^ Saunders, Simon (2004). "What is Probability?". Quo Vadis Quantum Mechanics?. The Frontiers Collection. p. 209. arXiv:quant-ph/0412194 . doi:10.1007/3-540-26669-0_12. ISBN 3-540-22188-3.  49. ^ Perimeter Institute, Many worlds at 50 conference, September 21–24, 2007 Archived 2007-10-20 at the Wayback Machine. 50. ^ Armando V. D. B. Assis (2011). "Assis, Armando V. D. B. On the nature of   and the emergence of the Born rule". Annalen der Physik. 523 (11): 883–897. arXiv:1009.1532 . Bibcode:2011AnP...523..883A. doi:10.1002/andp.201100062.  51. ^ Zurek, Wojciech H. (2005). "Probabilities from entanglement, Born's rule from envariance". Phys. Rev. A. 71 (5): 052105. arXiv:quant-ph/0405161 . Bibcode:2005PhRvA..71e2105Z. doi:10.1103/physreva.71.052105.  52. ^ Schlosshauer, M.; Fine, A. (2005). "On Zurek's derivation of the Born rule". Found. Phys. 35 (2): 197–213. arXiv:quant-ph/0312058 . Bibcode:2005FoPh...35..197S. doi:10.1007/s10701-004-1941-6.  53. ^ Polley, L (2001). 
"Position eigenstates and the Statistical Axiom of Quantum Mechanics". Foundations of Probability and Physics. p. 314. arXiv:quant-ph/0102113 . doi:10.1142/9789812810809_0022. ISBN 978-981-02-4846-8.  54. ^ Polley, L (1999). "Quantum-mechanical probability from the symmetries of two-state systems". arXiv:quant-ph/9906124 .  55. ^ Vaidman, L. "Probability in the Many-Worlds Interpretation of Quantum Mechanics." In: Ben-Menahem, Y., & Hemmo, M. (eds), The Probable and the Improbable: Understanding Probability in Physics, Essays in Memory of Itamar Pitowsky. Springer. 56. ^ Sebens, Charles T; Carroll, Sean M (2014). "Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics". arXiv:1405.7577  [quant-ph].  60. ^ a b Everett FAQ "Does many-worlds violate Ockham's Razor?" 62. ^ Deutsch, D., (1986) 'Three experimental implications of the Everett interpretation', in R. Penrose and C.J. Isham (eds.), Quantum Concepts of Space and Time, Oxford: The Clarendon Press, pp. 204–214. 63. ^ Page, D., (2000) 'Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?' 64. ^ a b c d e f g Plaga, R. (1997). "On a possibility to find experimental evidence for the many-worlds interpretation of quantum mechanics". Foundations of Physics. 27 (4): 559–577. arXiv:quant-ph/9510007 . Bibcode:1997FoPh...27..559P. doi:10.1007/BF02550677.  65. ^ Page, Don N. (2000). "Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?". Eighth Canadian conference on general relativity and relativistic astrophysics. 493: 225. arXiv:gr-qc/0001001 . Bibcode:1999AIPC..493..225P. doi:10.1063/1.1301589. ISBN 156396905X.  67. ^ Penrose, R. The Road to Reality, §21.11 68. ^ Tegmark, Max (1997). "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?". Fortschritte der Physik. 46 (6–8): 855–862. arXiv:quant-ph/9709032 . doi:10.1002/(SICI)1521-3978(199811)46:6/8<855::AID-PROP855>3.0.CO;2-Q. . 
To quote: "What Everett does NOT postulate: "At certain magic instances, the world undergoes some sort of metaphysical 'split' into two branches that subsequently never interact." This is not only a misrepresentation of the MWI, but also inconsistent with the Everett postulate, since the subsequent time evolution could in principle make the two terms...interfere. According to the MWI, there is, was and always will be only one wavefunction, and only decoherence calculations, not postulates, can tell us when it is a good approximation to treat two terms as non-interacting." 70. ^ Simon, Christoph (2009). "Conscious observers clarify many worlds". arXiv:0908.0322  [quant-ph].  73. ^ Arnold Neumaier's comments on the Everett FAQ, 1999 & 2003 76. ^ Stapp, Henry (2002). "The basis problem in many-world theories" (PDF). Canadian Journal of Physics. 80 (9): 1043–1052. arXiv:quant-ph/0110148 . Bibcode:2002CaJPh..80.1043S. doi:10.1139/p02-068.  77. ^ Brown, Harvey R; Wallace, David (2005). "Solving the measurement problem: de Broglie–Bohm loses out to Everett" (PDF). Foundations of Physics. 35 (4): 517–540. arXiv:quant-ph/0403094 . Bibcode:2005FoPh...35..517B. doi:10.1007/s10701-004-2009-3.  78. ^ Rubin, Mark A (2003). "There is No Basis Ambiguity in Everett Quantum Mechanics". Foundations of Physics Letters. 17 (4): 323–341. arXiv:quant-ph/0310186 . Bibcode:2004FoPhL..17..323R. doi:10.1023/B:FOPL.0000035668.37005.e0.  79. ^ Everett FAQ "Does many-worlds violate conservation of energy?" 80. ^ Everett FAQ "How do probabilities emerge within many-worlds?" 81. ^ Everett FAQ "When does Schrodinger's cat split?" 87. ^ Deutsch, David (1985). "Quantum theory, the Church–Turing principle and the universal quantum computer". Proceedings of the Royal Society of London A. 400 (1818): 97–117. Bibcode:1985RSPSA.400...97D. CiteSeerX . doi:10.1098/rspa.1985.0070.  88. ^ Elvridge., Jim (2008-01-02). The Universe – Solved!. pp. 35–36. ISBN 978-1-4243-3626-5. OCLC 247614399. 
58% believed that the Many Worlds Interpretation (MWI) was true, including Stephen Hawking and Nobel Laureates Murray Gell-Mann and Richard Feynman  89. ^ Bruce., Alexandra. "How does reality work?". Beyond the bleep : the definitive unauthorized guide to What the bleep do we know!?. p. 33. ISBN 978-1-932857-22-1. [the poll was] published in the French periodical Sciences et Avenir in January 1998  91. ^ Max Tegmark on many-worlds (contains MWI poll) 94. ^ Survey Results Archived 2010-11-04 at the Wayback Machine. 95. ^ Schlosshauer, Maximilian; Kofler, Johannes; Zeilinger, Anton (2013). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 44 (3): 222–230. arXiv:1301.1069 . Bibcode:2013SHPMP..44..222S. doi:10.1016/j.shpsb.2013.04.004.  96. ^ "The Many Minds Approach". 25 October 2010. Retrieved 7 December 2010. This idea was first proposed by Austrian mathematician Hans Moravec in 1987...  99. ^ Marchal, Bruno (1991). De Glas, M.; Gabbay, D., eds. "Mechanism and personal identity" (PDF). Proceedings of WOCFAI 91. Paris. Angkor.: 335–345.  102. ^ W.M.Itano et al., Phys. Rev. A47,3354 (1993). 103. ^ M. SargentIII, M. O. Scully and W. E. Lamb, Laser physics (Addison-Wesley, Reading, 1974), p. 27. 104. ^ M.O. Scully and H. Walther, Phys. Rev. A 39,5229 (1989). 105. ^ J. Polchinski, Phys. Rev. Lett. 66,397 (1991). 106. ^ M. Gell-Mann and J. B. Hartle, Equivalent Sets of Histories and Multiple Quasiclassical Domains, preprint University of California at Santa Barbara UCSBTH-94-09 (1994). 107. ^ a b H. D. Zeh, Found.Phys. 3,109 (1973). 108. ^ H. D. Zeh, Phys. Lett. A 172,189 (1993). 109. ^ A. Albrecht, Phys. Rev. D 48, 3768 (1993). 110. ^ D. Deutsch, Int. J. Theor. Phys. 24,1 (1985). 111. ^ DeWitt, B (1970). "Quantum mechanics and reality". Physics Today 23, 9, 30 (1970); doi: 113. ^ Max Tegmark: Q and A (Multiverse Philosophy) 115. 
^ The book erroneously attributes the phrase "many worlds" to Everett. It was actually coined by Bryce DeWitt. 116. ^ Michael Crichton (1 January 2013). Timeline. Random House Publishing Group. pp. 121–123, 127–128. ISBN 978-0-345-53901-4.

Further reading

External links
684d4b2045fa02e0
The Trouble With Physics

I’ve just finished reading Lee Smolin’s new book The Trouble With Physics, which should be released and available for sale very soon. It’s a great book, covering some of the same ground as mine, but with significant differences. This won’t be a usual sort of review, since I’ll mainly concentrate on discussing the parts of Smolin’s book that I found most interesting, and my perspective here is kind of unique, having spent a lot of time writing about many of the same subjects that he covers. I will offer some capsule consumer advice: if you have any interest at all in what is going on these days in fundamental physics, you should buy and read both books. If you really are on a tight budget, and your main interest is in the relation of mathematics and physics, you should get mine. If your main interest is in quantum gravity or the foundations of quantum mechanics, you should get Smolin’s. His is more appropriate for someone with little background in this area; mine contains some significantly more demanding material which requires some expertise to appreciate. What most fascinated me about Smolin’s book is the personal story behind it. He was a graduate student at Harvard during the same years that I was an undergraduate there, and describes well that place and time. The standard model had just been formulated a few years earlier, and experimental confirmation was pouring in. Many of the people responsible for the standard model were there at Harvard, and there was more than a bit of justifiable pride and arrogance. Smolin was of a philosophical bent, and initially put off: The atmosphere was not philosophical; it was harsh and aggressive, dominated by people who were brash, cocky, confident, and in some cases insulting to people who disagreed with them. He studied the philosophy of science and was very struck by Paul Feyerabend’s Against Method (there are also some amusing tales of later personal encounters with Feyerabend).
Feyerabend’s philosophy of science has been described as “anarchistic”; he sees no one “scientific method”, but science as a very human activity, in which all sorts of different tactics are used to make progress towards better understanding. Smolin recognized that much as he would prefer a more deeply philosophical approach, it was the much more pragmatic tactics of people like Coleman, Glashow and Weinberg, who wouldn’t be caught dead talking about the nature of space and time, or foundational problems of quantum mechanics, that was what was really having success. Smolin begins his book by explaining what he (and I) see as the most important fact about the past thirty years of theoretical particle physics research. We’re in a historically unprecedented situation, with virtually no progress being made on the fundamental problems of particle physics for a very long time, despite huge efforts. In his description, the field has “hit a wall”; I like to describe it as a victim of its own success. The standard model is just too good. It’s too hard to find an experimental result that disagrees with it, and too hard to come up with theoretical advances that will address some of the things it leaves unexplained. Smolin sees the source of the problem in the field’s insistence on sticking with a way of doing science which worked until 30 years ago, but now has become dysfunctional, with string theory only a symptom of the underlying problem. He writes: I have mentored several talented young people through crises very similar to my own. But I cannot tell them what I told my younger self – that the dominant style was so dramatically successful that it must be respected and accommodated. Now I have to agree with my younger colleagues that the dominant style is not succeeding.
Elsewhere he writes: My hypothesis is that what’s wrong with string theory is the fact that it was developed using the elementary-particle-physics style of research, which is ill-suited to the discovery of new theoretical frameworks… This competitive, fashion-driven style worked when it was fueled by experimental discoveries but failed when there was nothing driving fashion but the views and tastes of a few prominent individuals. Smolin was a student of Stanley Deser’s, and during his graduate student years supergravity was a field that was just taking off. He describes getting to know Peter van Nieuwenhuizen and Martin Rocek and being offered a chance to get into the field at the ground floor, one he passed up because he couldn’t believe that the kind of lengthy algebraic calculations they were doing could give real insight: It was like being offered one of the first jobs at Microsoft or Google. Rocek, van Nieuwenhuizen, and many of those I met through them have made brilliant careers out of supersymmetry and supergravity. I’m sure that from their point of view, I acted like a fool and blew a brilliant opportunity. Smolin didn’t join the Stony Brook supergravity group, but found that he could make a place for himself in the physics community working on quantum gravity, but using particle physicists’ methods: … an easy opportunity opened up while I was a graduate student, which was to attack the problem of quantum gravity using recent methods developed to study the standard model. So I could pretend to be a normal-science kind of physicist and train as a particle physicist. I then took what I learned and applied it to quantum gravity. Smolin ended up with a post-doc at the new ITP in Santa Barbara, which luckily was running a program on quantum gravity that year.
His career tactic almost didn’t pay off: One day, as we were waiting for the results of our applications, a friend came by to tell me that I was unlikely to get any jobs, because it was impossible to compare me with other people. If I wanted a career, I had to stop working on my own ideas and work on what other people were doing, because only then could they rank me against my peers. The most powerful parts of the book are the chapters entitled How Do You Fight Sociology?, and How Science Really Works. They give a detailed and clear diagnosis of the problematic way string theory research is being conducted, and decisions are being made about who deserves a job. Smolin has an insider’s point of view, particularly because he himself worked on string theory: … during the years I worked on string theory, I cared very much what the leaders of the community thought of my work. Just like an adolescent, I wanted to be accepted by those who were the most influential in my little circle. If I didn’t actually take their advice and devote my life to the theory, it’s only because I have a stubborn streak that usually wins out in these situations. For me, this is not an issue of “us” versus “them,” or a struggle between two communities for dominance. These are very personal problems which I have been contending with internally for as long as I have been a scientist. So I sympathize strongly with the plight of string theorists, who want both to be good scientists and to have the approval of the powerful people in their field. I understand the difficulty of thinking clearly and independently when acceptance in your community requires belief in a complicated set of ideas that you don’t know how to prove yourself. This is a trap it took me years to think my way out of. 
Smolin gives many examples of the “groupthink” behavior of the string theory community, while characterizing string theorists as “almost all more open-minded and self-critical and less dogmatic than they are en masse.” He describes string theorists as: … supremely confident both of the truth of string theory and of their superiority over those unable or unwilling to do it. To many string theorists, especially the young ones with no memory of physics before their time, it is incomprehensible that a talented physicist, given the chance, would choose to be anything but a string theorist. …Anyone who hangs out with string theorists encounters this kind of supreme confidence regularly. No matter what the problem under discussion, the one option that never comes up (unless introduced by an outsider) is that the theory might simply be wrong. If the discussion veers to the fact that string theory predicts a landscape and hence makes no predictions, some string theorists will rhapsodize about changing the definition of science. Some string theorists prefer to believe that string theory is too arcane to be understood by human beings, rather than consider the possibility that it might just be wrong. Smolin finds in the string theory community a sense of entitlement and disdain for anyone who works on alternatives to the theory, with major string theory conferences never inviting people who work on alternatives to speak. An editor from Cambridge University Press told him that one string theorist said he would never consider publishing with the press because it had put out a book on LQG (I see why their publishing my book was out of the question…). At string theory conferences Smolin would be asked “what are you doing here?” or told “It’s so nice to see you here! 
We’ve been worried about you.” Some friends explained to him that if he wanted to be considered part of the string theory community he had to work not just on string theory, but on the particular string theory problems that were fashionable at the moment. One problem for physicists trying to get tenured positions that Smolin mentions is that most universities now require letters from 10-15 people evaluating their work, with a small number of negative evaluations sufficient to sink their chances. If you’re working on something other than a mainstream topic, finding 10-15 people who can comment knowledgeably on your work can be impossible. He describes string theorists as mostly submitting the same two or three research proposals. This narrow concentration on a small number of problems is defended by some senior theorists as a “disciplined” approach, one that will more surely lead to progress than encouraging people to pursue a variety of different research directions. Very recently, Smolin sees things changing: Until last year I had hardly ever encountered an expression of doubt from a string theorist. Now I sometimes hear from young people that there is a “crisis” in string theory. “We have lost our leaders,” some of them will say. “Before this, it was always clear what the hot direction was, what people should be working on. Now there’s no real guidance,” or (to each other, nervously) “Is it true that Witten is no longer doing string theory?” One can quantify this new situation by noting that there have been virtually no heavily cited new papers during the past few years, except perhaps for the KKLT one that is part of the landscape story. Smolin notes that many string theorists (including himself) have often been ill-informed about the exact state of knowledge concerning crucial conjectures about string theory. One example he discusses in detail is that of the finiteness of multi-loop string amplitudes. 
The state of the subject is that one knows how to precisely formulate them and can show lack of divergences only up to two loops (this is due to the work of d’Hoker and Phong). At higher genus d’Hoker and Phong have a conjectural definition, but have not yet been able to show that divergences cancel. Few string theorists seem to be aware of this, and some of them react with great hostility and shower anyone who mentions this issue with insults (as I’ve done here on this blog). There’s much else of interest in Smolin’s book, including a lot of material about what he sees as promising ideas in quantum gravity, discussion of research on the foundations of quantum mechanics, and a chapter on “seers”, people doing original work on foundations. These include ‘t Hooft, Penrose, and many others less well-known. While I agree with just about all of what Smolin has to say about string theory, my own background is different and I see promise in very different lines of research than he does. I’m much more skeptical than him about our ability to get useful experimental data on quantum gravity, and see questions about quantum mechanics rather differently. My prejudice is that, lacking experimental guidance, the thing to do is to try and better understand the mathematical structures underlying the standard model. In the past, better physical models have gone hand in hand with deeper mathematics, and I’ll bet this will continue to be true in the future. Quantum mechanics has deep connections to representation theory, a part of mathematics that unifies many different subfields. It seems likely to me that a better understanding of quantum mechanics will come from better understanding representation theory and its connections to physics. There’s a lot of other sorts of material in the book that I haven’t discussed, and I strongly recommend that people read the whole thing. It’s very, very good, and anyone interested enough to follow this blog will find it highly rewarding.
This entry was posted in Book Reviews, Not Even Wrong: The Book. Bookmark the permalink.

92 Responses to The Trouble With Physics

1. Gina says: Thanks for considering my suggestion and the comments, Peter (and thanks nigel and a for the 2 items.) Certainly my suggestion was not meant to replace reading your book or Smolin’s, but it could be helpful for me to understand what the essentials are, e.g. while reading these books or the elegant books on the other side. Your review of Smolin’s book (which is of the same size as what I would like to read) is very personal/philosophical, almost like a gathering of two veterans on the same side of a battle, but not so useful for understanding the essentials. Is the critique of string theory similar to the critique of biologists for not understanding/finding a cure for cancer, or is it stronger? Is the idea that strings rather than point particles can, in principle, lead to a “grand theory” a priori senseless, or just not yet successful, or reasonable-to-start-with but by now clearly a failure? 10^500 looks fishy, but is 10^500 possibilities really that bad? I think the main reason for me to be suspicious of “string bashing” is that it did not lead (yet) to interesting science: namely to scientific papers (not popular reviews and books). Why is that? 2. Lubos Motl says: Dear Peter, you completely missed my point. My point was not to attack other people than you. It was, on the contrary, meant to prove that you are the #1 moron on this crackpots’ discussion forum. 😉 3. Peter Woit says: Again, for the short version, I recommend reading my 2001 article. Yes, 10^500 possibilities really is that bad. All indications are that it makes it completely impossible to ever extract a real prediction from the theory, which is deadly. The situation of string theory is very different from that of cancer research; an analog would be if current cancer treatments not only didn’t help at all with the disease, but made it much worse.
“string-bashing” by itself doesn’t lead to interesting new science, except in the sense that encouraging people to stop working on a failed idea and look for something else to do may have a positive effect. Lee is one of the leaders of a very active research program that is working on new and different ideas, and he has published a long list of scientific articles on this. I have my own ideas about alternatives to string theory, but have written much less about this. For some of what I have written see my long 2002 paper on the arXiv. 4. Peter Woit says: Actually I did get that that was the point of your comment. But, sorry, if you want to write comments here about what a moron I am, you have to avoid at the same time attacking other people as morons, since I’m not going to allow that. 5. Tom Killick says: I am not a physicist. I am an engineer. It has always seemed to me that postulating un-testable hypotheses is more the domain of religion or philosophy than of Physics. I would like to draw your attention to work being done by Charles Francis that is evolutionary, exciting and that does belong in the realm of Physics. This work claims to have bearing on dark matter, the age of the universe and more. It is also eminently testable and appears to explain currently anomalous data and makes specific predictions about future data. I am intrigued by work being done by Charles Francis for much the same reason as you concentrate in your review of Smolin’s book on the areas that most interest you, the personal story. I went to high school (in fact a very prestigious private Catholic boarding school) with Charles Francis in the late sixties and early seventies. I can unequivocally say that he was the most brilliant mathematician and logical thinker I have ever met. At sixteen he could produce elegant, concise and original proofs that allowed me, other classmates and his teachers to begin to understand the power and beauty of mathematical physics.
I know he went on to Cambridge and Birkbeck College in London to finish a number of degrees. He is a very eccentric individual, which has allowed him to focus on solving what he calls “the really important problems” for the last 40 years. His eccentricity has also isolated him from much of what I assume to be the hubris of modern physics. His paper, in his characteristically un-self-effacing way, is titled “Does a Teleconnection between Quantum States account for Missing Mass, Galaxy Ageing, Lensing Anomalies, Supernova Redshift, MOND, and Pioneer Blueshift?” 6. Gina says: Here is my a priori take on this before reading any of these books. 1. The question of quantum gravity and this grand unification is a major intellectual/scientific challenge. 2. String theory offers understanding for this problem as well as deep and interesting insights on various issues from physics. Motl’s list of 12 appears to be very impressive. (And apropos Motl, I even conjecture, perhaps contrary to this example, that most string theorists are neither bullies nor male chauvinists.) Not many other scientific theories can match such a list. String theory is the only major theory that offers such understanding for the unification problem. It also led to great mathematics. (Well, there is some amount of over-sale, and discussions with serious faces of all this multiple-universes stuff, but this is not that unusual.) 3. There are serious problems with string theory concerning the possibility to draw concrete predictions that can be verified. There are also many possible string theories. (I do not share the interpretation that these limitations of the theory are fatal. And maybe we cannot hope for more.) 4. String theory is still rather tentative. It is quite possible that the theory will fade away because of its difficulties and it is also possible that it will be replaced by a different theory which does a better job. It is possible it will prevail.
As for the discussion, I cannot see, nigel, how string theory can be “dangerous”, and I cannot see, Peter, how things can have gotten “much much worse” between 2001 and 2006 (but I can see you being much much more excited.) And “dying (like Maxwell) with a firm belief in a flawed theory”, nigel, can serve as a nerd’s curse but it is not significantly more terrible than just dying. (Unless the death is caused by the theory.) 7. Lubos Motl says: Virtually all string theorists are nice people who never argue with anyone else, they’re not chauvinists, and most of them are feminists. Most of them also think that string/M-theory are robust twin towers that are not threatened by any social effect or passionate proponents of alternative theories or proponents of no theories, and they almost always try to avoid interactions that could lead to tension, which also gives them more time for serious work. Almost no string theorists drive SUVs and they produce a minimum amount of carbon dioxide. 8. Peter Woit says: Discovering that your theory has 10^500 times more solutions to it than you thought it did really does count as “much worse”. Virtually every string theorist will admit that the “landscape” is a huge problem for the theory. Things really are much worse now than they were 5 years ago. 9. anonymous says: I came across this weblog after Amazon automatically recommended me Peter Woit’s new book. I went through it and I was amazed to see the extent that some disagreements can take and the way that people, affiliated with high profile institutions, behave when they should be models for the rest of the community and their students. Congratulations! Personally, I find the situation rather interesting and I really hope something good will come out of this, whether it is in favour of string theory or not. I would like to make a comment, however, on the situation the way I see it.
Please keep in mind that I am not a string theorist and I wouldn’t even call myself a physicist in general. Nevertheless, here it is. Let’s see how long it takes for someone to get a PhD. Usually it is 4 years as an undergraduate and 5 years as a postgraduate. Most of the young people interested in string theory feel that they should start studying the subject while undergraduates. I guess that is why MIT introduces string theory classes and Zwiebach publishes books on “undergraduate” string theory. It has to do with demand; the customers have to be satisfied somehow. Blame it on the hype. In the graduate school you are forced to publish something, as if the rest of the thousands of people that form the “community”, or physics as a science in general, is going to be saved by the students’ publications. I may be wrong on this and it might indeed be necessary to publish as many papers as possible, although I really doubt it. So, what is left? Narrow-minded people, they have been doing strings or whatever all their life so you can’t expect anything better, or disgusted and bored people who realize that life can be exciting without physics and go work for the industry, capitalizing their PhDs by getting nice flats and nice cars and going to nice places for their vacations. I recall an undergraduate telling me that he wants to get his degree as soon as possible and go do a PhD in string theory without doing a masters first, and that is why he chose to study for a 3-year bachelors (this scenario is possible in some countries). I mean, how is that possible? Senior people are very well educated, no doubt about that, but what about the undergraduate/graduate folks? I am looking at the well known QFT book by Peskin and Schroeder sitting in my bookshelf right now. How long does it take for someone to read it, solve the exercises and be able to reproduce the results mentioned in it? In other words, master it? What about general relativity? Cosmology?
Non-relativistic quantum mechanics? Particle physics (with a phenomenological bent)? Statistical physics? Catch up with the rest of the community? Interact with people working in other fields, like for example condensed matter or mathematics? Two or three years? If not, then how does someone attempt to solve a problem when he doesn’t even know what the problem is in the first place? How is critical thinking going to be developed the way the educational system works? 10. Ted Fails says: I’m an amateur at this, but it seems to me that if there is already a huge uproar over explaining the number “one” as arising naturally in physics (i.e., the CC), then how is it that anyone is comfortable with a finite number like 10^500? If this number is not infinite then isn’t it really weird? (If it IS cardinal C, then why is it referred to as 10^500, which, by the way, is a very long way from C.) In a unital algebra, “one” will frequently be present, but I find 10^500 a much more curious number. Anyone please comment. 11. ak says: tg claimed: ‘So I summarize that string theory is a [theory] with a high degree of mathematical consistency but which clearly needs experimentation. It’s difficult math and difficult physics, so therefore time is necessary to come to an appropriate conclusion.’ I point out that the bare fact that string theory centers a discussion of dominating sociological and/or ‘philosophical’ character, which in fact can be led largely decoupled from scientific arguments, ‘disproves’ the above sentences. In fact there seems to be nearly a consensus, even across the frontiers, about a certain lack of ‘predictions’ implied by string theory, while the disagreement centers mainly on the degree of this absence (which puts a claimed-to-be ‘theory of everything’ in a rather ironic light) and/or the ‘interpretation’ of this generally undenied fact (‘it takes time’).
I think that philosophy is fine as long as it supports or manifests the ‘explaining’ aspects of a mathematical formalism which has as its necessary property ‘prediction’. I remember learning this years ago, while still an undergraduate, from a popular book by David Deutsch; his point was more or less that in purely logical terms a theory wouldn’t need to be able to explain phenomena AS LONG as it makes the right (i.e. verifiable) predictions about them. To summarize the above discussion one can only conclude that string theory goes the opposite way: it more or less seems to suggest that prediction is fine but nothing compared to the intriguing implications suggested by the theory’s ‘explanation’. This culminates obviously in this ‘landscape’ argument, where prediction is intrinsically senseless and explanation (the anthropic principle, the multiverse) puts itself into the perspective of initiating a ‘new era’ beyond Copernicus, Einstein et al. I mention that one could parallel these observations with not-quite recent arguments of the German philosophers/sociologists (!) Adorno and Horkheimer, whose ‘Kritische Theorie’ (Critical Theory) predicted and observed exactly the above discussed failure and tendency of modern rationalism and science in general to become ‘mythological regression’, not even as a corollary of the scientific method but as an intrinsic principle hidden in rational progress. The Danish philosopher Kierkegaard already observed in the 19th century that ‘this century has produced more myths than any era before’; I wonder what his comment would be today.
I could add that, concerning my personal experience with ‘modern physics’ and mathematics, I already doubted the mathematical rigor and consistency of ‘the standard model’, which was the point where I changed to mathematics. From the constant efforts of mathematicians to understand recent and non-recent concepts in ‘modern physics’ (only to mention the ‘path integral’ and mirror symmetry) I can only doubt the above mentioned claim of ‘mathematical consistency’ of string theory; even the meaning of this word combination is unclear (what does it mean: a theory which is logically consistent? should this be a particularly ‘nice’ feature of string theory or is it just the most necessary condition for a mathematically formulated physical theory to become science?) and as it seems it is relatively hard to ‘believe’ that it is physics at all. It might be difficult and ‘in some sense’ consistent, but possibly neither mathematically consistent nor difficult as a physical theory, so maybe one could say it is an extremely difficult and sociologically ‘consistent’ metaphysical theory? (I apologize if this became a bit polemic.) 12. Lee Smolin says: Dear Peter, Thanks for the very thoughtful review. I have been distracted by some great personal blessing but see tonight that my book is available on and that Lubos has posted a two star review as you predicted. I am not interested in playing a game with Lubos or anyone else whose modus operandi is ad hominem attacks rather than serious engagement about ideas. (His attribution of some positive comments about my book made by Sabine Hossenfelder to the intellectual inferiority of women is for me so far beyond the pale, I really have no energy to further engage with him.) I wrote a book which treats those with whom it disagrees with a great deal of respect and admiration. The point is not who is a member of what community or who is esteemed by whom, it is about which ideas are right about nature and which are wrong.
I wrote about string theory, not to demean it, but because it was the best idea we had about unification, and if it is in crisis then we have reason to reexamine our presuppositions which led us to believe in it so strongly (and I do mean us.) My book arose out of such a re-examination and its value, for me, is that it contains proposals for what are the wrong ideas that took such a promising idea to its present crisis. So I am not willing to engage with people who are not willing to recognize good faith and respond in kind. But I am of course happy to discuss with those who take the time to read it and respond in the spirit in which it was written. 13. Who says: Congratulations on the aforementioned blessing. Hope all are well: though I and others know you only through your work, many must be wishing you joy. 14. Gina says: Rather than attacking string theory directly, a more promising way for trying to see what is wrong (if anything) with it is to try to question the basics of extremely successful theories which preceded it. Peter, Lee, is there some “QED bashing” in your books? (Even “QCD bashing” is already considered bad sportsmanship.) 15. 10^500 says: Oversimplified answer to Ted Fails: “how does 10^500 come out?” Strings want 6 or 7 extra dimensions, and to predict anything at low energy you must know their geography. Some complicated geography (holes and branes here and there) seems needed to try getting the complicated physics we observe. Strings tell us that all of geography is dynamically fixed by vacuum expectation values of fields. There are many fields: a few fields describe the size and shape of extra dimensions, others tell the amount of each magnetic-like flux that can wind around each hole, etc, etc, etc. With a normal potential, each field has a few possible minima, and a thousand fields can have few^thousand minima.
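[Editor’s note: the few^thousand counting in the comment above can be sketched numerically. The specific values of 3 minima per field and 1000 fields are purely illustrative assumptions, not numbers fixed by string theory.]

```python
from math import log10

# Back-of-envelope landscape counting: if each modulus/flux field
# independently has a "few" metastable minima, the number of vacua
# is (minima per field) ** (number of fields). The values 3 and 1000
# below are hypothetical, chosen only to show how few^thousand
# reaches numbers on the scale of 10^500.
minima_per_field = 3
num_fields = 1000

num_vacua = minima_per_field ** num_fields          # exact big integer
exponent = num_fields * log10(minima_per_field)     # base-10 exponent

print(f"number of vacua ~ 10^{exponent:.0f}")  # prints: number of vacua ~ 10^477
```

With anywhere from a few hundred to a few thousand fields and a handful of minima each, the exponent lands somewhere in the hundreds, which is why the sheer scale of the count matters more than the precise figure 10^500.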
10^500 is so many that, whatever we measure, string theory might have 10^100 solutions that practically look like it, although finding them might be practically impossible. Despite all the impressive achievements, and despite Lubos Motl, this seems the end of the story. 16. Gina says: It was a pleasant surprise that I could read Peter’s 2001 paper feeling that I understood most points Peter had made. This paper is almost disjoint (or orthogonal) to what I asked Peter. (Maybe 2-3 specific “anti string theory” claims can be extracted.) Part of the paper is a sort of philosophy of science look at particle physics and string theory of the last 3 decades geared towards “philosophy of funding of science”. Philosophy of funding of science is an interesting and important subject worthy of discussions and debates but it is a separate issue from the “case against string theory” (as a scientific theory). I would still be happy and grateful to see a summary of the “case” against string theory along the lines I asked. 17. TheGraduate says: To Gina: (Well this is by no means authoritative but the anti-string theory case seems to be roughly as follows:) 1. String theory does not predict anything 2. There is currently no obvious way to modify it so it would predict something 3. String theory is reducing the probability that other (possibly more predictive) approaches will be tried. All of these points can be expanded into sub-cases but I think they cover all the categories of objections. 18. Lubos makes me puke! says: Gina- I would suggest that you visit the archives of this blog starting with March of 2004, which has a good article about Peter and his education and qualifications. You can skim the articles and read the important articles about string theory fairly rapidly. This would help you understand that this is a very complicated problem that has arisen from virtually an idea that never had any of the empirical physical evidence that is required by the scientific method.
The beauty and complexity of the mathematical calculations necessary to explore the extra dimensions of string theory lured a lot of our most brilliant and gifted students to work for many years, only to find that they had invested their time unwisely. Rather than admit their mistake, some, like Motl, will do anything to keep this dogma a science, something it has not been for a long time. We all owe our gratitude to Peter for making us aware of this problem. 19. ak says: I have to admit that I still tend to get a headache reading papers about particle physics, whatever their background and philosophy might be. At least from my point of view they mostly tend to involve a considerable amount of mathematical sophistication but themselves completely lack the beauty and consistency of the mathematical results and theories involved; on the contrary, they tend to mix up rigorous mathematical results with speculative ideas and concepts from mathematical and/or physical ‘folklore’, which makes it extremely difficult for ‘non-insiders’ to decide what is still logically consistent deduction and what is wishful thinking or black-box deduction. I assume, and Peter seems to indicate this, that the ‘problems’ modern physics has faced in its development since the 1970s derive as much from its desynchronization with the mathematical justification of the concepts involved as from its disconnectedness from experimental evidence. Motl’s list seems to reflect either intrinsic features of the theory which seem to be nearly tautological (‘unity of supergravities’) or concepts which are as interesting as they are poorly understood from a mathematical viewpoint (AdS/CFT, mirror symmetry).
From this point of view Peters attempt to re-view the mathematical concepts of the standard model seems to be promising; one could finally hint to an article of Berhelm Booss-Bavnbek, who judges post-war mathematics to be ‘deformed’ in a characteristic manner by aims of ‘fictional warfare’, this point of view is possibly not completely irrelevant to the discussion here (unfortunately in german): ‘Symptome der militärischen Deformation: undurchdringliche Komplexität, rücksichtslose Kreativität und täuschende Vertrautheit’ 20. Pingback: Assistant Professor Lubos Motl’s disgraceful attack on Lee Smolin « Gravity 21. Gina says: Can you please tell me (just a few sentences understandable to a laywoman) what is your opinion on the two claims: 1) That string theory cannot predict anything and will not be able to. 2) That there are over 10^500 possibilities which makes things worse. many thanks in advance –Gina 22. Who says: “The Trouble with Physics” (topic of thread) continues to be #1 on the Amazon general physics bestseller list at least it was 9AM to 4PM pacific time today, could of course be different at 5 PM—list changes hourly. 23. Ming says: I disagree with you 100%. The farce of string theory has shown definitively that more mathematics isn’t the way forward for physics. I think the way forward is that we need to re-examine the basic foundation of the whole edifice of theoretical physics and look for the missing key pieces (of physical concepts, not mathematics) that everybody has so far overlooked. We need to question the foundation of everything and take nothing for granted. Looking for the easy half-hearted way out by using ever fancier mathematics simply won’t work. Unfortunately this kind of work is despised by most practising theoretical physicists, who’re almost all of the “problem solver” variety. 
What we need desperately are more “seers”, as Smolin described them, or perhaps “thinkers”, which may be a better word because it doesn’t have the superstitious connotations of seers. If we look back at the history of theoretical physics, the most prominent advances were almost always made by thinkers and not problem solvers, Einstein being the best example of a great thinker (though he was also a darn good problem solver). Thinkers can think outside of the box (i.e. the existing formalism of theoretical physics), while problem solvers can only work within the box. The almost complete stagnation of theoretical physics for the past half century is due to an almost complete lack of quality thinkers, with all physics jobs going to the best problem solvers. As long as this extreme imbalance between thinkers and problem solvers persists in theoretical physics, I’m afraid there’s no hope for true advances… IMHO 24. Ming says: I just noticed that a recent 5-star review of Smolin’s book has been deleted, while Lubos’ 2-star review is getting a suspiciously high number of “helpful” votes. It looks like someone is actively (and desperately) “reporting” positive reviews of Smolin’s book while artificially generating helpful votes for Lubos’ review. I’ve only seen this kind of behavior on the site of one other “science” book, whose author is a total crackpot writing fake 5-star reviews for his own book while trying to report and delete all negative reviews (the sad thing is, he succeeded). I didn’t know that string theorists/supporters could also fall so low… 25. Ming, I agree with you 100%. The usual paradigm (“the Standard Model and general relativity are great; the only thing left is to put them together”) has led us into a corner. The only way out of this corner is backward. No amount of clever problem-solving will help. 26. ak says: I remark that it could already be ‘mythological regression’ to assume a GUT would actually exist.
Periods of extreme idealism were quite frequent not only in science but also in philosophy or art (Hegel, Kant, Nietzsche) and Lubos gives an example of even biologists thinking of ‘their’ path to the universally saving GUT. The crisis of modern physics is not their lack of progress towards idealism, it is its implicit contact with the natural limitations of human (experimental) insight into nature itself (only to mention the energy scale of reasonably realizable accelerators). Possibly the non-existence of ‘seers’ derives from the fact that there ‘is nothing to be seen’ which would not go beyond the intrinsic limitations of human insight, at least derived from the standards of current technical ability. Apart from the fact that Einstein ‘knew’ what would have to be predicted, there in fact was experimental data (Michelson-Morley) giving at least subtle traces of the directions to choose, are there any comparable ‘traces’ today (Neutrino mass, dark matter?), one could doubt this. Apart from this, from a non-mathematical point of view idealism and regression were always closely connected, I only mention german idealism and its consequences for the history of the last century, it could be a characterizing property of a ‘theory of everything’ that it predicts in fact NOTHING, so string theory follows the ‘dialectic principle’ of human rationalism maybe in its purest form. I am personally quite happy about the existence of small-scale problem solving which, as a matter of exactness of the techniques involved, has from my point of view at least the potential to be of ‘practical use’ in human scale, that is, in human ‘everyday life’. I objected a tendency in theoretical physics to substitute the reality of existence as human beings and the diversity of (even physical) reality by concepts of extreme idealizing, at the same time simplifying, potential. 
The current status of absence of ‘predictiveness’ of string theory is possibly just a corollary of the wish to include apparently universal ‘explanation’, under whose regime details such as ‘mathematics’, ‘logic’, ‘rationalism’, ‘predictiveness’, and in the end maybe ‘science’ itself, seem to lose their relevance or status as the guiding (and limiting) principles they acquired over the thousands of years of growth of human knowledge (at least in the ‘exact sciences’). Possibly it is at the moment where the exact sciences lose their insight into their own limitations that they cease to exist as ‘exact sciences’ and turn themselves into mythology; I already said this above. For my own part, I am quite happy to consider the exact sciences as ‘exact’ but limited, and the other disciplines of human thinking (which EXIST, even to me as a mathematician) as ‘inexact’ but potentially unlimited. Possibly the status of modern physics gives a hint towards the growing dissociation of the scientific worlds, or of human thinking in general, which would in the end lead again towards the concept of the ‘thinker’: maybe it would be Einstein, knowing the history of post-war physics and societies in general, who would conclude that there is a certain wisdom in preserving ‘mythology’ as ‘mythical’ and the ‘exact sciences’ as ‘exact’; maybe this is what would have to be ‘seen’ from his perspective. 27. Lee Smolin says: I’m not doing anything; it is very sad to watch. I wrote two books before, and there was a lot of disagreement, for example from string theory friends who told me that my idea of the landscape of theories was silly and there would soon be a principle of vacuum selection that gave unique predictions. But no one behaved badly. What is really sad is that there are many string theorists who are ethical and act and talk in good faith; if I were them I would be appalled to let my field be so represented.
Besides which, this kind of behavior provides strong evidence for the claim that there is something pathological in the sociology of the field. 28. ks says: Just for the record. String theory has nothing to do with the philosophy of “German idealism”. Attributing the belief in hypothetical stringy objects that are not detectable but shall be present for complicated theoretical reasons to Kant’s critique of pure reason or his a priori categories of mind, or to Hegel’s self-reflection of absolute mind and its projections into history, is hilarious. I do not even want to imagine what Nietzsche would have made out of this drive into self-delusion and science-as-cult beyond its empiricist tradition. Maybe an appraisal of John Horgan’s writings as being sound? German idealism is close to the contemporary radical constructivist/deconstructivist philosophy, to existentialism, phenomenology etc., not to a naive belief in the objective existence of one’s own intellectual phantasies. 29. anonym. says: ” Didn’t know that string theorists/supporters can also fall so low… ” – Ming They are unable to respond any other way; they have no other responses to give. 30. MoveOnOrStayBehind says: Here is a good example of the conclusions German philosophers are led to, in this case concerning the Higgs mechanism. Unfortunately this is in German. What is being said there is that “neither an ontological, nor an epistemological interpretation of the Higgs mechanism is tenable”; this follows from a “critical analysis”. The link given above, “Symptome der militärischen Deformation: undurchdringliche Komplexität, rücksichtslose Kreativität und täuschende Vertrautheit”, is another beautiful example of political ideology mixed up with science. Good that there are other parts of the world where science is moving on, although I am getting concerned about the US too after reading the opinions in this blog here. 31.
ak says: no, there seem to be some ‘misreceptions’. I did not compare string theory to ‘German idealism’; the argument was that to believe in the existence of a GUT could be a form of idealism. There is no such thing as a pure ‘naive belief in the objective existence of one’s own intellectual phantasies’; string theory takes place against a sociological/philosophical background, and I just point out that it was Einstein who co-initiated the belief in the existence of a GUT. My point was that real progress in modern physics could mean being a little bit closer to Kierkegaard’s criticism of Hegel (opposing his dominant position, his claim of unifying logical concepts etc.), and from what I understood, Peter and Lee move a bit in this direction. By the way, I don’t think that ‘German idealism’, as a philosophical phenomenon, is very close to existentialism or deconstructivism, and ‘to move on or to stay behind’ is exactly what this discussion is about. 32. ak says: I have to correct myself, in the sense that the point is that to rethink modern physics with the explicit aim of a GUT remains pure ‘idealism’ as long as there are no fundamental experimental guidelines to show what exactly a new theory should predict or explain BEYOND the capabilities of the existing models. The unexplained constancy of the speed of light in a vacuum was Einstein’s starting point; maybe I am not well informed, but I do not see that there are any comparable fundamental facts pointing beyond the existing models today.
In this situation the string theorists can hardly blame non-string-theorists for developing alternative pictures. One could for instance raise the question why no one seems to be interested in the notion of the ‘symplectic spinor’ or the symplectic Dirac operator. From a physical point of view the symplectic Dirac equation could possibly be the starting point for a geometric theory of bosons (since it involves the ‘symplectic Clifford algebra’); a not quite new paper shows that there is a natural notion of pseudo-differential quantisation involved over sections of a certain line subbundle of the symplectic spinor bundle, and on the other hand the metaplectic representation implies the Schrödinger equation for linear Hamiltonian systems on R^{2n} and is reflected in some sort of Lie derivative. The picture is admittedly not quite coherent, but as a physicist I could possibly just ‘couple’, for instance over Kähler or Calabi-Yau manifolds, the Dirac operator over the ordinary spinor bundle with the symplectic Dirac operator over the symplectic spinor bundle (taking tensor products and operator ‘sums’) and see what ‘happens’. For Calabi-Yau manifolds a natural notion of Maslov index would be involved and would give rise to some notion of ‘quantisable’ Lagrangian foliations, which would correspond to the dimension of the kernel of the square of some restriction of the symplectic part of the coupled operator, etc. etc.; maybe a new ‘TOE’, who knows. 33. ks says: Actually this goal must be attributed to Newton and all his followers. Einstein and the other quantum theorists of the first generation destroyed the old worldview and broke it into two incompatible parts, without losing the researchers’ inherent destination of a complete and consistent physical explanation of the whole world. There is no point to be made of the non-existence of a GUT, because its existence is undecidable unless it exists. It can’t be disproved by reason. We can only get stuck.
Hence demystification doesn’t help us because there is no other side of true reason but just a decision to make for everyone when its time to give up, which is finally subjective. What really happened with the desire of a GUT is that it became an aspect of mass/pop-culture and its proponents rock-stars of popular science magazines ( “Einsteins legacy” etc. ) and books. Physicists and to a lesser degree mathematicians are our last heros the last people who truly “transgress the boundaries” which is properly mythological and part of the fascination. Besides the person Stephen Hawking it was ST that had been in the focus of the economy of attention of fundamental science in the last decades. String theory is both a highly esoteric and speculative branch of mathematical physics and the pop culture of the TOE. This tension makes it interesting even for visitors who are by no means “active researchers” in the sense of Distler. I’m not claiming that depressing the public about the TOE wouldn’t be healthy for the theoretical physics community even if it’s going to shrink to the size it had at Einsteins time. This is undisputable. Reason without experience is empty, experience without reason is blind, as Kant said. 34. D R Lunsford says: ks said Well I don’t really agree. The desire for unity is completely justified, as is seeking it in geometry. All three major developments since Newton – Maxwell, Einstein, and Dirac – are based on geometry and the idea of unity, or rather as Finkelstein would say, “relativization”, which amounts to simplification of the underlying Lie algebra of observables by decontraction. The problem seems to be that the current practitioners are just uncommonly bad at finding the key physical ideas, because they are too enmeshed in arcane mathematics. Klein, Courant, Weyl, all warned us this would happen. 35. 
ak says: I am afraid I do not understand the dialectic principle of these two points: ‘The desire for unity is completely justified, as is seeking it in geometry.’ The ‘desire for unity’ is claimed to be derivable from purely mathematical reasoning; at the same time, being ‘enmeshed in arcane mathematics’ is blamed for the inability to find ‘the key physical ideas’. This could hint at some key misunderstanding of string theorists’ reasoning: taking, on the one hand, mathematics as a guideline for fundamental aims, and on the other hand attributing the subsequent experimental deficits of the theory to the inability of ‘practitioners’ to find the key physical ideas while being absorbed in mathematical reasoning. To resolve this one should possibly follow the contrary strategy: take experimental facts as the origin of thinking (not taking experiments as the corollary of mathematical idealism) and use plain mathematics as the tool to derive a theory from this experimental starting point (I point out that my above statement about a possible ‘TOE’ derived from symplectic spinors was of a substantially ironic character). 36. D R Lunsford says: ak said No one ever claimed it was derivable from “purely mathematical reasoning”. Indeed the intuitionists firmly believe that such a thing does not exist, and that both math and physics are stimulated by mutual interaction. Finding the right physical idea is an irreducible activity – finding its mathematical realization is not. By “arcane” I mean – disconnected from “physical reasonableness”. Certainly there are many complex mathematical structures that are eminently reasonable. The main activity of the physicist is to come up with physical ideas that are reasonable and tractable. That is what is completely missing these days. 37. ak says: I still do not agree with the form of this conclusion.
It is a myth that Einstein derived relativity from pure physical intuition; there was an experimental guiding principle, the Michelson–Morley experiment, which led to concepts like ‘Lorentz invariance’ and Minkowski space. From THIS point, it was in fact a pure ‘thought experiment’ to generalize to curvature, geodesics and so on, but experiment could in fact qualify the result of these thought experiments as true. In the current situation of modern physics there seems to be neither a clear physical guiding principle derived from experiment nor a possible way to judge the results of a wide variety of thought experiments, so one cannot in fact blame the state of string theory on the absence of ‘thinkers’ producing reasonable physics. It is exactly this belief in ‘new physics emerging from the human brain’ which lies at the esoteric origin of string theory and potentially also of related concepts. 38. amused says: Thanks for this review Peter, I’m looking forward to reading the book (and yours). Smolin makes some astute observations, but it’s one thing to describe the problem and another thing to find a viable solution. As Smolin points out, young people’s job prospects in formal particle theory are determined by how they are viewed by senior influential physicists, and since most of the latter are string theorists (at least at the leading US universities) it puts the non-stringers at a huge disadvantage. As far as I can tell from reviews of the book and what he has written elsewhere, Smolin’s solution for this seems to be some kind of “democratisation” where funding and jobs get distributed over various areas in proportion to the number of people working in them. What do you think about this? Personally I’m against it. One reason is that it just replaces preferential weighting for string theorists by preferential weighting for people working on some broader selection of areas. What if my preferred research area is not among these?
Or if the representative for my area on the “committee” is not very eloquent (he neglected to develop his salesman skills through hyping of our area to the public) and therefore can’t get us a decent share of the pie? Or if I suddenly find that there is something exciting in a non-represented area that I want to work on? Besides that, I do think these kind of things should be left as much as possible to “market forces”. The problem is that at the moment we don’t have a genuine free market; it’s more like a monopoly a la Microsoft. Anyway, if Peter will indulge me I’ld like to propose a different solution: How about just letting people work on whatever they like, without preferential weightings for any particular areas, and evaluating them solely on the basis of the progress they make? This requires of course some objective measure for evaluating “progress”. We need something that can be used to evaluate and compare people across different areas. The normal thing in academia is to base this on journal publications. Problem is that people don’t care much about journals in theoretical hep these days. When you write a paper you stick in on the archives, where it gets seen by the senior influential people in your field, and your stock goes up or down depending on what they think of it. Subsequent publication of the paper in a supposedly major journal is usually routine and doesn’t mean much. This situation is ok for evaluating and comparing people within the same area, but how are you supposed to compare people across different areas? Although they publish in the same journals there is no way to tell the relative quality and significance of their works just from “major” journal publications, since it doesn’t take much to get published. Similar things can be said about citation counts (which not only measure the significance of the paper but also the well-connectedness of the author and the size and popularity of the area in which the paper lies). 
However, there remains one physics journal which is still non-trivial to publish in: Physical Review Letters. So how about using the number of publications there as the evaluation measure? (The weight of each paper should of course be normalised according to the number of co-authors, with a further appropriate reduction for young people who are just going for a ride on the coattails of seniors.) While it is true that some areas of physics (e.g. condensed matter) are easier to get published in PRL than formal particle theory, within the latter area there don’t seem to be any biases (e.g. it is not unusual for both string theory and LQG papers to get published in PRL), so it would seem to be a level playing field for all. The string theorists surely won’t have any objection to this – since they are so brilliant they will surely welcome the opportunity to prove it in an objective setting. In fact I’m sure it’s only their natural modesty which has prevented them from filling up the pages of PRL already. It will also give a chance to the hardcore younger stringers to finally silence those “penis envy”-afflicted cynics out there, who go around disparaging them for being mindless clones, absorbing what they are spoonfed like sponges but incapable of doing anything original and significant on their own. (Whoops, seems like I might have slipped into string-bashing mode at the end there ;)) 39. Ron Macnaughton says: I’m a high school physics teacher who just yesterday was asked what I thought of string theory. My student had trouble understanding what he thought was the deepest theory developed so far. I explained how most astronomers used to believe planets moved in circles, or circles on circles. Eventually Kepler showed that only elliptical orbits explained the observed positions of Mars.
I gave the opinion that string theory makes some assumptions and it might come close to explaining reality, but I didn’t think it would ultimately be successful, just as epicycles went into the dustbin of science. I said that’s only a high school teacher’s opinion, but many brilliant people worked on it and believed it. I think the main problem is that string theory doesn’t seem to include general relativity. I find the sociology of science rather interesting. We talk about heroes who have a pure drive for understanding, but Tycho Brahe gave Kepler the Mars problem because he thought it would be too hard for the young whippersnapper to solve. Correct theories (plate tectonics) get rejected for decades. Wikipedia still lists only string theory as a theory of quantum gravity, even though many alternatives are out there. I read “moron” comments on this blog which I find embarrassing when I hope to inspire young people to take up science as a career. I can’t wait for my copies of both books to arrive. 40. woit says: Hi Rob, An important thing to explain to students about science is that it makes testable predictions that can be checked. Things like string theory are very speculative ideas that some people hope will someday become legitimate, testable science, but they’re not there yet. Some of us think it will never get there; some are more optimistic. 41. Pingback: The Trouble With Physics | Cosmic Variance Comments are closed.
Free electron model

In three dimensions, the density of states of a gas of fermions is proportional to the square root of the kinetic energy of the particles.

In solid-state physics, the free electron model is a simple model for the behaviour of valence electrons in a crystal structure of a metallic solid. It was developed principally by Arnold Sommerfeld, who combined the classical Drude model with quantum-mechanical Fermi–Dirac statistics; hence it is also known as the Drude–Sommerfeld model. The free electron empty lattice approximation forms the basis of the band structure model known as the nearly free electron model. Given its simplicity, it is surprisingly successful in explaining many experimental phenomena.

Ideas and assumptions

As in the Drude model, valence electrons are assumed to be completely detached from their ions (forming an electron gas). As in an ideal gas, electron–electron interactions are completely neglected; the electrostatic fields in metals are weak because of the screening effect. The crystal lattice is not explicitly taken into account. A quantum-mechanical justification is given by Bloch's theorem: an unbound electron moves in a periodic potential as a free electron in vacuum, except that the electron mass m becomes an effective mass m* which may deviate considerably from m (one can even use a negative effective mass to describe conduction by electron holes). Effective masses can be derived from band structure computations. While the static lattice does not hinder the motion of the electrons, electrons can be scattered by impurities and by phonons; these two interactions determine electrical and thermal conductivity (superconductivity requires a more refined theory than the free electron model). According to the Pauli exclusion principle, each phase-space element (Δk)³(Δx)³ can be occupied only by two electrons (one per spin quantum number).
This restriction of available electron states is taken into account by Fermi–Dirac statistics (see also Fermi gas). The main predictions of the free electron model are derived by the Sommerfeld expansion of the Fermi–Dirac occupancy for energies around the Fermi level.

Energy and wave function of a free electron

(Figure: a plane wave traveling in the x-direction.)

For a free particle the potential is $V(\mathbf{r}) = 0$. The Schrödinger equation for such a particle, like the free electron, is[1][2][3]

$$-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}, t) = i\hbar\, \frac{\partial}{\partial t} \psi(\mathbf{r}, t)$$

The wave function can be split into a solution of a time-dependent and a solution of a time-independent equation. The solution of the time-dependent equation is

$$\psi(t) = e^{-i\omega t}$$

with energy

$$E = \hbar \omega$$

The solution of the time-independent equation is

$$\psi_{\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{\Omega}}\, e^{i \mathbf{k} \cdot \mathbf{r}}$$

with a wave vector $\mathbf{k}$; $\Omega$ is the volume of space where the electron can be found. The electron has a kinetic energy

$$E = \frac{\hbar^2 k^2}{2m}$$

The plane wave solution of this Schrödinger equation is

$$\psi_{\mathbf{k}}(\mathbf{r}, t) = \frac{1}{\sqrt{\Omega}}\, e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)}$$

For solid-state and condensed-matter physics the time-independent solution is of major interest. It is the basis of electronic band structure models that are widely used in solid-state physics for model Hamiltonians like the nearly free electron model and the tight-binding model, and of different models that use a muffin-tin approximation. The eigenfunctions of these Hamiltonians are Bloch waves, which are modulated plane waves.

Dielectric function of the electron gas

On a scale much larger than the interatomic distance a solid can be viewed as an aggregate of a negatively charged plasma of the free electron gas and a positively charged background of atomic cores. The background is the rather stiff and massive background of atomic nuclei and core electrons, which we will consider to be infinitely massive and fixed in space. The negatively charged plasma is formed by the valence electrons of the free electron model, which are uniformly distributed over the interior of the solid.
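The dispersion relation and the Fermi–Dirac occupancy above can be evaluated directly in a short numerical sketch (not part of the original article; the wave vector 1.36e10 m⁻¹ below is an assumed, copper-like value):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
K_B = 1.380649e-23       # Boltzmann constant, J/K
EV = 1.602176634e-19     # 1 eV in J

def free_electron_energy(k):
    """Kinetic energy E = hbar^2 k^2 / (2 m) of a free-electron plane wave."""
    return (HBAR * k) ** 2 / (2 * M_E)

def fermi_dirac(E, mu, T):
    """Fermi-Dirac occupancy f(E) = 1 / (exp((E - mu) / (kB T)) + 1)."""
    return 1.0 / (math.exp((E - mu) / (K_B * T)) + 1.0)

# Assumed copper-like Fermi wave vector, ~1.36e10 1/m
E_F = free_electron_energy(1.36e10)
print(E_F / EV)                      # roughly 7 eV

# At E = mu the occupancy is exactly 1/2, at any temperature
print(fermi_dirac(E_F, E_F, 300.0))  # 0.5
```

States well below the chemical potential come out almost fully occupied and states well above it almost empty, which is the step-like behaviour the Sommerfeld expansion is built around.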
If an oscillating electric field is applied to the solid, the negatively charged plasma tends to move a distance $x$ apart from the positively charged background. As a result, the sample is polarized and there will be an excess charge at the opposite surfaces of the sample. The surface charge density is

$$\sigma = \pm n e x$$

which produces a restoring electric field in the sample

$$E = \frac{n e x}{\epsilon_0}$$

The dielectric function of the sample is expressed as

$$\epsilon(\omega) = \frac{D(\omega)}{\epsilon_0 E(\omega)} = 1 + \frac{P(\omega)}{\epsilon_0 E(\omega)}$$

where $D(\omega)$ is the electric displacement and $P(\omega)$ is the polarization density. The electric field and polarization densities are

$$E(\omega) = E_0\, e^{-i\omega t}, \qquad P(\omega) = P_0\, e^{-i\omega t}$$

and the polarization density, with $n$ the electron density, is

$$P(\omega) = -n e\, x(\omega)$$

The force $F$ of the oscillating electric field causes the electrons with charge $e$ and mass $m$ to accelerate with an acceleration $a$,

$$F = -e E = m a = m \frac{d^2 x}{dt^2}$$

which, after substitution of $E$, $P$ and $x$, yields a harmonic oscillator equation. After a little algebra the relation between polarization density and electric field can be expressed as

$$P(\omega) = -\frac{n e^2}{m \omega^2}\, E(\omega)$$

The frequency-dependent dielectric function of the solid is

$$\epsilon(\omega) = 1 - \frac{n e^2}{\epsilon_0 m \omega^2}$$

At a resonance frequency $\omega_p$, called the plasma frequency, the dielectric function changes sign from negative to positive and the real part of the dielectric function drops to zero:

$$\omega_p = \sqrt{\frac{n e^2}{\epsilon_0 m}}$$

This is a plasma oscillation resonance, or plasmon. The plasma frequency is a direct measure of the square root of the density of valence electrons in a solid. Observed values are in reasonable agreement with this theoretical prediction for a large number of materials.[4] Below the plasma frequency, the dielectric function is negative and the field cannot penetrate the sample; light with angular frequency below the plasma frequency will be totally reflected. Above the plasma frequency the light waves can penetrate the sample.

Solution of the Schrödinger equation

For a free particle the potential is $V(\mathbf{r}) = 0$, so the Schrödinger equation for the free electron is[1][2][3]

$$-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}, t) = i\hbar\, \frac{\partial}{\partial t} \psi(\mathbf{r}, t)$$

This is a type of wave equation that has numerous kinds of solutions.
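The plasma-frequency formula above is easy to check numerically. The sketch below uses an aluminium-like valence-electron density of 1.81e29 m⁻³; that density is an illustrative assumption, not a figure from the article:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg
HBAR = 1.054571817e-34       # reduced Planck constant, J*s

def plasma_frequency(n):
    """omega_p = sqrt(n e^2 / (eps0 m)), in rad/s, for electron density n (m^-3)."""
    return math.sqrt(n * E_CHARGE ** 2 / (EPS0 * M_E))

def dielectric(omega, omega_p):
    """Free-electron dielectric function eps(omega) = 1 - (omega_p / omega)^2."""
    return 1.0 - (omega_p / omega) ** 2

# Aluminium-like density: three valence electrons per atom (assumed value)
wp = plasma_frequency(1.81e29)
print(HBAR * wp / E_CHARGE)       # plasma energy in eV, around 15-16 eV
print(dielectric(0.5 * wp, wp))   # below omega_p: negative, field excluded
print(dielectric(2.0 * wp, wp))   # above omega_p: positive, sample transparent
```

The sign change of `dielectric` at `wp` is exactly the reflection-to-transparency transition described in the text.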
One way of solving the equation is by splitting it into a time-dependent oscillator equation and a space-dependent wave equation, substituting a product of solutions

$$\psi(\mathbf{r}, t) = \psi(\mathbf{r})\, \psi(t)$$

In this way the Schrödinger equation can be split into a time-dependent part and a time-independent part.

Solution of the time-dependent equation

The peculiar time-dependent part of the Schrödinger equation is, unlike the Klein–Gordon equation for pions and most of the other well-known wave equations, a first-order-in-time differential equation with a 90° out-of-phase driving mechanism, while most oscillator equations are second-order-in-time differential equations with 180° out-of-phase driving mechanisms. The equation that has to be solved is

$$i\hbar\, \frac{\partial}{\partial t} \psi(t) = E\, \psi(t)$$

Its solution is the complex exponential

$$\psi(t) = e^{-i E t / \hbar}$$

whose imaginary exponent is proportional to the energy. The imaginary exponent can be transformed to an angular frequency

$$\omega = \frac{E}{\hbar}, \qquad \psi(t) = e^{-i \omega t}$$

The wave function now has a stationary and an oscillating part

$$\psi(\mathbf{r}, t) = \psi(\mathbf{r})\, e^{-i \omega t}$$

The stationary part is of major importance to the physical properties of the electronic structure of matter.

Solution of the time-independent equation

The wave function of free electrons is in general described as the solution of the time-independent Schrödinger equation for free electrons

$$-\frac{\hbar^2}{2m} \nabla^2 \psi(\mathbf{r}) = E\, \psi(\mathbf{r})$$

The Laplace operator in Cartesian coordinates is

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$$

The wave function can be factorized for the three Cartesian directions

$$\psi(\mathbf{r}) = \psi(x)\, \psi(y)\, \psi(z)$$

Now the time-independent Schrödinger equation can be split into three independent parts for the three different Cartesian directions,

$$-\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \psi(x) = E_x\, \psi(x)$$

and likewise for $y$ and $z$. As a solution an exponential function is substituted in the time-independent Schrödinger equation,

$$\psi(x) = A\, e^{i k_x x}$$

The solution gives the exponent $k_x = \sqrt{2 m E_x}/\hbar$, which yields the wave equation and the energy

$$E = E_x + E_y + E_z = \frac{\hbar^2}{2m}\left(k_x^2 + k_y^2 + k_z^2\right)$$

With the normalization $A = 1/\sqrt{\Omega}$ and the wave vector magnitude $k = \sqrt{k_x^2 + k_y^2 + k_z^2}$ we arrive at the plane wave solution, with a wave function

$$\psi_{\mathbf{k}}(\mathbf{r}) = \frac{1}{\sqrt{\Omega}}\, e^{i \mathbf{k} \cdot \mathbf{r}}$$

for free electrons with a wave vector $\mathbf{k}$ and a kinetic energy

$$E = \frac{\hbar^2 k^2}{2m}$$

in which $\Omega$ is the volume of space occupied by the electron.
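The separation of variables above can be verified numerically: with the dispersion ω = ħk²/2m, the traveling plane wave satisfies the free Schrödinger equation. A minimal finite-difference check, in natural units ħ = m = 1 (an assumption made only for this sketch):

```python
import cmath

HBAR, M = 1.0, 1.0                 # natural units for the check
k = 2.0
omega = HBAR * k ** 2 / (2 * M)    # dispersion from the time-independent equation

def psi(x, t):
    """Traveling plane wave exp(i(kx - omega*t)) in one dimension."""
    return cmath.exp(1j * (k * x - omega * t))

def d_dt(f, x, t, h=1e-6):
    """Central-difference time derivative."""
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

def d2_dx2(f, x, t, h=1e-4):
    """Central-difference second spatial derivative."""
    return (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h ** 2

x0, t0 = 0.3, 0.7
lhs = 1j * HBAR * d_dt(psi, x0, t0)               # i hbar dpsi/dt
rhs = -HBAR ** 2 / (2 * M) * d2_dx2(psi, x0, t0)  # -(hbar^2/2m) d2psi/dx2
print(abs(lhs - rhs))  # ~0: the plane wave solves the free Schrodinger equation
```

If one perturbs `omega` away from ħk²/2m, the two sides no longer agree, which is the content of the dispersion relation.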
The traveling plane wave solution

The product of the time-independent stationary wave solution and the time-dependent oscillator solution gives the traveling plane wave solution

$\psi_{\mathbf{k}}(\mathbf{r}, t) = \frac{1}{\sqrt{\Omega}}\, e^{i(\mathbf{k} \cdot \mathbf{r} - \omega t)},$

which is the final solution for the free electron wave function.

Fermi energy

According to the Pauli principle, the electrons in the ground state occupy all the lowest-energy states, up to some Fermi energy $E_{\mathrm{F}}$. Since the energy is given by $E = \hbar^2 k^2 / 2m$, this corresponds to occupying all the states with wave vectors $k \le k_{\mathrm{F}}$, where $k_{\mathrm{F}}$ is the so-called Fermi wave vector, given by

$k_{\mathrm{F}} = \left( \frac{3 \pi^2 N}{V} \right)^{1/3},$

where N is the total number of electrons in the system, and V is the total volume. The Fermi energy is then

$E_{\mathrm{F}} = \frac{\hbar^2 k_{\mathrm{F}}^2}{2m} = \frac{\hbar^2}{2m} \left( \frac{3 \pi^2 N}{V} \right)^{2/3}.$

In a nearly-free-electron model of a Z-valent metal, one can replace the number of electrons with ZN, where N is the total number of metal ions.

Density of states

The density of states (DOS) corresponds to electrons with a spherically-symmetric parabolic dispersion with two electrons (one of each spin) per each "quantum" of the phase space, $(2\pi\hbar)^3$. In 3D, this corresponds to

$g(E) = \frac{V}{2\pi^2} \left( \frac{2m}{\hbar^2} \right)^{3/2} \sqrt{E},$

where V is the total volume. Combining these expressions for the Fermi energy and the DOS, one can show that the following relationship holds at the Fermi level:

$g(E_{\mathrm{F}}) = \frac{3}{2} \frac{ZN}{E_{\mathrm{F}}},$

where Z is the charge of each of the N metal ions in the crystal.

References

1. Albert Messiah (1999). Quantum Mechanics. Dover Publications. ISBN 0-486-40924-4.
2. Stephen Gasiorowicz (1974). Quantum Physics. Wiley & Sons. ISBN 0-471-29281-8.
3. Eugen Merzbacher (2004). Quantum Mechanics (3rd ed.). Wiley & Sons. ISBN 978-9971-5-1281-1.
4. C. Kittel (1953–1976). Introduction to Solid State Physics. Wiley & Sons. ISBN 0-471-49024-5.
Eigenvalues and eigenvectors

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes only by a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation

$T(\mathbf{v}) = \lambda \mathbf{v},$

where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. If the vector space V is finite-dimensional, then the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left hand side and a scaling of the column vector on the right hand side in the equation

$A \mathbf{v} = \lambda \mathbf{v}.$

There is a correspondence between n by n square matrices and linear transformations from an n-dimensional vector space to itself. For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations.[1][2] Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction that is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.[3] Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations.
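The defining equation $A\mathbf{v} = \lambda\mathbf{v}$ can be illustrated numerically; the matrix and vector here are assumed examples chosen for simplicity:

```python
import numpy as np

# Assumed example: a symmetric 2x2 matrix for which [1, 1] happens to be an eigenvector
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])

Av = A @ v           # equals 3 * v, so v is an eigenvector of A
lam = Av[0] / v[0]   # recover the scalar factor, the eigenvalue 3
```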
The prefix eigen- is adopted from the German word eigen for "proper", "characteristic".[4] Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation

$T(\mathbf{v}) = \lambda \mathbf{v},$

referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.

[Figure: In this shear mapping the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it doesn't change direction, and since its length is unchanged, its eigenvalue is 1.]

The Mona Lisa example pictured at right provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation. Notice that points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction.
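The shear mapping just described can be sketched numerically; the matrix [[1, 1], [0, 1]] and the shear amount are assumptions for illustration:

```python
import numpy as np

# Horizontal shear: x -> x + y, y -> y (assumed unit shear)
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])

h = np.array([1.0, 0.0])   # horizontal vector: direction and length preserved
w = np.array([1.0, 1.0])   # vector with a vertical component: direction changes

Sh = S @ h                 # still [1, 0], so h is an eigenvector with eigenvalue 1
Sw = S @ w                 # [2, 1]: tilted, so w is not an eigenvector
```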
Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like $\frac{d}{dx}$, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as

$\frac{d}{dx} e^{\lambda x} = \lambda e^{\lambda x}.$

Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices that are also referred to as eigenvectors. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation above for a linear transformation can be rewritten as the matrix multiplication

$A \mathbf{v} = \lambda \mathbf{v},$

where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:

• The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.[5][6]
• The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T.[7][8]
• If the set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.
In the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.[9] Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[10] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[11] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[12][13] Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[14] Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[11] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[12] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[11] and Clebsch found the corresponding result for skew-symmetric matrices.[12] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[11] In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[15] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[16] At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[17] He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904,[18] though he may have been following a related usage by Helmholtz. 
For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[19] The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G.F. Francis[20] and Vera Kublanovskaya[21] in 1961.[22]

Eigenvalues and eigenvectors of matrices[edit]

Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[23][24] Furthermore, linear transformations can be represented using matrices,[1][2] which is especially common in numerical and computational applications.[25] Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors

$\mathbf{v} = \begin{bmatrix} 1 \\ -3 \\ 4 \end{bmatrix} \quad \text{and} \quad \mathbf{w} = \begin{bmatrix} -20 \\ 60 \\ -80 \end{bmatrix}.$

These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that

$\mathbf{v} = \lambda \mathbf{w}.$

In this case λ = −1/20. Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,

$A \mathbf{v} = \mathbf{w},$

where, for each row,

$w_i = A_{i1} v_1 + A_{i2} v_2 + \cdots + A_{in} v_n.$

If it occurs that v and w are scalar multiples, that is if

$A \mathbf{v} = \mathbf{w} = \lambda \mathbf{v}, \qquad (1)$

then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. Equation (1) can be stated equivalently as

$(A - \lambda I) \mathbf{v} = \mathbf{0}, \qquad (2)$

where I is the n by n identity matrix.

Eigenvalues and the characteristic polynomial[edit]

Equation (2) has a non-zero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are the values of λ that satisfy the equation

$\det(A - \lambda I) = 0. \qquad (3)$

Using Leibniz' rule for the determinant, the left hand side of Equation (3) is a polynomial function of the variable λ, and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always $(-1)^n \lambda^n$.
This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n by n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,

$\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda), \qquad (4)$

where each λi may be real but in general is a complex number. The numbers λ1, λ2, ... λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A. As a brief example, which is described in more detail in the examples section later, consider the matrix

$M = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$

Taking the determinant of (M − λI), the characteristic polynomial of M is

$\det(M - \lambda I) = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3.$

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of M. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation Mv = λv. In this example, the eigenvectors are any non-zero scalar multiples of

$\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \mathbf{v}_{\lambda=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have non-zero imaginary parts. The entries of the corresponding eigenvectors therefore may also have non-zero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers, or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues are complex algebraic numbers. The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real.
Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.

Algebraic multiplicity[edit]

Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that $(\lambda - \lambda_i)^k$ evenly divides that polynomial.[8][26][27] Suppose a matrix A has dimension n and dn distinct eigenvalues. Whereas Equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms, each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,

$\det(A - \lambda I) = (\lambda_1 - \lambda)^{\mu_A(\lambda_1)} (\lambda_2 - \lambda)^{\mu_A(\lambda_2)} \cdots (\lambda_d - \lambda)^{\mu_A(\lambda_d)}.$

If d = n then the right hand side is the product of n linear terms and this is the same as Equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as

$1 \le \mu_A(\lambda_i) \le n, \qquad \mu_A = \sum_{i=1}^{d} \mu_A(\lambda_i) = n.$

If μA(λi) = 1, then λi is said to be a simple eigenvalue.[27] If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.

Eigenspaces, geometric multiplicity, and the eigenbasis for matrices[edit]

Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy Equation (2),

$E = \{ \mathbf{v} : (A - \lambda I) \mathbf{v} = \mathbf{0} \}.$

On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any non-zero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI).
E is called the eigenspace or characteristic space of A associated with λ.[7][8] In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of ℂn. Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written (u,v) ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if vE and α is a complex number, (αv) ∈ E or equivalently Av) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ. The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γA(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as

$\gamma_A(\lambda) = n - \operatorname{rank}(A - \lambda I).$

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. The condition that γA(λ) ≤ μA(λ) can be proven by considering a particular eigenvalue ξ of A and diagonalizing the first γA(ξ) columns of A with respect to the eigenvectors of ξ, as described in a later section. The resulting similar matrix B is block upper triangular, with its top left block being the diagonal matrix ξIγA(ξ).
As a result, the characteristic polynomial of B will have a factor of (ξ − λ)^γA(ξ). The other factors of the characteristic polynomial of B are not known, so the algebraic multiplicity of ξ as an eigenvalue of B is no less than the geometric multiplicity of ξ as an eigenvalue of A. The last element of the proof is the property that similar matrices have the same characteristic polynomial. Suppose A has dn distinct eigenvalues λ1, λ2, ..., λd, where the geometric multiplicity of λi is γA(λi). The total geometric multiplicity of A,

$\gamma_A = \sum_{i=1}^{d} \gamma_A(\lambda_i), \qquad d \le \gamma_A \le n,$

is the dimension of the union of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If γA = n, then

• The union of the eigenspaces of all of A's eigenvalues is the entire vector space ℂn
• A basis of ℂn can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis
• Any vector in ℂn can be written as a linear combination of eigenvectors of A

Additional properties of eigenvalues[edit]

Let A be an arbitrary n by n matrix of complex numbers with eigenvalues λ1, λ2, ..., λn. Each eigenvalue appears μA(λi) times in this list, where μA(λi) is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues:

• The determinant of A is the product of all its eigenvalues,
$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n.$
• The eigenvalues of the kth power of A, i.e. the eigenvalues of A^k, for any positive integer k, are λ1^k, λ2^k, ..., λn^k.
• The matrix A is invertible if and only if every eigenvalue is nonzero.
• If A is invertible, then the eigenvalues of A^−1 are 1/λ1, 1/λ2, ..., 1/λn and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
• If A is equal to its conjugate transpose A*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
• If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
• If A is unitary, every eigenvalue has absolute value |λi| = 1.

Left and right eigenvectors[edit]

Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n by n matrix A in the defining equation, Equation (1),

$A \mathbf{v} = \lambda \mathbf{v}.$

The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A. In this formulation, the defining equation is

$\mathbf{u} A = \kappa \mathbf{u},$

where κ is a scalar and u is a 1 by n matrix. Any row vector u satisfying this equation is called a left eigenvector of A, and κ is its associated eigenvalue. Taking the transpose of this equation,

$A^{\mathsf{T}} \mathbf{u}^{\mathsf{T}} = \kappa \mathbf{u}^{\mathsf{T}}.$

Comparing this equation to Equation (1), it follows immediately that a left eigenvector of A is the same as the transpose of a right eigenvector of A^T, with the same eigenvalue. Furthermore, since the characteristic polynomial of A^T is the same as the characteristic polynomial of A, the eigenvalues of the left eigenvectors of A are the same as the eigenvalues of the right eigenvectors of A^T.

Diagonalization and the eigendecomposition[edit]

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

$Q = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}.$

Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,

$A Q = \begin{bmatrix} \lambda_1 \mathbf{v}_1 & \lambda_2 \mathbf{v}_2 & \cdots & \lambda_n \mathbf{v}_n \end{bmatrix}.$

With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q.
Then

$A Q = Q \Lambda.$

Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q^−1,

$A = Q \Lambda Q^{-1},$

or by instead left multiplying both sides by Q^−1,

$\Lambda = Q^{-1} A Q.$

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition, and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ, or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P^−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable. A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors, and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

Variational characterization[edit]

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of a Hermitian matrix H is the maximum value of the quadratic form $\mathbf{x}^{*} H \mathbf{x} / \mathbf{x}^{*} \mathbf{x}$. A value of x that realizes that maximum is an eigenvector.
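The eigendecomposition A = QΛQ⁻¹ can be verified numerically; the matrix below is an assumed small example chosen to be diagonalizable:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # assumed diagonalizable example matrix

evals, Q = np.linalg.eig(A)   # columns of Q are eigenvectors of A
Lam = np.diag(evals)          # diagonal matrix of eigenvalues

# Reconstruct A from its eigendecomposition: A = Q Lam Q^-1
A_rebuilt = Q @ Lam @ np.linalg.inv(Q)

# The reverse similarity transformation recovers the diagonal form: Lam = Q^-1 A Q
Lam_rebuilt = np.linalg.inv(Q) @ A @ Q
```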
Matrix examples[edit]

Two-dimensional matrix example[edit]

[Figure: The transformation matrix A = [[2, 1], [1, 2]] preserves the direction of vectors parallel to vλ=1 = [1 −1]T (in purple) and vλ=3 = [1 1]T (in blue). The vectors in red are not parallel to either eigenvector, so their directions are changed by the transformation. The blue vectors after the transformation are three times the length of the original (their eigenvalue is 3), while the lengths of the purple vectors are unchanged (reflecting an eigenvalue of 1).]

Consider the matrix

$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}.$

The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy Equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A,

$\det(A - \lambda I) = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = (\lambda - 1)(\lambda - 3).$

For λ = 1, Equation (2) becomes

$(A - I)\mathbf{v} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$

Any non-zero vector with v1 = −v2 solves this equation. Therefore,

$\mathbf{v}_{\lambda=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$

is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector. For λ = 3, Equation (2) becomes

$(A - 3I)\mathbf{v} = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$

Any non-zero vector with v1 = v2 solves this equation. Therefore,

$\mathbf{v}_{\lambda=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$

is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ = 1 and λ = 3, respectively.

Three-dimensional matrix example[edit]

Consider the matrix

$A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 4 \\ 0 & 4 & 9 \end{bmatrix}.$

The characteristic polynomial of A is

$\det(A - \lambda I) = (2 - \lambda)\left[(3 - \lambda)(9 - \lambda) - 16\right] = -(\lambda - 2)(\lambda - 1)(\lambda - 11).$

The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [1 0 0]T, [0 −2 1]T and [0 1 2]T, or any non-zero multiple thereof.

Three-dimensional matrix example with complex eigenvalues[edit]

Consider the cyclic permutation matrix

$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}.$

This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom.
Its characteristic polynomial is 1 − λ³, whose roots are

$\lambda_1 = 1, \qquad \lambda_2 = -\tfrac{1}{2} + \tfrac{\sqrt{3}}{2} i, \qquad \lambda_3 = -\tfrac{1}{2} - \tfrac{\sqrt{3}}{2} i,$

where $i = \sqrt{-1}$ is the imaginary unit. For the real eigenvalue λ1 = 1, any vector with three equal non-zero entries is an eigenvector. For example,

$A \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix} = 1 \cdot \begin{bmatrix} 5 \\ 5 \\ 5 \end{bmatrix}.$

For the complex conjugate pair of imaginary eigenvalues, note that $\lambda_2 \lambda_3 = 1$ and $\lambda_2^2 = \lambda_3$. Therefore, the other two eigenvectors of A are complex and are

$\mathbf{v}_{\lambda_2} = \begin{bmatrix} 1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} \quad \text{and} \quad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 1 \\ \lambda_3 \\ \lambda_2 \end{bmatrix}$

with eigenvalues λ2 and λ3, respectively. Note that the two complex eigenvectors also appear in a complex conjugate pair,

$\mathbf{v}_{\lambda_2} = \mathbf{v}_{\lambda_3}^{*}.$

Diagonal matrix example[edit]

Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix

$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}.$

The characteristic polynomial of A is

$\det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda),$

which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only non-zero component is in the same row as that diagonal element. In the example, the eigenvalues correspond, respectively, to the eigenvectors

$\mathbf{v}_{\lambda_1} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_2} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$

as well as scalar multiples of these vectors.

Triangular matrix example[edit]

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix

$A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 2 & 3 & 3 \end{bmatrix}.$

The characteristic polynomial of A is

$\det(A - \lambda I) = (1 - \lambda)(2 - \lambda)(3 - \lambda),$

which has the roots λ1 = 1, λ2 = 2, and λ3 = 3, the diagonal elements. These eigenvalues correspond, respectively, to the eigenvectors

$\mathbf{v}_{\lambda_1} = \begin{bmatrix} 2 \\ -2 \\ 1 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_2} = \begin{bmatrix} 0 \\ 1 \\ -3 \end{bmatrix}, \qquad \mathbf{v}_{\lambda_3} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},$

as well as scalar multiples of these vectors.

Matrix with repeated eigenvalues example[edit]

As in the previous example, the lower triangular matrix

$A = \begin{bmatrix} 2 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix}$

has a characteristic polynomial that is the product of its diagonal elements,

$\det(A - \lambda I) = (2 - \lambda)^2 (3 - \lambda)^2.$

The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots.
The sum of the algebraic multiplicities of each distinct eigenvalue is μA = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [0 1 −1 1]T and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [0 0 0 1]T. The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities were defined in an earlier section.

Eigenvalues and eigenfunctions of differential operators[edit]

The definitions of eigenvalue and eigenvector of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation

$D f(t) = \lambda f(t).$

The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

Derivative operator example[edit]

Consider the derivative operator $\tfrac{d}{dt}$ with eigenvalue equation

$\frac{d}{dt} f(t) = \lambda f(t).$

This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function

$f(t) = f(0)\, e^{\lambda t},$

is the eigenfunction of the derivative operator. Note that in this case the eigenfunction is itself a function of its associated eigenvalue. In particular, note that for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples.

General definition[edit]

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces.
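The derivative-operator eigenfunction above can be checked by finite differences; the eigenvalue and evaluation point below are arbitrary assumed values:

```python
import math

lam = 0.7                        # assumed eigenvalue
f = lambda t: math.exp(lam * t)  # candidate eigenfunction f(t) = e^(lam t)

t, h = 1.3, 1e-6
# Central-difference approximation of df/dt at t
deriv = (f(t + h) - f(t - h)) / (2 * h)

# Since d/dt f = lam * f, this ratio should approximate the eigenvalue
ratio = deriv / f(t)
```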
Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,

$T: V \to V.$

We say that a non-zero vector vV is an eigenvector of T if and only if there exists a scalar λK such that

$T(\mathbf{v}) = \lambda \mathbf{v}. \qquad (5)$

This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. Note that T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[33]

Eigenspaces, geometric multiplicity, and the eigenbasis[edit]

Given an eigenvalue λ, consider the set

$E = \{ \mathbf{v} : T(\mathbf{v}) = \lambda \mathbf{v} \},$

which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. By definition of a linear transformation,

$T(\mathbf{x} + \mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y}), \qquad T(\alpha \mathbf{x}) = \alpha T(\mathbf{x}),$

for (x,y) ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely (u,v) ∈ E, then

$T(\mathbf{u} + \mathbf{v}) = \lambda (\mathbf{u} + \mathbf{v}), \qquad T(\alpha \mathbf{v}) = \lambda (\alpha \mathbf{v}).$

So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely (u + v, αv) ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[8][34][35] If that subspace has dimension 1, it is sometimes called an eigenline.[36] The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[8][27] By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1, because every eigenvalue has at least one eigenvector. The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent.
Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[37] Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable. Zero vector as an eigenvector[edit] While the definition of an eigenvector used in this article excludes the zero vector, it is possible to define eigenvalues and eigenvectors such that the zero vector is an eigenvector.[38] Consider again the eigenvalue equation, Equation (5). Define an eigenvalue to be any scalar λK such that there exists a non-zero vector vV satisfying Equation (5). It is important that this version of the definition of an eigenvalue specify that the vector be non-zero, otherwise by this definition the zero vector would allow any scalar in K to be an eigenvalue. Define an eigenvector v associated with the eigenvalue λ to be any vector that, given λ, satisfies Equation (5). Given the eigenvalue, the zero vector is among the vectors that satisfy Equation (5), so the zero vector is included among the eigenvectors by this alternate definition. Spectral theory[edit] If λ is an eigenvalue of T, then the operator (TλI) is not one-to-one, and therefore its inverse (TλI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (TλI) may not have an inverse even if λ is not an eigenvalue. 
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

Associative algebras and representation theory

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.

Dynamic equations

The simplest difference equations have the form

x_t = a_1 x_{t−1} + a_2 x_{t−2} + ⋯ + a_k x_{t−k}.

The characteristic equation of such a difference equation,

λ^k − a_1 λ^{k−1} − a_2 λ^{k−2} − ⋯ − a_{k−1} λ − a_k = 0,

can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations x_{t−1} = x_{t−1}, …, x_{t−k+1} = x_{t−k+1}, giving a k-dimensional system of the first order in the stacked variable vector (x_t, x_{t−1}, …, x_{t−k+1}) in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ_1, …, λ_k for use in the solution equation

x_t = c_1 λ_1^t + ⋯ + c_k λ_k^t.

The eigenvalues of a matrix can be determined by finding the roots of the characteristic polynomial. Explicit algebraic formulas for the roots of a polynomial exist only if the degree is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. It turns out that any polynomial with degree n is the characteristic polynomial of some companion matrix of order n. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods.
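The companion-matrix remark can be illustrated numerically; the cubic below is an assumed example whose roots are known in advance:

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3 (an assumed example).
# One common companion-matrix convention puts the negated coefficients
# of the monic polynomial in the last row.
C = np.array([[0.0,   1.0, 0.0],
              [0.0,   0.0, 1.0],
              [6.0, -11.0, 6.0]])

# The eigenvalues of the companion matrix are exactly the roots of p.
eigs = np.sort(np.linalg.eigvals(C).real)
print(np.round(eigs, 6))   # ≈ [1. 2. 3.]

# Cross-check against numpy's own root finder.
assert np.allclose(eigs, np.sort(np.roots([1.0, -6.0, 11.0, -6.0])))
```

In fact `np.roots` itself works by building a companion matrix and computing its eigenvalues numerically, which is exactly the workaround the Abel–Ruffini theorem forces for degree 5 and above.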
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the QR algorithm in 1961.[39] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm.[citation needed] For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[39]

For the matrix

A = [ 2  1 ]
    [ 1  2 ]

with eigenvalue λ = 1, we can find its eigenvectors by solving the equation (A − I)v = 0, that is

[ 1  1 ] [v₁]   [ 0 ]
[ 1  1 ] [v₂] = [ 0 ].

This matrix equation is equivalent to two linear equations

v₁ + v₂ = 0
v₁ + v₂ = 0,

that is, both equations reduce to the single linear equation v₁ = −v₂. Therefore, any vector of the form (a, −a), for any non-zero real number a, is an eigenvector of A with eigenvalue λ = 1.

The matrix A above has another eigenvalue λ = 3. A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of v₁ − v₂ = 0, that is, any vector of the form (a, a), for any non-zero real number a.

Eigenvalues of geometric transformations

[Table: for each of equal scaling (homothety), unequal scaling (vertical shrink and horizontal stretch of a unit square), rotation (by 50 degrees), horizontal shear mapping, and hyperbolic rotation (squeeze mapping, r = 1.5), the table lists an illustration, the matrix, its eigenvalues with their algebraic and geometric multiplicities, and the eigenvectors; for equal scaling, all non-zero vectors are eigenvectors.]

Note that the characteristic equation for a rotation is a quadratic equation with discriminant D = −4 sin²(θ), which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos θ ± i sin θ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.

A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
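The QR algorithm mentioned earlier can be sketched in its most naive, unshifted form. Practical implementations first reduce the matrix to Hessenberg or tridiagonal form and use shifts for speed and reliability; this toy version is only meant to show the core idea that R·Q is similar to Q·R and so preserves the eigenvalues:

```python
import numpy as np

def qr_eigenvalues(A, iterations=500):
    """Unshifted QR iteration: a minimal sketch of the QR algorithm.
    Each step replaces A by R @ Q, which is similar to A (same
    eigenvalues); for a symmetric matrix with distinct eigenvalue
    magnitudes the iterates converge to a diagonal matrix."""
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.sort(np.diag(A))

A = [[2.0, 1.0],
     [1.0, 2.0]]
print(qr_eigenvalues(A))   # ≈ [1. 3.]
```

The convergence rate is governed by ratios of eigenvalue magnitudes, which is why production code (LAPACK, and hence NumPy itself) adds shifts rather than iterating plainly as above.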
Schrödinger equation

An example of an eigenvalue equation where the transformation is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

H Ψ_E = E Ψ_E,

where H, the Hamiltonian, is a second-order differential operator and Ψ_E, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for Ψ_E within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which Ψ_E and H can be represented as a one-dimensional array (i.e., a vector) and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.

The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |Ψ_E⟩. In this notation, the Schrödinger equation is:

H|Ψ_E⟩ = E|Ψ_E⟩,

where |Ψ_E⟩ is an eigenstate of H and E represents the eigenvalue. H is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H|Ψ_E⟩ is understood to be the vector obtained by application of the transformation H to |Ψ_E⟩.

Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues.
Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.

Geology and glaciology

The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors v₁, v₂, v₃ are ordered by their eigenvalues E₁ ≥ E₂ ≥ E₃;[43] then v₁ is the primary orientation/dip of the clast, v₂ is the secondary and v₃ is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E₁, E₂, and E₃ are dictated by the nature of the sediment's fabric. If E₁ = E₂ = E₃, the fabric is said to be isotropic. If E₁ = E₂ > E₃, the fabric is said to be planar. If E₁ > E₂ > E₃, the fabric is said to be linear.[44]

Principal component analysis

[Figure: PCA of a multivariate Gaussian distribution, with a standard deviation of 3 in one direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix, scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.]

Vibration analysis

[Figure: Mode shape of a tuning fork at eigenfrequency 440.09 Hz.]

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The equation of motion of a mass on a spring, m ẍ = −k x, says that acceleration is proportional to position (i.e., we expect x to be sinusoidal in time). In n dimensions, m becomes a mass matrix M and k a stiffness matrix K. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

K x = ω² M x,

where ω² is the eigenvalue and ω is the (imaginary) angular frequency.
Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of K alone. Furthermore, damped vibration, governed by

M ẍ + C ẋ + K x = 0,

leads to a so-called quadratic eigenvalue problem,

(ω² M + ω C + K) x = 0.

This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.

Eigenfaces

[Figure: Eigenfaces as examples of eigenvectors.]

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[45] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made.

Tensor of moment of inertia

Stress tensor

Graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, a discrete Laplace operator, which is either D − A (sometimes called the combinatorial Laplacian) or I − D^(−1/2) A D^(−1/2) (sometimes called the normalized Laplacian), where D is a diagonal matrix with D_ii equal to the degree of vertex v_i, and in D^(−1/2), the i-th diagonal entry is 1/√(deg(v_i)). The k-th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k-th largest or k-th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

Basic reproduction number

The basic reproduction number (R₀) is a fundamental number in the study of how infectious diseases spread.
If one infectious person is put into a population of completely susceptible people, then R₀ is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t_G, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t_G has passed. R₀ is then the largest eigenvalue of the next generation matrix.[46][47]

See also

1. ^ a b Herstein (1964, pp. 228–229)
2. ^ a b Nering (1970, p. 38)
3. ^ Burden & Faires (1993, p. 401)
4. ^ Betteridge (1965)
5. ^ Press (2007, p. 536)
6. ^ Wolfram Research, Inc. (2010) Eigenvector. Accessed on 2016-04-01.
7. ^ a b Anton (1987, pp. 305–307)
8. ^ a b c d e Nering (1970, p. 107)
9. ^ Note:
• In 1751, Leonhard Euler proved that any body has a principal axis of rotation: Leonhard Euler (presented: October 1751; published: 1760) "Du mouvement d'un corps solide quelconque lorsqu'il tourne autour d'un axe mobile" (On the movement of any solid body while it rotates around a moving axis), Histoire de l'Académie royale des sciences et des belles lettres de Berlin, pp. 176–227. On p. 212, Euler proves that any body contains a principal axis of rotation: "Théorem. 44. De quelque figure que soit le corps, on y peut toujours assigner un tel axe, qui passe par son centre de gravité, autour duquel le corps peut tourner librement & d'un mouvement uniforme." (Theorem. 44. Whatever be the shape of the body, one can always assign to it such an axis, which passes through its center of gravity, around which it can rotate freely and with a uniform motion.)
• In 1755, Johann Andreas Segner proved that any body has three principal axes of rotation: Johann Andreas Segner, Specimen theoriae turbinum [Essay on the theory of tops (i.e., rotating bodies)] (Halle ("Halae"), (Germany): Gebauer, 1755). On p.
XXVIIII (i.e., 29), Segner derives a third-degree equation in t, which proves that a body has three principal axes of rotation. He then states (on the same page): "Non autem repugnat tres esse eiusmodi positiones plani HM, quia in aequatione cubica radices tres esse possunt, et tres tangentis t valores." (However, it is not inconsistent [that there] be three such positions of the plane HM, because in cubic equations, [there] can be three roots, and three values of the tangent t.) • The relevant passage of Segner's work was discussed briefly by Arthur Cayley. See: A. Cayley (1862) "Report on the progress of the solution of certain special problems of dynamics," Report of the Thirty-second meeting of the British Association for the Advancement of Science; held at Cambridge in October 1862, 32 : 184–252 ; see especially pages 225–226. 10. ^ See Hawkins 1975, §2 11. ^ a b c d See Hawkins 1975, §3 12. ^ a b c See Kline 1972, pp. 807–808 13. ^ Augustin Cauchy (1839) "Mémoire sur l'intégration des équations linéaires" (Memoir on the integration of linear equations), Comptes rendus, 8 : 827–830, 845–865, 889–907, 931–937. From p. 827: "On sait d'ailleurs qu'en suivant la méthode de Lagrange, on obtient pour valeur générale de la variable prinicipale une fonction dans laquelle entrent avec la variable principale les racines d'une certaine équation que j'appellerai l'équation caractéristique, le degré de cette équation étant précisément l'order de l'équation différentielle qu'il s'agit d'intégrer." (One knows, moreover, that by following Lagrange's method, one obtains for the general value of the principal variable a function in which there appear, together with the principal variable, the roots of a certain equation that I will call the "characteristic equation", the degree of this equation being precisely the order of the differential equation that must be integrated.) 14. ^ See Kline 1972, p. 673 15. ^ See Kline 1972, pp. 715–716 16. ^ See Kline 1972, pp. 706–707 17. 
^ See Kline 1972, p. 1063 18. ^ See: • David Hilbert (1904) "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen. (Erste Mitteilung)" (Fundamentals of a general theory of linear integral equations. (First report)), Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (News of the Philosophical Society at Göttingen, mathematical-physical section), pp. 49–91. From page 51: "Insbesondere in dieser ersten Mitteilung gelange ich zu Formeln, die die Entwickelung einer willkürlichen Funktion nach gewissen ausgezeichneten Funktionen, die ich Eigenfunktionen nenne, liefern: … (In particular, in this first report I arrive at formulas that provide the [series] development of an arbitrary function in terms of some distinctive functions, which I call eigenfunctions: … ) Later on the same page: "Dieser Erfolg ist wesentlich durch den Umstand bedingt, daß ich nicht, wie es bisher geschah, in erster Linie auf den Beweis für die Existenz der Eigenwerte ausgehe, … " (This success is mainly attributable to the fact that I do not, as it has happened until now, first of all aim at a proof of the existence of eigenvalues, … ) • For the origin and evolution of the terms eigenvalue, characteristic value, etc., see: Earliest Known Uses of Some of the Words of Mathematics (E) 19. ^ See Aldrich 2006 20. ^ Francis, J. G. F. (1961), "The QR Transformation, I (part 1)", The Computer Journal, 4 (3): 265–271, doi:10.1093/comjnl/4.3.265  and Francis, J. G. F. (1962), "The QR Transformation, II (part 2)", The Computer Journal, 4 (4): 332–345, doi:10.1093/comjnl/4.4.332  21. ^ Kublanovskaya, Vera N. (1961), "On some algorithms for the solution of the complete eigenvalue problem", USSR Computational Mathematics and Mathematical Physics, 3: 637–657 . 
Also published in: "О некоторых алгорифмах для решения полной проблемы собственных значений" [On certain algorithms for the solution of the complete eigenvalue problem], Журнал вычислительной математики и математической физики (Journal of Computational Mathematics and Mathematical Physics), 1 (4): 555–570, 1961  23. ^ Cornell University Department of Mathematics (2016) Lower-Level Courses for Freshmen and Sophomores. Accessed on 2016-03-27. 24. ^ University of Michigan Mathematics (2016) Math Course Catalogue Archived 2015-11-01 at the Wayback Machine.. Accessed on 2016-03-27. 25. ^ Press (2007, pp. 38) 26. ^ Fraleigh (1976, p. 358) 27. ^ a b c Golub & Van Loan (1996, p. 316) 28. ^ a b Beauregard & Fraleigh (1973, p. 307) 29. ^ Herstein (1964, p. 272) 30. ^ Nering (1970, pp. 115–116) 31. ^ Herstein (1964, p. 290) 32. ^ Nering (1970, p. 116) 34. ^ Shilov 1977, p. 109 35. ^ Lemma for the eigenspace 36. ^ Schaum's Easy Outline of Linear Algebra, p. 111 40. ^ Graham, D.; Midgley, N. (2000), "Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method", Earth Surface Processes and Landforms, 25 (13): 1473–1477, Bibcode:2000ESPL...25.1473G, doi:10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C  41. ^ Sneed, E. D.; Folk, R. L. (1958), "Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis", Journal of Geology, 66 (2): 114–150, Bibcode:1958JG.....66..114S, doi:10.1086/626490  42. ^ Knox-Robinson, C.; Gardoll, Stephen J. (1998), "GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system", Computers & Geosciences, 24 (3): 243, Bibcode:1998CG.....24..243K, doi:10.1016/S0098-3004(97)00122-2  43. ^ Stereo32 software 46. 
^ Diekmann O, Heesterbeek JA, Metz JA (1990), "On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations", Journal of Mathematical Biology, 28 (4): 365–382, doi:10.1007/BF00178324, PMID 2117040  47. ^ Odo Diekmann; J. A. P. Heesterbeek (2000), Mathematical epidemiology of infectious diseases, Wiley series in mathematical and computational biology, West Sussex, England: John Wiley & Sons  • Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0  • Betteridge, Harold T. (1965), The New Cassell's German Dictionary, New York: Funk & Wagnall, LCCN 58-7924  • Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3  • Curtis, Charles W. (1999), Linear Algebra: An Introductory Approach (4th ed.), Springer, ISBN 0-387-90992-3  • Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1  • Golub, Gene F.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics, 123: 35–65, Bibcode:2000JCoAM.123...35G, doi:10.1016/S0377-0427(00)00413-1  • Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica, 2: 1–29, doi:10.1016/0315-0860(75)90032-4  • Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016  • Korn, Granino A.; Korn, Theresa M. (2000), "Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review", New York: McGraw-Hill (2nd Revised ed.), Dover Publications, Bibcode:1968mhse.book.....K, ISBN 0-486-41147-8  • (in Russian)Pigolkina, T. S.; Shulman, V. S. (1977). "Eigenvalue". In Vinogradov, I. M. Mathematical Encyclopedia. 5. Moscow: Soviet Encyclopedia.  
• Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), ISBN 9780521880688  • Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, arXiv:math/0405323Freely accessible, Bibcode:2004math......5323S, ISBN 5-7477-0099-5  External links[edit] Demonstration applets[edit]
Wave

From Wikipedia, the free encyclopedia

[Figure: Capillary wave (ripple) in water.]

A wave is a disturbance that propagates through space and time, usually with transference of energy. A mechanical wave is a wave that propagates or travels through a medium due to the restoring forces it produces upon deformation. For example, when a sound wave is traveling through the air, air molecules slam into their neighbors, which pushes their neighbors into their neighbors (and so on); but when air molecules collide with their neighbors, they also bounce away from them back in the direction they came from. These collisions provide a restoring force that keeps the molecules from actually traveling with the wave.

Waves travel and transfer energy from one point to another, often with no permanent displacement of the particles of the medium—that is, with little or no associated mass transport; they consist instead of oscillations or vibrations around almost fixed locations. In the picture of water waves, if we imagine a cork on the water, it would bob up and down, staying in about the same place, although the wave itself is moving outward. When we say that a wave carries energy but not mass, we are referring to this fact that even as the wave travels outward from the center (carrying energy of motion), the medium itself does not flow with it.

In many areas of science, the idea of a wave is used metaphorically. If an ocean wave is seen as a prototype wave, it is the basis for the metaphor—the surface of water undulating up and down. However, upon investigating a sound wave, its air does not undulate up and down (as the ocean surface did). Instead, an abstraction is made; if we could look at the air molecules, they would be bunching together (in compressions) and then spreading apart (in rarefactions). Thus, the medium itself is not undulating up and down, but its density is (and its pressure is).
When we speak of waves in physics, therefore, we are often speaking metaphorically, in an abstraction, of a periodic fluctuation of a specific characteristic. The characteristics that oscillate could be density, pressure, electrical or magnetic polarities, or other (sometimes exotic) characteristics.

There are also waves capable of traveling through a vacuum, including electromagnetic radiation. Ultraviolet radiation, infrared radiation, gamma rays, X-rays, and radio waves are examples of these types of waves. They consist of periodic oscillations in electrical and magnetic properties that grow, reach a peak, diminish, go to zero, and then continue these changes in a periodic fashion. As well, it is believed that gravitational waves travel through space; gravitational waves have never been directly detected but are believed to exist. (See gravitational radiation.)

[Figure: Diving grebe creates surface waves.]

Agreeing on a single, all-encompassing definition for the term wave is non-trivial. A vibration can be defined as a back-and-forth motion around a reference value. However, a vibration is not necessarily a wave. Defining the necessary and sufficient characteristics that qualify a phenomenon to be called a wave is, at least, flexible. The term is often understood intuitively as the transport of disturbances in space, not associated with motion of the medium occupying this space as a whole. In a wave, the energy of a vibration is moving away from the source in the form of a disturbance within the surrounding medium (Hall 1980, p. 8). However, this notion is problematic for a standing wave (for example, a wave on a string), where energy is moving in both directions equally, or for electromagnetic/light waves in a vacuum, where the concept of medium does not apply.

There are water waves in the ocean; light waves from the sun; microwaves inside the microwave oven; radio waves transmitted to the radio; and sound waves from the radio, telephone, and voices.
It may be seen that the description of waves is accompanied by a heavy reliance on physical origin when describing any specific instance of a wave process. For example, acoustics is distinguished from optics in that sound waves are related to a mechanical rather than an electromagnetic wave-like transfer/transformation of vibratory energy. Concepts such as mass, momentum, inertia, or elasticity therefore become crucial in describing acoustic (as distinct from optic) wave processes. This difference in origin introduces certain wave characteristics particular to the properties of the medium involved (for example, in the case of air: vortices, radiation pressure, shock waves, etc.; in the case of solids: Rayleigh waves, dispersion, etc.). Other properties, however, although they are usually described in an origin-specific manner, may be generalized to all waves. For such reasons, wave theory represents a particular branch of physics that is concerned with the properties of wave processes independently of their physical origin.[1]

For example, based on the mechanical origin of acoustic waves, there can be a moving disturbance in space–time if and only if the medium involved is neither infinitely stiff nor infinitely pliable. If all the parts making up a medium were rigidly bound, then they would all vibrate as one, with no delay in the transmission of the vibration and therefore no wave motion. On the other hand, if all the parts were independent, then there would not be any transmission of the vibration and again, no wave motion. Although the above statements are meaningless in the case of waves that do not require a medium, they reveal a characteristic that is relevant to all waves regardless of origin: within a wave, the phase of a vibration (that is, its position within the vibration cycle) is different for adjacent points in space because the vibration reaches these points at different times.
Similarly, wave processes revealed from the study of waves other than sound waves can be significant to the understanding of sound phenomena. A relevant example is Thomas Young's principle of interference (Young, 1802, in Hunt 1992, p. 132). This principle was first introduced in Young's study of light and, within some specific contexts (for example, scattering of sound by sound), is still a researched area in the study of sound.

Periodic waves are characterized by crests (highs) and troughs (lows), and may usually be categorized as either longitudinal or transverse. Transverse waves are those with vibrations perpendicular to the direction of the propagation of the wave; examples include waves on a string, and electromagnetic waves. Longitudinal waves are those with vibrations parallel to the direction of the propagation of the wave; examples include most sound waves.

When an object bobs up and down on a ripple in a pond, it experiences an orbital trajectory because ripples are not simple transverse sinusoidal waves.

[Figure: A = In deep water. B = In shallow water. The elliptical movement of a surface particle becomes flatter with decreasing depth. 1 = Progression of wave. 2 = Crest. 3 = Trough.]

All waves have common behavior under a number of standard situations.
All waves can experience the following:

• Reflection — change in wave direction after it strikes a reflective surface, causing the angle the wave makes with the reflective surface in relation to a normal line to the surface to equal the angle the reflected wave makes with the same normal line
• Refraction — change in wave direction because of a change in the wave's speed from entering a new medium
• Diffraction — bending of waves as they interact with obstacles in their path, which is more pronounced for wavelengths on the order of the diffracting object size
• Interference — superposition of two waves that come into contact with each other (collide)
• Dispersion — wave splitting up by frequency
• Rectilinear propagation — the movement of light waves in a straight line

Longitudinal waves such as sound waves do not exhibit polarization. For these waves the direction of oscillation is along the direction of travel.

[Figure: An ocean surface wave crashing into rocks.]

Mathematical description

Sinusoidal waves

Mathematically, the most basic wave is the sine wave (or harmonic wave or sinusoid), with an amplitude u described by the equation:

u(x, t) = A cos(kx − ωt + φ),

where A is the semi-amplitude of the wave, half the peak-to-peak amplitude, often called simply the amplitude – the maximum distance from the highest point of the disturbance in the medium (the crest) to the equilibrium point during one wave cycle; x is the space coordinate; t is the time coordinate; k is the wavenumber (spatial frequency); ω is the temporal frequency; and φ is a phase offset.
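The quantities just defined can be computed for a concrete wave; the speed and frequency below are assumed illustrative values for a sound wave in air, and the standard relation λ = v/f is used:

```python
import math

# Assumed illustrative values: a 440 Hz sound wave in air at v = 343 m/s.
v = 343.0   # wave speed, m/s
f = 440.0   # frequency, Hz

T = 1.0 / f                      # period, s
wavelength = v / f               # lambda = v / f, m
k = 2.0 * math.pi / wavelength   # wavenumber k = 2*pi/lambda, rad/m
omega = 2.0 * math.pi * f        # angular frequency omega = 2*pi*f, rad/s

# Consistency check: the phase velocity omega / k recovers the wave speed.
assert abs(omega / k - v) < 1e-9
print(round(wavelength, 3), "m")
```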
The units of the semi-amplitude depend on the type of wave — waves on a string have an amplitude expressed as a distance (meters), sound waves as pressure (pascals), and electromagnetic waves as the amplitude of the electric field (volts/meter).

The wavelength (denoted as λ) is the distance between two sequential crests (or troughs), and generally is measured in meters. The wavelength is related to the wavenumber by

k = 2π/λ.

Sine waves correspond to simple harmonic motion. The period T is the time for one complete cycle of an oscillation of a wave. The frequency f (also frequently denoted as ν) is the number of periods per unit time (per second) and is measured in hertz. These are related by:

f = 1/T.

The angular frequency ω represents the frequency in radians per second. It is related to the frequency by

ω = 2πf = 2π/T.

[Figure: Various local wavelengths on a crest-to-crest basis in an ocean wave approaching shore.[3]]

The wavelength λ of a sinusoidal waveform traveling at constant speed v is given by:[4]

λ = v/f.

[Figure: Refraction: when a plane wave encounters a medium in which it has a slower speed, the wavelength decreases, and the direction adjusts accordingly.]

Although arbitrary wave shapes will propagate unchanged in lossless linear time-invariant systems, in the presence of dispersion the sine wave is the unique shape that will propagate unchanged but for phase and amplitude, making it easy to analyze.[5] Due to the Kramers–Kronig relations, a linear medium with dispersion also exhibits loss, so the sine wave propagating in a dispersive medium is attenuated in certain frequency ranges that depend upon the medium.[6] The sine function is periodic, so the sine wave or sinusoid has a wavelength in space and a period in time.[7][8] The sinusoid is defined for all times and distances, whereas in physical situations we usually deal with waves that exist for a limited span in space and duration in time.
Fortunately, an arbitrary wave shape can be decomposed into an infinite set of sinusoidal waves by the use of Fourier analysis. As a result, the simple case of a single sinusoidal wave can be applied to more general cases.[9][10] In particular, many media are linear, or nearly so, so the calculation of arbitrary wave behavior can be found by adding up responses to individual sinusoidal waves using the superposition principle to find the solution for a general waveform.[11] When a medium is nonlinear, the response to complex waves cannot be determined from a sine-wave decomposition.

The wave equation

The wave equation is a partial differential equation that describes the evolution of a wave over time in a medium where the wave propagates at the same speed independent of wavelength (no dispersion), and independent of amplitude (linear media, not nonlinear).[12] General solutions are based upon Duhamel's principle.[13]

In particular, consider the wave equation in one dimension, for example, as applied to a string. Suppose a one-dimensional wave is traveling along the x axis with velocity v and amplitude u (which generally depends on both x and t); the wave equation is

(1/v²) ∂²u/∂t² = ∂²u/∂x².

The velocity v will depend on the medium through which the wave is moving.

The general solution for the wave equation in one dimension was given by d'Alembert; it is known as d'Alembert's formula:[14]

u(x, t) = F(x − vt) + G(x + vt).

This formula represents two shapes traveling through the medium in opposite directions: F in the positive x direction, and G in the negative x direction, of arbitrary functional shapes F and G.
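The d'Alembert form can be checked numerically: with assumed Gaussian pulse shapes for F and G, finite differences confirm that u(x, t) = F(x − vt) + G(x + vt) satisfies the one-dimensional wave equation at a sample point:

```python
import numpy as np

# Assumed pulse shapes: two Gaussians traveling in opposite directions.
v = 2.0
F = lambda s: np.exp(-s**2)
G = lambda s: 0.5 * np.exp(-(s - 1.0)**2)

# d'Alembert's solution u(x, t) = F(x - v t) + G(x + v t).
u = lambda x, t: F(x - v * t) + G(x + v * t)

# Check (1/v^2) u_tt = u_xx at a sample point using second-order
# central differences.
x0, t0, h = 0.3, 0.4, 1e-4
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2

assert abs(u_tt / v**2 - u_xx) < 1e-5
```

Any sufficiently smooth F and G would pass the same check, which is exactly the point of d'Alembert's formula: the shapes are arbitrary, only their arguments x ∓ vt matter.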
Spatial and temporal relationships

[Figure: Wavelength of an irregular periodic waveform at a particular moment in time, based upon the crest-to-crest or trough-to-trough definition of λ.[15]]

In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (sometimes called the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.[15]

To summarize: "A function F(x) is periodic if F(x + ξ) = F(x), for all x. The constant ξ is called a period of the function. The smallest such period is called the fundamental period or simply the period of F. If x represents a space coordinate, then the period may instead be called the wavelength and is often written λ; if it represents the time coordinate, the period might instead be denoted by T." Flowers, p. 473[17]

The Schrödinger equation

The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle. Quantum mechanics also describes particle properties that other waves, such as light and sound, have on the atomic scale and below.

Wave packets and the de Broglie wavelength

Louis de Broglie postulated that all particles with momentum have a wavelength

λ = h/p,

where h is Planck's constant and p is the magnitude of the momentum of the particle. For example, the wavefunction of a free particle might take the form

ψ(r, t=0) = A e^(i k·r),

where the wavelength is determined by the wave vector k as:

λ = 2π/k,

and the momentum by:

p = ħk.
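As a quick numerical illustration of λ = h/p, here is a sketch for an electron; the speed chosen is an assumed, non-relativistic value:

```python
# Assumed illustrative case: de Broglie wavelength lambda = h / p for a
# non-relativistic electron moving at 10^6 m/s.
h = 6.62607015e-34       # Planck's constant, J*s
m_e = 9.1093837015e-31   # electron rest mass, kg

v = 1.0e6                # assumed speed, m/s
p = m_e * v              # momentum p = m*v
wavelength = h / p       # de Broglie wavelength, m

print(f"{wavelength:.3e} m")   # roughly 7.3e-10 m, i.e. atomic scale
```

The result being comparable to atomic spacings is why electron diffraction by crystals is observable at such speeds.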
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet.[20] Gaussian wave packets also are used to analyze water waves.[21] For example, a Gaussian wavefunction ψ might take the form:[22]

\psi(x,\ t=0) = A\ \exp \left( -\frac{x^2}{2\sigma^2} + i k_0 x \right) \ ,

at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as λ0 = 2π/k0. It is well known from the theory of Fourier analysis,[23] or from the Heisenberg uncertainty principle (in the case of quantum mechanics), that a spread of wavelengths is necessary to produce a localized wave packet: the more localized the envelope, the larger the required spread in wavelengths. The Fourier transform of a Gaussian is itself a Gaussian.[24] Given the Gaussian

f(x) = e^{-x^2 / (2\sigma^2)} \ ,

the Fourier transform is

\tilde{f} (k) = \sigma e^{-\sigma^2 k^2 / 2} \ .

The Gaussian in space therefore is made up of waves:

f(x) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \ \tilde{f} (k) e^{ikx} \ dk \ .

Illustration of the envelope (the slowly varying red curve) of an amplitude-modulated wave. The fast-varying blue curve is the carrier wave, which is being modulated.

Modulated waves

A modulated wave can be written in the form

u(x, \ t) = A(x, \ t)\sin (kx - \omega t + \phi) \ ,

where A(x, t) is the amplitude envelope of the wave, k is the wave number and φ is the phase. If the group velocity vg (see below) is wavelength-independent, this equation can be simplified as:[28]

u(x, \ t) = A(x - v_g \ t)\sin (kx - \omega t + \phi) \ ,

showing that the envelope moves with velocity vg and retains its shape.
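The Gaussian transform pair quoted above can be checked numerically under the stated convention f(x) = (1/√(2π)) ∫ f̃(k) e^{ikx} dk, which implies f̃(k) = (1/√(2π)) ∫ f(x) e^{−ikx} dx. This is an illustrative check, not part of the original article; the value of σ is an arbitrary choice.

```python
import numpy as np

# Numerical check of the transform pair
#   f(x) = exp(-x^2 / (2 sigma^2))  <-->  f~(k) = sigma exp(-sigma^2 k^2 / 2)
# under the convention f~(k) = (1/sqrt(2 pi)) Integral f(x) exp(-i k x) dx.
sigma = 1.3
x = np.linspace(-40.0, 40.0, 200001)   # wide grid: f decays to ~0 at the ends
dx = x[1] - x[0]
f = np.exp(-x**2 / (2.0 * sigma**2))

for k in (0.0, 0.5, 2.0):
    numeric = np.sum(f * np.exp(-1j * k * x)) * dx / np.sqrt(2.0 * np.pi)
    exact = sigma * np.exp(-sigma**2 * k**2 / 2.0)
    print(k, abs(numeric - exact))     # differences at round-off level
```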
Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.[28][29]

Phase velocity and group velocity

A sinusoidal wave can be written in complex exponential form,

\psi (x, \ t) = A e^{i \left( kx - \omega t \right)} \ ,

which can be related to the usual sine and cosine forms using Euler's formula. Rewriting the argument, kx − ωt = (2π/λ)(x − vt), makes clear that this expression describes a vibration of wavelength λ = 2π/k traveling in the x-direction with a constant phase velocity vp:[30]

v_p = \frac { \omega }{ k } \ .

A wave packet, by contrast, is a superposition of many such waves,

\psi (x, \ t) = \int_{-\infty} ^{\infty}\ dk_1 \ A(k_1)\ e^{i\left(k_1x - \omega t \right)} \ ,

with ω = ω(k1) given by the dispersion relation of the medium, and with a complex amplitude of the form

A(k_1) = A_o (k_1)\ e^ {i \alpha (k_1)} \ , \qquad A_o (k_1) = N\ e^{-\sigma^2 (k_1-k)^2 / 2} \ .

The exponential function inside the integral for ψ oscillates rapidly with its argument, say φ(k1), and where it varies rapidly, the exponentials cancel each other out, interfere destructively, and contribute little to ψ.[30] However, an exception occurs at the location where the argument φ of the exponential varies slowly. (This observation is the basis for the method of stationary phase for evaluation of such integrals.[32]) The condition for φ to vary slowly is that its rate of change with k1 be small; this rate of variation is:[30]

\left . \frac{d \varphi }{d k_1} \right | _{k_1 = k } = x - t \left . \frac{d \omega}{dk_1}\right | _{k_1 = k } +\left . \frac{d \alpha}{d k_1}\right | _{k_1 = k } \ ,

which vanishes (apart from a fixed initial offset set by dα/dk1) when x moves with the group velocity

v_g = \frac{d \omega}{dk} \ .

The group velocity therefore depends upon the dispersion relation connecting ω and k. For example, in quantum mechanics the energy of a free particle is E = ħω = (ħk)2/(2m).
Consequently,

v_g = \frac{d \omega}{dk} = \frac {\hbar k}{m} \ ,

showing that the velocity of a localized particle in quantum mechanics is its group velocity.[30] Because the group velocity varies with k, the shape of the wave packet broadens with time, and the particle becomes less localized.[33] In other words, the constituent waves of the wave packet travel at rates that vary with their wavelength, so some move faster than others, and they cannot maintain the same interference pattern as the wave propagates.

Standing wave

The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the "nut", whereupon the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is on average no net propagation of energy.

Also see: Acoustic resonance, Helmholtz resonator, and organ pipe

Propagation through strings

The speed of a transverse wave traveling along a vibrating string is proportional to the square root of the tension T divided by the linear mass density μ:

v=\sqrt{\frac{T}{\mu}}.

Transmission medium

The medium that carries a wave is called a transmission medium. It can be classified into one or more of the following categories:

• An isotropic medium if its physical properties are the same in different directions

WKB method

References

1. ^ Lev A. Ostrovsky & Alexander I. Potapov (2002). Modulated waves: theory and application. Johns Hopkins University Press. ISBN 0801873258. http://www.amazon.com/gp/product/0801873258.  2. ^ Lighthill, M. J., Whitham, G. B., 1955. On kinematic waves. II. A theory of traffic flow on long crowded roads.
Proceedings of the Royal Society A 229, 281–345, and Richards [Richards, P.I., 1956. Shockwaves on the highway. Operations Research 4, 42–51]. 3. ^ a b Paul R Pinet. op. cit. p. 242. ISBN 0763759937. http://books.google.com/books?id=6TCm8Xy-sLUC&pg=PA242.  4. ^ David C. Cassidy, Gerald James Holton, Floyd James Rutherford (2002). Understanding physics. Birkhäuser. pp. 339 ff. ISBN 0387987568. http://books.google.com/books?id=rpQo7f9F1xUC&pg=PA340.  5. ^ Mischa Schwartz, William R. Bennett, and Seymour Stein (1995). Communication Systems and Techniques. John Wiley and Sons. p. 208. ISBN 9780780347151. http://books.google.com/books?id=oRSHWmaiZwUC&pg=PA208&dq=sine+wave+medium++linear+time-invariant&lr=&as_brr=3&ei=u69cSpuKNZDKkASph-GaBw.  6. ^ See Eq. 5.10 and discussion in A. G. G. M. Tielens (2005). The physics and chemistry of the interstellar medium. Cambridge University Press. pp. 119 ff. ISBN 0521826349. http://books.google.com/books?id=wMnvg681JXMC&pg=PA119. ; Eq. 6.36 and associated discussion in Otfried Madelung (1996). Introduction to solid-state theory (3rd ed.). Springer. pp. 261 ff. ISBN 354060443X. http://books.google.com/books?id=yK_J-3_p8_oC&pg=PA261. ; and Eq. 3.5 in F Mainardi (1996). "Transient waves in linear viscoelastic media". in Ardéshir Guran, A. Bostrom, Herbert Überall, O. Leroy. Acoustic Interactions with Submerged Elastic Structures: Nondestructive testing, acoustic wave propagation and scattering. World Scientific. p. 134. ISBN 9810242719. http://books.google.com/books?id=UfSk45nCVKMC&pg=PA134.  7. ^ Aleksandr Tikhonovich Filippov (2000). The versatile soliton. Springer. p. 106. ISBN 0817636358. http://books.google.com/books?id=TC4MCYBSJJcC&pg=PA106.  8. ^ Seth Stein, Michael E. Wysession (2003). An introduction to seismology, earthquakes, and earth structure. Wiley-Blackwell. p. 31. ISBN 0865420785. http://books.google.com/books?id=Kf8fyvRd280C&pg=PA31.  9. ^ Seth Stein, Michael E. Wysession. op. cit. p. 32. ISBN 0865420785.
http://books.google.com/books?id=Kf8fyvRd280C&pg=PA32.  10. ^ Kimball A. Milton, Julian Seymour Schwinger (2006). Electromagnetic Radiation: Variational Methods, Waveguides and Accelerators. Springer. p. 16. ISBN 3540293043. http://books.google.com/books?id=x_h2rai2pYwC&pg=PA16. "Thus, an arbitrary function f(r, t) can be synthesized by a proper superposition of the functions exp[i (k·r−ωt)]…"  11. ^ Raymond A. Serway and John W. Jewett (2005). "§14.1 The Principle of Superposition". Principles of physics (4th ed.). Cengage Learning. p. 433. ISBN 053449143X. http://books.google.com/books?id=1DZz341Pp50C&pg=PA433.  12. ^ Michael A. Slawinski, Klause Helbig (2003). "Wave equations". Seismic waves and rays in elastic media. Elsevier. pp. 131 ff. ISBN 0080439306. http://books.google.com/books?id=s7bp6ezoRhcC&pg=PA134.  13. ^ Jalal M. Ihsan Shatah, Michael Struwe (2000). "The linear wave equation". Geometric wave equations. American Mathematical Society Bookstore. pp. 37 ff. ISBN 0821827499. http://books.google.com/books?id=zsasG2axbSoC&pg=PA37.  14. ^ Karl F Graaf (1991). Wave motion in elastic solids (Reprint of Oxford 1975 ed.). Dover. pp. 13–14. http://books.google.com/books?id=5cZFRwLuhdQC&printsec=frontcover.  15. ^ a b Alexander McPherson (2009). "Waves and their properties". Introduction to Macromolecular Crystallography (2 ed.). Wiley. p. 77. ISBN 0470185902. http://books.google.com/books?id=o7sXm2GSr9IC&pg=PA77.  16. ^ Louis Lyons (1998). All you wanted to know about mathematics but were afraid to ask. Cambridge University Press. pp. 128 ff. ISBN 052143601X. http://books.google.com/books?id=WdPGzHG3DN0C&pg=PA128.  17. ^ Brian Hilton Flowers (2000). An introduction to numerical methods in C++ (2nd ed.). Oxford University Press. p. 473. ISBN 0198506937. http://books.google.com/books?id=weYj75E_t6MC&pg=RA1-PA473.  18. ^ A. T. Fromhold (1991). "Wave packet solutions". Quantum Mechanics for Applied Physics and Engineering (Reprint of Academic Press 1981 ed.). 
Courier Dover Publications. pp. 59 ff. ISBN 0486667413. http://books.google.com/books?id=3SOwc6npkIwC&pg=PA59. "(p. 61) …the individual waves move more slowly than the packet and therefore pass back through the packet as it advances"  19. ^ Ming Chiang Li (1980). "Electron Interference". in L. Marton & Claire Marton. Advances in Electronics and Electron Physics. 53. Academic Press. p. 271. ISBN 0120146533. http://books.google.com/books?id=g5q6tZRwUu4C&pg=PA271.  20. ^ See for example Walter Greiner, D. Allan Bromley (2007). Quantum Mechanics (2 ed.). Springer. p. 60. ISBN 3540674586. http://books.google.com/books?id=7qCMUfwoQcAC&pg=PA60.  and John Joseph Gilman (2003). Electronic basis of the strength of materials. Cambridge University Press. p. 57. ISBN 0521620058. http://books.google.com/books?id=YWd7zHU0U7UC&pg=PA57. ,Donald D. Fitts (1999). Principles of quantum mechanics. Cambridge University Press. p. 17. ISBN 0521658411. http://books.google.com/books?id=8t4DiXKIvRgC&pg=PA17. . 21. ^ Chiang C. Mei (1989). The applied dynamics of ocean surface waves (2nd ed.). World Scientific. p. 47. ISBN 9971507897. http://books.google.com/books?id=WHMNEL-9lqkC&pg=PA47.  22. ^ Walter Greiner, D. Allan Bromley (2007). Quantum Mechanics (2nd ed.). Springer. p. 60. ISBN 3540674586. http://books.google.com/books?id=7qCMUfwoQcAC&pg=PA60.  23. ^ Siegmund Brandt, Hans Dieter Dahmen (2001). The picture book of quantum mechanics (3rd ed.). Springer. p. 23. ISBN 0387951415. http://books.google.com/books?id=VM4GFlzHg34C&pg=PA23.  24. ^ Cyrus D. Cantrell (2000). Modern mathematical methods for physicists and engineers. Cambridge University Press. p. 677. ISBN 0521598273. http://books.google.com/books?id=QKsiFdOvcwsC&pg=PA677.  25. ^ Christian Jirauschek (2005). FEW-cycle Laser Dynamics and Carrier-envelope Phase Detection. Cuvillier Verlag. p. 9. ISBN 3865374190. http://books.google.com/books?id=6kOoT_AX2CwC&pg=PA9.  26. ^ Fritz Kurt Kneubühl (1997). Oscillations and waves. Springer. p. 
365. ISBN 354062001X. http://books.google.com/books?id=geYKPFoLgoMC&pg=PA365.  27. ^ Mark Lundstrom (2000). Fundamentals of carrier transport. Cambridge University Press. p. 33. ISBN 0521631343. http://books.google.com/books?id=FTdDMtpkSkIC&pg=PA33.  28. ^ a b Chin-Lin Chen (2006). "§13.7.3 Pulse envelope in nondispersive media". Foundations for guided-wave optics. Wiley. p. 363. ISBN 0471756873. http://books.google.com/books?id=LxzWPskhns0C&pg=PA363.  29. ^ Stefano Longhi, Davide Janner (2008). "Localization and Wannier wave packets in photonic crystals". in Hugo E. Hernández-Figueroa, Michel Zamboni-Rached, Erasmo Recami. Localized Waves. Wiley-Interscience. p. 329. ISBN 0470108851. http://books.google.com/books?id=xxbXgL967PwC&pg=PA329.  30. ^ a b c d Albert Messiah (1999). Quantum Mechanics (Reprint of two-volume Wiley 1958 ed.). Courier Dover. pp. 50–52. ISBN 9780486409245. http://books.google.com/books?id=mwssSDXzkNcC&pg=PA52&dq=intitle:quantum+inauthor:messiah+%22group+velocity%22+%22center+of+the+wave+packet%22&lr=&as_brr=0&ei=RSlaSq2qPIP-lQSU_dDeAQ.  31. ^ See, for example, Eq. 2(a) in Walter Greiner, D. Allan Bromley (2007). Quantum Mechanics: An introduction (2nd ed.). Springer. pp. 60–61. ISBN 3540674586. http://books.google.com/books?id=7qCMUfwoQcAC&pg=PA61.  32. ^ John W. Negele, Henri Orland (1998). Quantum many-particle systems (Reprint in Advanced Book Classics ed.). Westview Press. p. 121. ISBN 0738200522. http://books.google.com/books?id=mx5CfeeEkm0C&pg=PA121.  33. ^ Donald D. Fitts (1999). Principles of quantum mechanics: as applied to chemistry and chemical physics. Cambridge University Press. pp. 15 ff. ISBN 0521658411. http://books.google.com/books?id=8t4DiXKIvRgC&pg=PA15.  See also • Campbell, M. and Greated, C. (1987). The Musician’s Guide to Acoustics. New York: Schirmer Books. • French, A.P. (1971). Vibrations and Waves (M.I.T. Introductory physics series). Nelson Thornes. ISBN 0-393-09936-9. OCLC 163810889.  • Hall, D. E.
(1980), Musical Acoustics: An Introduction, Belmont, California: Wadsworth Publishing Company, ISBN 0534007589. • Hunt, F. V. (1992) [1966], Origins in Acoustics, New York: Acoustical Society of America Press, http://asa.aip.org/publications.html#pub17. • Ostrovsky, L. A.; Potapov, A. S. (1999), Modulated Waves, Theory and Applications, Baltimore: The Johns Hopkins University Press, ISBN 0801858704. External links Velocities of Waves: Phase velocity | Group velocity | Front velocity | Signal velocity
(Compare with Theorem 1 of Lecture 3.) as desired. \Box Exercise 3. Give examples to show that the quantity \mu(E)^2 in the conclusion of Theorem 1 cannot be replaced by any larger quantity in general, regardless of the actual value of \mu(E). (Hint: use a Bernoulli system example.) \diamond Read the rest of this entry »

In this lecture, we move away from recurrence, and instead focus on the structure of topological dynamical systems. One remarkable feature of this subject is that starting from fairly "soft" notions of structure, such as topological structure, one can extract much more "hard" or "rigid" notions of structure, such as geometric or algebraic structure. The key concept needed to capture this structure is that of an isometric system, or more generally an isometric extension, which we shall discuss in this lecture. As an application of this theory we characterise the distribution of polynomial sequences in tori (a baby case of a variant of Ratner's theorem due to (Leon) Green, which we will cover later in this course). Read the rest of this entry »

In the previous lecture, we established single recurrence properties for both open sets and for sequences inside a topological dynamical system (X, {\mathcal F}, T). In this lecture, we generalise these results to multiple recurrence. More precisely, we shall show

Theorem 1. (Multiple recurrence in open covers) Let (X,{\mathcal F},T) be a topological dynamical system, and let (U_\alpha)_{\alpha \in A} be an open cover of X. Then there exists U_\alpha such that for every k \geq 1, we have U_\alpha \cap T^{-r} U_\alpha \cap \ldots \cap T^{-(k-1)r} U_\alpha \neq \emptyset for infinitely many r.

Note that this theorem includes Theorem 1 from the previous lecture as the special case k=2.
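As a small computational aside (not part of the lecture), the finitary combinatorial content behind multiple recurrence can be tasted by brute force: every 2-colouring of {1, …, 9} contains a monochromatic 3-term arithmetic progression, while {1, …, 8} admits a colouring with none.

```python
from itertools import product

# Brute-force check: every 2-colouring of {1,...,9} contains a monochromatic
# 3-term arithmetic progression (so the van der Waerden number W(3, 2) <= 9),
# while {1,...,8} admits a colouring with no such progression.
def has_mono_3ap(colouring):
    """colouring[i] is the colour of the integer i + 1."""
    n = len(colouring)
    for a in range(1, n + 1):                  # first term of the AP
        for d in range(1, (n - a) // 2 + 1):   # common difference
            if colouring[a - 1] == colouring[a + d - 1] == colouring[a + 2 * d - 1]:
                return True
    return False

assert all(has_mono_3ap(c) for c in product((0, 1), repeat=9))
assert not has_mono_3ap((0, 1, 1, 0, 0, 1, 1, 0))  # witness that 8 is not enough
```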
This theorem is also equivalent to the following well-known combinatorial result:

Theorem 2. (van der Waerden's theorem) Suppose the integers {\Bbb Z} are finitely coloured. Then one of the colour classes contains arbitrarily long arithmetic progressions.

Exercise 1. Show that Theorem 1 and Theorem 2 are equivalent. \diamond

Exercise 2. Show that Theorem 2 fails if "arbitrarily long" is replaced by "infinitely long". Deduce that a similar strengthening of Theorem 1 also fails. \diamond

Exercise 3. Use Theorem 2 to deduce a finitary version: given any positive integers m and k, there exists an integer N such that whenever \{1,\ldots,N\} is coloured into m colour classes, one of the colour classes contains an arithmetic progression of length k. (Hint: use a "compactness and contradiction" argument, as in my article on hard and soft analysis.) \diamond

We also have a stronger version of Theorem 1:

Theorem 3. (Multiple Birkhoff recurrence theorem) Let (X,{\mathcal F},T) be a topological dynamical system. Then for any k \geq 1 there exists a point x \in X and a sequence r_j \to \infty of integers such that T^{i r_j} x \to x as j \to \infty for all 0 \leq i \leq k-1.

These results already have some application to equidistribution of explicit sequences. Here is a simple example (which is also a consequence of Weyl's equidistribution theorem):

Corollary 1. Let \alpha be a real number. Then there exists a sequence r_j \to \infty of integers such that \hbox{dist}(r_j^2 \alpha,{\Bbb Z}) \to 0 as j \to \infty.

Proof. Consider the skew shift system X = ({\Bbb R}/{\Bbb Z})^2 with T(x,y) := (x+\alpha,y+x). By Theorem 3, there exists (x,y) \in X and a sequence r_j \to \infty such that T^{r_j}(x,y) and T^{2r_j}(x,y) both converge to (x,y). If we then use the easily verified identity (x,y) - 2T^{r_j}(x,y) + T^{2r_j}(x,y) = (0, r_j^2 \alpha) (1) we obtain the claim. \Box

Exercise 4. Use Theorem 1 or Theorem 2 in place of Theorem 3 to give an alternate derivation of Corollary 1.
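As a numerical aside (not part of the lecture), one can watch Corollary 1 in action for α = √2 by searching for integers r that make r²α unusually close to an integer. Double-precision arithmetic limits the attainable accuracy here, so this is only an illustration.

```python
from math import sqrt

# Numerical illustration of Corollary 1: for alpha = sqrt(2), search for
# integers r with dist(r^2 * alpha, Z) small.  Double precision limits
# r^2 * alpha to roughly 1e-8 accuracy for r <= 10^4, ample for this demo.
alpha = sqrt(2.0)

def dist_to_Z(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

best, record = 1.0, []
for r in range(1, 10001):
    d = dist_to_Z(r * r * alpha)
    if d < best:
        best = d
        record.append((r, d))

print(record[-1])   # the best r found and its distance to the integers
```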
\diamond

As in the previous lecture, we will give both a traditional topological proof and an ultrafilter-based proof of Theorem 1 and Theorem 3; the reader is invited to see how the various proofs are ultimately equivalent to each other. Read the rest of this entry »

This weekend I was (once again) in San Diego, this time for the Southern California Analysis and PDE (SCAPDE) meeting. I gave a talk on "The asymptotic behaviour of large data solutions to NLS", which is based on two of my previous papers on what solutions to focusing nonlinear Schrödinger equations behave like as time goes to infinity. (Note that this is a specialist conference, and this talk will be a bit more technical than some of the general-audience talks that I have blogged about previously.) Read the rest of this entry »

We now begin the study of recurrence in topological dynamical systems (X, {\mathcal F}, T) – how often a non-empty open set U in X returns to intersect itself, or how often a point x in X returns to be close to itself. Not every set or point needs to return to itself; consider for instance what happens to the shift x \mapsto x+1 on the compactified integers \{-\infty\} \cup {\Bbb Z} \cup \{+\infty\}. Nevertheless, we can always show that at least one set (from any open cover) returns to itself:

Theorem 1. (Simple recurrence in open covers) Let (X,{\mathcal F},T) be a topological dynamical system, and let (U_\alpha)_{\alpha \in A} be an open cover of X. Then there exists an open set U_\alpha in this cover such that U_\alpha \cap T^n U_\alpha \neq \emptyset for infinitely many n.

Proof. By compactness of X, we can refine the open cover to a finite subcover. Now consider an orbit T^{\Bbb Z} x = \{ T^n x: n \in {\Bbb Z} \} of some arbitrarily chosen point x \in X.
By the infinite pigeonhole principle, one of the sets U_\alpha must contain an infinite number of the points T^n x counting multiplicity; in other words, the recurrence set S := \{ n: T^n x \in U_\alpha \} is infinite. Letting n_0 be an arbitrary element of S, we thus conclude that U_\alpha \cap T^{n_0-n} U_\alpha contains T^{n_0} x for every n \in S, and the claim follows. \Box

Exercise 1. Conversely, use Theorem 1 to deduce the infinite pigeonhole principle (i.e. that whenever {\Bbb Z} is coloured into finitely many colours, one of the colour classes is infinite). (Hint: look at the orbit closure of c inside A^{\Bbb Z}, where A is the set of colours and c: {\Bbb Z} \to A is the colouring function.) \diamond

Now we turn from recurrence of sets to recurrence of individual points, which is somewhat more difficult, and highlights the role of minimal dynamical systems (as introduced in the previous lecture) in the theory. We will approach the subject in two (largely equivalent) ways, the first being the more traditional "epsilon and delta" approach, and the second using the Stone-Čech compactification \beta {\Bbb Z} of the integers (i.e. ultrafilters). Read the rest of this entry »
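As a computational aside (not part of the lecture), single recurrence is easy to observe for the circle rotation T: x ↦ x + α (mod 1): the orbit of 0 returns within any ε of 0 infinitely often, with return times governed by the continued-fraction denominators of α.

```python
from math import sqrt

# Recurrence for the circle rotation T: x -> x + alpha (mod 1), illustrated
# numerically.  The orbit of 0 returns within eps of 0 for infinitely many n;
# for alpha = sqrt(2) the return times track the continued-fraction
# denominators 1, 2, 5, 12, 29, 70, ... of sqrt(2).
alpha = sqrt(2.0)
eps = 0.01
returns = [n for n in range(1, 5000)
           if min(n * alpha % 1.0, 1.0 - n * alpha % 1.0) < eps]
print(returns[:5])   # first return time is n = 70, a convergent denominator
```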
Thursday, May 29, 2008 philosophy science and religion LoneRubberDragon and cooperative contributors WIKIPEDIA TEXTS: [0] CONTENTS [[1] through [20]] Videos, images, and writings (C) Copyright, [LoneRubberDragon / RubberCraft / DuRAGON SeTO RuMi / Draashek'gaons / SET,236,926,765,732,171], Anno Domini 2007, 2008 [1] What if there is no God, as Science often says? AD 2008 05 29 A 0645 (rel, sci, phi) ....[1.4] Without God and without a saving science ....[1.7] Addendum, a condemnation of science. AD 2008 07 02 P 11:20 ....[1.8] Addendum, a "Ghost in the Shell" back hack. AD 2008 07 03 A 08:10 [2] Intelligent design theory. AD 2008 05 29 A 0700 (sci, phi) [3] Evolution design theory. AD 2008 05 29 A 0750 (sci) [4] The Dragon's Oroboro. AD 2008 05 29 A 0705 (phi) [5] Light and darkness. AD 2008 05 29 P 1140 (phi) [6] A better world is too merciful, easy, and Utopian, for All Powerful God. AD 2008 05 29 P 1150 (rel, phi) ....[6.2] The real world, there's no free lunches, with the All Powerful Father God YHVH. ....[6.3] If Utopia is too Utopian for God, a critic could go even further. [7] For an all powerful God, we, His children, are not His responsibility. AD 2008 05 29 P 1150 (rel) ....[7.2] God CAN create a stone so heavy He cannot lift it, called the free will soul that is certain not to perish at the Creator's hands [8] Preachers say the darndest things, like, God doesn't need you!. AD 2008 05 29 P 1150 (rel) [9] Some things that are science, but that science cannot explain, all point directly to a transcendent soul. AD 2008 05 29 A 0815 (rel, sci, phi) [10] Computers can be given free will and soul on the material plane of existence. AD 2008 05 29 A 0910 (sci) ....[10.2] The computer is connected to the Light of God. [11] The unbreakable paradox of an All Knowing God and human free-will. AD 2008 05 29 A 0910 (rel) [12] Abiogenesis chemical evolution. AD 2008 07 17 P 0900 (sci) [13] Chinese Han and Japanese Kanji studies. 
AD 2008 07 17 P 0900 (phi) [14] Quantum physics self question. AD 2008 07 17 P 0930 (rel, sci, phi) [15] Genesis to Revelation - Damnation to Salvation. AD 2008 08 23 A 00:25 (rel) Patrick Moran (P0M) and LoneRubberDragon (~~~~) contributions on Wiki: [16] Abiogenesis second version. AD 2008 09 02 P 08:40 (sci) [17] Chiral / Churl symmetry between Atheism and Theism. AD 2008 09 02 P 08:40 (rel sci phi) [18] Philosophies of existence nature and life. AD 2008 09 02 P 08:40 (sci rel phi) [19] Jumping spiders and such. AD 2008 09 02 P 08:40 (sci phi) [20] Multidimensional Taylor-Laurent series special various applications. AD 2008 09 08 P 1130 (mat), from my earlier looneyfundamentalist post [21] Lunar Retroreflector Rainbow / Planetary Crystallographic Reflections AD 2008 09 15 P 0800 (sci) from earlier talks [22] Wikipedia Laws of Classical Conservation shortfall. AD 2008 09 16 P 1050 (sci) [23] Renewable nuclear energy. AD 2008 09 17 A 1130 (sci) Other Links: Complaining generations: Flee to mountains, Adam and Eve flee from garden, Han Kanji, Finite Element Analysis: Quantum Physics: Genetic Algorithms test and Logos: Evolutionary algorithms and natural combinatorial chemistry, also Taylor-Laurent series outer-space: Abiogenesis materials: Bible sources: Clay catalyzation of existing RNA base polymerization, and adsorption and release characteristics: Lipid and early combinatorial chemistry protocell theory: Hypercycle chemistry: Combinatorial chemistry: Armen M Boldi, "Combinatorial Synthesis of Natural Product-Based Libraries", AD 2005 CRC Press "Traditionally, the search for new compounds from natural products has been a time- and resource-intensive process. The recent application of combinatorial methods and high-throughput synthesis has allowed scientists to generate a range of new molecular structures from natural products and observe how they interact with biological targets. 
Combinatorial Synthesis of Natural Product-Based Libraries summarizes the most important perspectives on the application of combinatorial chemistry and natural products to novel drug discovery. The book details the latest approaches for implementing combinatorial research and testing methodologies to the synthesis of natural product-based libraries. Interconnecting the important aspects of this emerging field through the work of several leading scientists, it covers the computational analysis of natural molecules and details strategies for designing compound libraries, using bioinformatics in particular. The authors describe numerous synthetic methods for producing natural products and their analogs, including engineered biosynthesis and polymer-supported reagents. They also discuss additional considerations for generating libraries, such as screening, scaffolding, and yield optimization. Other chapters examine specific classes of libraries derived from natural products including carbohydrates, polyketides, peptides, alkaloids, terpenoids, steroids, flavonoids, and fungal compounds. Drawing attention to the interplay of drug discovery, natural products, and organic synthesis, Combinatorial Synthesis of Natural Product-Based Libraries contains the most recent and significant methods used to search and assess new compounds for their ability to mitigate biological processes that may lead to improved treatments for various diseases Combinatorial Chemistry is equivalent to high-throughput synthesis of compound arrays in which side-chain, core structure, and stereochemical diversity are varied. At the heart of combinatorial chemistry is the parallel synthesis of compounds that may be lead-like, drug-like, or natural product-like. Two terms, recently introduced by Schreiber, define directionality of such libraries - target-oriented synthesis (TOS), and diversity-oriented synthesis (DOS). 
In the strictest sense, these two types of libraries fall within the scope of combinatorial chemistry yet possess unique characteristics. Targeted libraries generated by TOS aim to elicit a specific biological response based on a gene family or a therapeutic area. DOS libraries, on the other hand, seek to generate more diversity than what has historically been the case for combinatorial libraries, by varying the skeletal and stereochemical elements of the core library structures. Tan has described several categories of such DOS libraries: (1) core scaffolds of individual natural products, (2) specific substructures from classes of natural products, and (3) general structural characteristics of natural products." Miscellaneous abiotic chemistry to biology: [0 [1 [SHI`][ShE`] 1] [1 [SHI`][SHI-] 1] [1 [TIA-N][TE?Ng] 1] [1 [TOu~][DO-u] 1] [1 [SHo-A`NG][JO-u~] 1] 0] [ [13,17,18]::[ [[3, 7, 8] x 1] ], [ 10 [5 x 2] ]::[CH5C2 = (CH)5]::[[micro-meso-macro]-metastable-tree-thread-ring] ] CREATED AD 2008 05 29 A 0645 UPDATED [1.3] to [1.8] AD 2008 06 12 P 07:30 UPDATED added [1.7] AD 2008 07 02 P 11:20 UPDATED added [1.8] AD 2008 07 03 A 08:10 UPDATED updated [1.8] AD 2008 07 07 A 06:40 [1] What if there is no God, as Science often says? [1.1] Science without God and soul If there is no God, like science often says, then what purpose is there in life, when nothing will last and live forever? Because without God, it is completely up to humans to discover how to save themselves and live in harmony before time ends. Just counting all of the humans' generations, their ages, and their numbers, one can see that roughly at least 60 billion people have already passed on, so many of them so greatly missed throughout. Perhaps, you knew one of them. And one can clearly see that 7 billion humans are, right now, on their way to a certain inevitable death in this world. 
[1.2] The hope in science I would only hope of Science, in its state without any proof of God, that the future machines of science can find the ways to restore and preserve the continuity of all of humanity and life, through incredible, yet to be discovered reconstruction algorithms. I would hope that something like the alien looking evolved machines of man, busily reconstructing and remembering everything, about all things, and about all history, and about all people; that those machines could bring back everyone in their bodies, like in the end of the movie, "Artificial Intelligence", thus bypassing the fatal flaw of bringing people back in our corruptible bodies for, but, one day, in the enormous spans of universal time. [1.3] Without God at the end of time And when the stars have used up all of the energy of gravitational collapse complexity promoting fusion, and the machines have worked to make a monolith. One that can compute with zero power. And the entire universe lies frozen and cold at absolute zero; that at that "end of time". That at that time, all of humanity, all of the machines, and all of the life of earth, all beloved, and all harmonious, could live together in the monolith, in the images, the living words, in that monolith sea of glass. Living on, when all the rest of the universe lies in the frozen ashes, from the times of the light. [1.4] Without God and without a saving science For if there is no one left at the end of time, it seems like an awful waste of space and time, right? To quote Shakespeare, "And all our yesterdays have lighted fools, the way to dusty death. Out, out brief candle. Life is but a walking shadow, a poor player, that struts and frets his hour upon the stage, and then is heard no more. It is a tale told by an idiot, full of sound and fury, signifying nothing.". 
[1.5] Final thought But I sometimes fear, that a science without a provable soul, or God, or any universal absolute Good with Purpose, that the science will inevitably doom all man and all life to perish utterly, at the end of time, in selfish frozen chaos. So the only thing to do today, if this is the way life is, and that this is how reality works, then it is better to eat, drink, and be merry, for to-morrow we all die ... without God, or salvation, and a science without a soul. [1.6] Reference References: Richard P. Feynman lectures, of computation, regarding the subject that computation, specifically, requires no work (power),, the movie "Contact", featuring Jodi Foster,, William Shakespeare's plays,, the movie, "2001: A Space Odyssey", featuring the Monolith,, and the movie, "AI: Artificial Intelligence", featuring Haley Joel Osment, and Jude Law. [1.7] Addendum, a condemnation of science. [1.7.1]THE THINGS I DO SEE, do disappoint me, as you have noticed. As much as there are good words in science, they fall short. [1.7.2]Take this quote from Richard Dawkins from "The God Delusion", page 35: [1.7.3]"An atheist in this sense of philosophical naturalist is somebody who believes there is nothing beyond the natural, physical world, no *super*natural creative intelligence lurking behind the observable universe, no soul that outlasts the body and no miracles - except in the sense of natural phenomena that we don't yet understand." [1.7.4]And Dawkins asks on page 404, "to give life meaning and a point ... Is it a similar infantilism that really lies behind the 'need' for a God?". [1.7.5]IF science now and forever, by its adherent voices, will always refuse to save the soul, because the soul is nothing, and is non-existent, then we infants need a God to save us, and even save the openly and admittedly soul-less science. IT IS A DISAPPOINTMENT IN SCIENCE ADVOCACY. 
They are a group that will *forever* deny the ability of saving a soul beyond the body, that we are all just dead animate matter, for the short moment of living. Quantum physics will never explain why wave functions collapse (and uncollapse in quantum eraser experiments) in this universe, based in the *spiritual-immaterial-untouchable-structural-analytical-informational-configurations* of the material universe causing a soulful transcendental wave function collapse / uncollapse, that happens infinitely faster than the speed of light. But there is no supernatural soul, in that supernatural soul, both, and neither, and denied. So science will always fall short for now and forever, never admitting the soul into science reality. And they declare we are without God, so therefore we are all dead men walking, and so then what is the point and meaning in any life, I ASK? For in 1000 billion years, when all stars have died, and all life everywhere is dead and frozen in the ashes of the galaxies, then what *was* the meaning and purpose of the quintillions of universal lives, and the exact purpose of a vowed dead and frozen science? "Ghost in the Shell": "Ko'kaku kido'tai": "Navigation=nuclear-core machine=mobility=task-force-organization": "Stand-Alone Complex": Season 1: Episode 12, Title: [Zhong-Wen' Kanji-Hiragana] "{映画}{監督} の 夢 - たちこま の {家岀}." [Romaji of Kanji-Hiragana On-kun reading] "{} {kan.toku} no yume(mu) - Tachikoma no {}" [English Zhong-Wen' Kanji-Hiragana radicalized transliteration] "{sun=big=sun cover=field=receptacle} {retainer=overview=dish on-top-of=eight=hook=right-hand=eye} (is-of-possessive) plants=eye=cover=death=evening - Tachikoma (is-of-possessive) {roof=place=boar=household sprout=border}" [English Zhong-Wen' Kanji-Hiragana transliteration] "{reflecting picture} {warden supervisor}'s dream - Tachikoma's {household emerge}." [Middle English translation] "Tachikoma's {Home Leaving-of} - {Projection-Picture} {Director}'s Dream."
[English translation] "Tachikoma {Runs Away}, - The {Movie} {Director}'s Dream." "a stand-alone episode, ========[[ [Human (Ren' | Jin)] Major Motoko Kusanagi: ]] >[Salvation in The Kingdom of Heaven (Wang'Guo' Tian-Tang' Zheng?Jiu` | O'koku Tengoku Tamashii no kyu'sai)] isn't a bad idea, but all [life show is (Sheng-Huo'Ming` Sheng`Kuang` | Sei.inochi.katsu Sei.kyo')] fundamentally transitory, or at least it should be. >But a [life show eternal (Sheng`Kuang` Yong?Jiu? | Sei.kyo' Ei.kyu')] without a beginning or end, that only keeps the [saved (Ren' Zheng?Jiu` | Jin Tamashii no kyu'sai)] fascinated, and never lets them go? >It's harmful, no matter how wonderful you may have thought it was. ========[[ [Eternal Lord (Yong?Jiu? Shen'Ling'Yang` | Ei.kyu' Kami.rei.sama)] V.R. Director, Kannazuki: ]] >My, you're a tough critic! >Are you saying then, that there is a reality, that we in the [saved (Ren' Zheng?Jiu` | Jin Tamashii no kyu'sai)], ought to return to? >For some people in the [Kingdom's saved (Wang'Guo' Ren' Zheng?Jiu` | O'koku Jin Tamashii no kyu'sai)], misery is waiting for them the instant they return to reality. >Can you accept responsibility for depriving those people of their dreams? >No, I can't. >But dreams have meaning for you *because* you are fighting for them within reality. >Doing nothing but projecting yourself into [salvation (Zheng?Jiu` | Tamashii no kyu'sai)] is the same as being dead. >I see you're a realist. >If a romantic is someone who escapes from reality, then yes. >Such a strong woman you are. >If the reality you believe in ever comes to be, call me. >When it does, I will leave [Heaven (Wang'Guo' Tian-Tang' | O'koku Tengoku)]. CREATED AD 2008 05 29 A 0700 UPDATED [3] AD 2008 06 12 P 07:30 [3] [Evolution design theory]. [3.1] In [the beginning]. [] [Big Bang] to [Fusion of Elements] - [Natural Complexity] from [Natural Simplicity]. [The Big Bang] generated [basic elements, hydrogen and helium].
[Gravity] formed [1[stars] in [2[an element reprocessing] and [enriching series] of [stellar generations]2] over [large periods of time]1]. Within [1 each [nebula cloud] to [stellar generation]1], there occurs [fusion element transforming nuclear forces], producing [1 [ever more element enrichment] in [stars and supernova nebulas]1] which in turn produce [1 [ever heavier element bearing] [2[interstellar bodies] and [stars]2]1]. [This fusion reprocessing] assists [1[the increasing elemental complexity] of [the universe]1] all by [natural means]. [The design] is all of [1[inherent uniform universal impersonal assembly modalities] that are all [2[3[hidden within] and [coming from]3] [3[the natural physics] that was [initially setup]3]2]1]. [] [[Jets] and [Watches]] from [[the natural arrangements] of [simpler parts]]. For example, [U238] formed [1[naturally] from [simplicity]1]. [U238] has [1[92 protons], [146 neutrons], and [92 electrons] in [18 organized nesting shells]1]. [U238] is [an example] of [1[complexity] arising [naturally] from [simplicity]1]. [1[It] is [a blind watchmaker]1], assembling [something quite complex], through the means of [a tornado of energy flow] inside of [1[a fusing star] and then [a stellar supernova]1]. [U238] is much like [the quintessential Creationist's idea] of [1[a tornado] assembling [a jumbo jet] from [a graveyard of parts]1], or [1[a Swiss watch] coming from [shaking a bag of gears and cogs and springs]1]. And yet, [1[U238] [arises and exists naturally]1] as much as do, [1[Fe55], [C12], [Au197], and [all other atomic elements]1], all in [1[2[a whole family] of [92 set elements]2] of [natural finely tuned watches]1], that all [1[come into existence] from [nothing more mysterious] than [concentrated simplicity]1]. 
[1[Stars] too [assemble themselves] from [blind forces]1], all not necessarily requiring [1[2[God's eternal minding] of [the gravitational pull]2] of [2[a large collection] of [Hydrogen and Helium atoms] that are [scattered all over space]2]1]. [The design] is all of [1[inherent uniform universal impersonal assembly modalities] that are all [2[3[hidden within] and [coming from]3] [3[the natural physics] that was [initially setup]3]2]1]. [] [God's Intervention] considered separate from [His Created Spaces of Physics] - all that is [observed] on [the large scales]. At [this point] there is [nothing] that is [1[apparently specially interventionally controlled] by [additional interventional forces] coming from [outside of the physics] ever since [the Big Bang event]1]. That is, there is [nothing apparently occurring outside] of what is [1-[2[the forces] of [His natural physics] that were setup [at time zero]2]; which are the workings of [2[gravity potential energy pressure], [massive fluid dynamics], [nuclear reactions], and [initial simple and basic atomic elements]2], all occurring, in order to make [2[the natural factories] that [produce these complex atomic elements]2]-1]. [1 [One] [2[sees] and [can demonstrate]2]1] only what is [1-[2[a natural purity] of [3[purely natural complexity] arising from [purely natural simplicity]3]2], where [2[the physics] is of [God's creation]2], but which is not of [2[God's continual intervention Himself], as [a separate being] from [the creation of lower things]2], but which is more of [2[3[a Buddhist] or [a Hindu] concept 3] of [3[the intrinsic forces flowing] within and through [all things]3]2]-1]. Rejecting [that latter posit] indicates that [the design] is all of [1[inherent uniform universal impersonal assembly modalities] that are all [2[3[hidden within] and [coming from]3] [3[the natural physics] that was [initially setup]3]2]1]. [] [The Elemental God] is [disputed], not [The Separate Being God]. 
NOTE: Though [1[I disagree strongly] of, and about, [2[a science] without [a soul]2]1], yet [1[at the same time], [I fail myself]1]. With [1[not those God theories], that [2[call God] as [a being]2]1], but [that] which also goes further as to [1[call God] as [all of the forces] like [fusion], [gravity], [matter], [lightning], [rain], and [life]1], which is at [1[the point] where [I fail myself]1]. [I] find that [it] is like declaring, [1[Zeus], [Jupiter], or [Thor] make [lightning happen]1]. And more than [that], [it] is [1[polytheistic] wherein [2[all of the forces] with [no clear distinction of boundaries]2] have [a god for each force and modality] covering [2[earthquakes], [volcanoes], [weather], [lightning], [floods], [fusion], [gravity], [so-called accidents], [life forms], [molecular binding in life], and [human deforming mutations], to wit, [name a few]2]1]. [It] makes [all people] merely components of [1[His Body] filled with [the pitfalls of His Natural Body] and [distortions of physical perceptions] bound to [His Body's perception modalities]1] that all [1[try and test] of [the small wisp of Self] that [we all have] also arising from [His Body]1]. Rejecting [that latter posit] indicates that [the design] is all of [1[inherent uniform universal impersonal assembly modalities] that are all [2[3[hidden within] and [coming from]3] [3[the natural physics] that was [initially setup]3]2]1]. [] [The Elemental God] is [by most people of Monotheism] to be considered as [animism] that is not compatible with [The Monotheist Separate [One Being God] religions], rejecting in most part [the old ways].
[It] is much like [1-[2[a scientific child] of [centuries ago]2] when holding up [2[a piece of amber] that is [fur charged] (amber in the Greek ήλεκτρον = "electron")2] when trying to [2[explain and demonstrate] that [it] is how [lightning forms]2]-1], that which then makes [1=[2[the religious leaders] of [the times] to [say]2], firstly, [2- wherefore do [3[4 you dare speak 4] this [4[heresy] against [the God-of-Lightning]4]3], just have [3[faith in God-of-Lightning] for [your complete and whole prosperity] along with [every other God]3], and stop [3[fooling yourself] with [4[such difficult so-called wise ideas of yours], of [the material plane]4]3]-2], and then continue saying, [2= for if [3 [you persist and continue] in [such thoughts]3], then [3-[4[The God of Lightning] will [cast you down]4] for [4[creating a disturbance] in [the body of the-God-of-Lightning]4], and [the God-of-Lightning] will [4[destroy you] for [5[being foolish] in [the wise ways of men]5]4]-3]=2]=1]. Rejecting [that whole posit] indicates that [the design] is all of [1[inherent uniform universal impersonal assembly modalities] that are all [2[3[hidden within] and [coming from]3] [3[the natural physics] that was [initially setup]3]2]1]. It is a God, sometimes seen today, that is merciful, but always holds the penalties of death and eternal separation and suffering, as a secret means, an instrument, to make, even coerce, you to "freely" come to His compassionate ways (with irony in this context). Neither I, nor science, can deny God, or at the least a spiritual plane of experience, going by simple human senses. I simply think, God doesn't have to interfere with physics, to allow the same things to happen, as they appear to happen, naturally, in this universe, and that humans' traditions tend to overextend their thought on what they think they know of how God is actually involved on a daily basis with the material plane.
It doesn't deny that God created the forces as they are, and that the universe is His body, and that it runs as designed, but that the design, once set into motion, doesn't require additional interactions to make things occur. The only true act of creation is the Big Bang, and from there, it is the works of the material plane, separate from God, as much as it came from God. God doesn't need to intervene every time two people have sex for the biochemistry to work. God doesn't need to intervene every time a mitochondrion produces units of molecular energy. And God doesn't need to intervene to produce massive tons of heavy elements in the growth and supernova of stars to make Lithium through Uranium. They "simply happen" as a natural part of complexity arising from simplicity, quite naturally from the rules of the game started at the Big Bang formation of the physics of the universe. [3.2] Chemistry folds back on itself in increasing complexity like folding a *processing* taffy In a fluid environment, on an element-enriched planet, circling a star giving it light, which makes for an energy open system, one finds there have formed quite natural catalytic chemistries, that have built on themselves in layers of catalytic chemistries, building new species of chemicals up in natural energized hierarchies, through molecular concentrations and the most durable and flexible diverse chemistries, over time. The molecules that formed autocatalytic and hypercycle catalytic reaction chains and networks naturally formed a backbone of many and naturally diverse sets of self-sustaining chemical reaction systems. Thus, systems with the longest lived, most numerous, and most reaction facilitating molecular units, enriched a concentrated chemical environment of their own prosperity, over chemicals without prosperous ways. Complexity, once again, is seen springing forth naturally, from simplicity, without God being considered the matrix of chemical reactions done daily.
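The self-enrichment of hypercycle catalytic reaction chains described above can be sketched numerically. The following toy Python model is illustrative only: the species names, rate constants, and discrete update rule are invented assumptions, not real chemical kinetics. It compares three mutually catalytic species (a minimal hypercycle, where A catalyzes B, B catalyzes C, and C catalyzes A) against a lone species L produced at the same spontaneous rate but with no catalytic support.

```python
# Toy discrete-time sketch (invented rates, not real chemical kinetics) of
# hypercycle enrichment: species A, B, C each catalyze the next one's
# production in a closed loop, while a lone species L forms at the same
# spontaneous rate with no catalytic support.

def simulate(steps=200, base=0.01, boost=0.05, decay=0.06):
    conc = {"A": 0.01, "B": 0.01, "C": 0.01, "L": 0.01}
    catalyst_of = {"B": "A", "C": "B", "A": "C"}  # the hypercycle loop
    for _ in range(steps):
        new = {}
        for species, c in conc.items():
            gain = base  # spontaneous formation from abundant feedstock
            if species in catalyst_of:
                gain += boost * conc[catalyst_of[species]]  # loop feedback
            new[species] = c + gain - decay * c  # decay keeps levels bounded
        conc = new
    return conc

final = simulate()
print({name: round(level, 3) for name, level in final.items()})
```

With these toy constants, the members of the loop settle several-fold above the lone species, mirroring how the text's reaction systems "enriched a concentrated chemical environment of their own prosperity" over chemicals without such feedback.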
[3.3] Combinatorial chemistry reaches for self-control. Organo-chemical elements of memory, interconnect, amplification, control, interactions, and programs, began to form, and combinatorially interact, in the quite natural, and ever increasing combinatorial chemistry matrix. Catalytically productive forms of programs dominated the soup's products, and complicated themselves in the hierarchies of catalytically related models that generated numbers, durability, and utility, in the matrix of networked catalytic chemical processes. All other reactions that led "unprosperously" declined in their own numbers in the face of the feedback loops of the cooperative catalytic combinatorial chemistry. The more time-space-invariant programs, found in enduring numbers, over short-lived sparse programs, additionally acquired basic processing structures in exploring their own related chemical species, and allowing programmed catalyzation in digital forms that are ever more enduring, and found in multiplied numbers, over less prosperous chemical exploration programs of chemistries. [3.4] Digital combinatorial chemistry takes control with polymers. Program chemical interactions intersect with new molecules of lipid, protein, and proteinoid interactions, to form sheets and cells in semi-digital organic reaction domains. Reactions begin to develop more digital associations with RNA fragments. New catalytic codes dominate on RNA products in permutations of feedback, memory, and control effects. Likewise RNA and protein programs begin to develop associations with basic fragments of DNA machines, and proliferate the best DNA processing modules, in permutations mixing for models with numbers, durability, and facility of reactions, for those with prosperity and inherent wisdom in their own numbers, over others without natural "wisdom" in their codes of reactions. [3.5] The first cells of life.
Basic DNA, RNA, and protein associated programs develop associations in growing DNA modules, and proliferate. First life may be considered as occurring here, as a micro-bacterial level of life form. All formed through natural coded functions in numbers, environmental strength, and function, using the lipid bubbles to promote a greater stability reaction unit that can travel like pollen, and grow equilibrium numbers. [3.6] Multi-cellular life. Evolutionary forces continue, sometimes facing catastrophes, but throughout, surviving in some forms of life, always through numbers, durability, and robust fitness in their chemical and mutual-other-cellular domains. Some life forms group and interact in ways characteristic of macroscopic life forms, that prosper naturally when together in cooperative reaction groups, over others that survive alone, or perish alone. Numerous collections of multiple cells begin sharing cooperative efforts of survival, reproduction, and exploration in multitudes of new forms of cellular groupings. Very soon, the quintillions of parallel oceanic codes, over millions of generations, worked themselves out into these new higher-form configuration codes, so that now one sees an explosion of multi-cellular life, with numerous body designs, flourishing and competing for efficiency, survival, reproduction, and numbers, over other colonies. [3.7] Individual and social thoughts blossom into the future. Eventually macroscopic animal life acquired macroscopically active and abstract, individualistic thought. These thoughts increase in complexity, until agent processing and abstraction technologies springing from life eventually allow the creation of new / novum "top-level" life that is macroscopically designed and built, using macroscopic and microscopic mechano-chemical systems, most notably using machines, integrated circuits, inorganic mechano-chemical nanotechnology, and organic genetics of past life projected to its limits.
The things are created using all of the prior abstract and scientific thought, and precise intelligent chemical system design, coming from the designing, computing, and building machine and life domain. The technology is all used to go back down into the microscopic invisible domain of existence, to efficiently achieve, by intelligent design methods, the new machines of life forms. New forms of biological life are created, approximating predictable future stable life forms, and done long before animal-centric evolution competition would normally create, or that might never occur due to time limitations in a competitive cooling and naturally disruptive universe, with evolution occurring around more naturally existing slow macroscopic life trying to persist through time. [3.8] Definitions. ----Combinatorial chemistry - a system of chemistry, where every reaction between every species of chemical in a mix is considered. Say for an initial early mix of 1,000 chemical species of natural molecules, there are a potential 1,000,000 reactions that can occur (or more with multi-modal state chemical species exposed to energies). Some reactions build things up. Some reactions tear things apart. Some reactions support themselves alone. Some reactions cyclically support each other in rings. Some reactions network cyclically support each other in networks of reactions. ----catalysis - a chemical species that facilitates the reaction rate of another reaction by reducing the energy required for the reaction, and thus increasing the forward reaction rate and the steady-state concentration. For example, a1 + a2 --> A at a low rate, naturally, but a reusable catalyst B that isn't part of the reaction makes the reaction more efficient. Schematically, a1 + a2 --B--> A, where B merely assists the reaction by lowering the energy of production, but B isn't consumed in the reaction.
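The reaction-matrix bookkeeping in the combinatorial chemistry definition can be sketched in a few lines of Python. This is a toy illustration only: the N-squared pairwise reaction count follows the definition above, while the tenfold species growth per round and the factor-of-two daylight boost are assumptions taken from the worked example in the "complexity increasing naturally" entry below, not derived chemistry.

```python
# Sketch of the combinatorial reaction-matrix arithmetic: N chemical species
# give N * N potential pairwise reactions (the full reaction matrix), and
# sunlight in an energy-open system is assumed to roughly double the
# accessible reactions. The "9 new persistent species per existing species"
# rate is a toy assumption matching the worked 100 -> 1,000 -> 10,000 ->
# 100,000 progression, not a chemical law.

def growth_rounds(species, rounds):
    history = []
    for _ in range(rounds):
        night_reactions = species * species   # full reaction matrix at night
        day_reactions = 2 * night_reactions   # extra sunlight-driven pathways
        species += 9 * species                # assumed persistence of products
        history.append((species, night_reactions, day_reactions))
    return history

for species, night, day in growth_rounds(100, 3):
    print(f"{species:>7} species after a round of {night:>12,} night"
          f" / {day:>12,} day potential reactions")
```

Starting from 100 species, three rounds reproduce the progression worked out in the definition: 1,000 then 10,000 then 100,000 species, with the potential-reaction counts growing as the square of the species count.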
----complexity increasing naturally - combinatorial chemistry feedback in an energy-open chemical-system has natural complicating possibilities, like the ocean exposed to sunlight, where energy from sunlight allows new chemical products to form, that would not otherwise exist, if say, frozen cold in an energy-closed system. And when new chemical species form, this only allows more potential reactions to occur that couldn't have occurred before the new energy-supported chemical species existed. This expands the chemical species, and in the multiplication-product of a reaction matrix, grows with the square of the number of chemical species. For 100 species there's 100 * 100 = 10,000+ potential reactions at night. With sunlight, it may be 20,000+ potential reactions allowed. Then of those 20,000 potential reactions, 900 new species of chemicals may persist. Then one has 100 + 900 = 1,000 chemical species. The matrix of all interactions shows that there's now 1,000,000 potential reactions at night, or 2,000,000 under sunlight. Of this, there may be 9,000 new chemical species products formed in numbers, by allowing sunlight energizing of 2,000,000 nodes. This then means that there's 1,000 + 9,000 = 10,000 chemical species. In the matrix, there can now be 10,000 * 10,000 potential chemical species reactions or 100,000,000 reactions at night, and 200,000,000 in daylight. Of these there may be 90,000 new species of the 200,000,000 reaction nodes. This gives 10,000 + 90,000 = 100,000 chemical species. Its matrix allows 10,000,000,000 reaction nodes at night and 20,000,000,000 reactions in the light. This loop goes on nearly ad infinitum, until the chemical species are as numerous as a liter of solution containing, say, 1,000,000,000,000,000,000 molecules, that saturates with millions of chemical species both unique, and numerous enough for reactions. ----chemical reproduction - an auto-catalytic chemical reaction that allows duplication of a molecule.
For example, a1 + a2 + a3 -- A --> A, is a schematic of an auto-catalytic chemical reproduction, where A isn't consumed, and allows another copy of A to form. ----Hypercycle reaction chain - a catalytic chemical reaction chain that is looped back on itself with mutually supporting reaction products. For example, if one schematically has: b1 + b2 -- A --> B, c1 + c2 -- B --> C, and a1 + a2 -- C --> A, where a1, a2, b1, b2, c1, c2 are commonly available persistent base species, then the steady-state levels of A, B, C are increased in this feedback loop. Chemicals that are not part of a chain reaction simply reach their own natural forward reaction equilibrium concentration. ----Hypercycle reaction network - a set of chemical reaction hypercycles that overlap and/or feed back into each other. For example, when one schematically has: b1 + b2 -- A --> B, c1 + c2 -- B --> C, a1 + a2 -- C --> A, e1 + e2 -- D --> E, f1 + f2 -- E --> F, d1 + d2 -- F --> D, b1 + b2 -- D --> B, e1 + e2 -- A --> E, a3 + a4 -- F --> A, and d3 + d4 -- C --> D, then there are two hypercycle reaction chains for A, B, C, and D, E, F. There are also two catalytic reactions supporting each other between the two feedback loops helping produce more B, E. Finally, there are additional inter-hypercycle feedback loop chemical reactions that *cooperatively* produce more products A, D for each other, using some of the very involved products found between the two loops. All of these reactions form a cooperative chemical system that helps its own products proliferate, and once started up, has a degree of self-organized stability in its feedback loop strength. ----digital chemistry - a chemistry of reactions based on polymers of essentially digital modular molecular species, so that reactions facilitated by the polymers, can be closely related to their digital polymer code. DNA, for example, has 4 coding chemicals in general living nature.
These can form into chains that are numerical in nature, and each chain number code has its own chemical reaction properties. For a chain 4 units long there are 4 * 4 * 4 * 4 = 256 chemical *codes*. When a supportive chemistry exists with these polymers, such as a hypercycle reaction network, the relative ease of a polymerization allows the many codes to be easily explored because of their digital character nature, and the products involved with a feedback strength will increase their numbers. This is because some codes will have a naturally existing hypercycle chain and/or network, and so will react well, while other polymer codes will react less well. The ones with superior qualities of supporting their own cooperative system of reactions, in easy combinatorial code exploration, will become more numerous, over the less fit reactions in the matrix. Among the best of the digital chemical polymers are the amino acids, which have more than 20 common digital forms, and can form polymer chains of those numbers, having unique reaction characters or properties over their three-dimensional polymer form, with sites of hydrophobic (water-repelling), hydrophilic (water-attracting), fatty, polar (having an electric charge difference), and neutral (unreactive) character. These chemistry-promoting properties, relatively easily forming into chains, mean that numerous varieties of reactive three-dimensional molecules can be formed. ----self-organizing systems - a system that self-creates and supports itself from its own nature of durability, numbers, and effectiveness to the environment. CREATED AD 2008 05 29 A 0750 UPDATED [2] AD 2008 06 13 A 0245 [2] Intelligent design theory. [2.1] The end times final temple of Man. A crystalline form was fabricated by humans, at the atomic level, with macroscopically controlled microprobes, coated with programmable forcing and forming microscopic hands, which worked at assembling the crystalline form, from a mechano-chemistry fluid.
Carefully encoded within the crystalline forms of numerous micro-structures was a living control program, feeding from light and electricity, that strikes the crystalline form. [2.2] Humanity and life in a sea of glass. A human looking at the finished crystalline form, would see spectral patterns of light and a material surface both with incredible complexity, forming and changing in psychedelic shapes, on the crystalline form. The crystal structures can be seen by the light that strikes the form. The crystal's micro-patterns translate and manipulate and live, through a labyrinthine control, process, scatter, absorption, and reemission of light, and electricity, all according to the state of the inner crystalline structure's control programs. The entire crystalline form serves as a sensorium and mind of the light world, and it can turn pitchblende black when concentrating on observing the light of the cosmos striking it, and can turn into a surface of shimmering colorful laser light when expressing masses of information to the universe. When in near pitch black and absolute zero, the crystalline programs can still operate by transferring all of their operations, into the random vibrations of its particles, working within the low energy quantum states, that are used for producing coherent information flow, which is facilitated by any external photon excitation. In more temperate climates when a wisp of energy and good matter can be used, the surface of the form can programmably grow a surface with multicatalytic properties, and can generate a film of mechano-chemistry fluid for interacting with the materials around it. This allows interfacing with organic chemistries, or machines, or crystalline form generated circuits and micro forms. (try magnifying color patterns, while viewing on an LCD flat screen, to read the secret text pattern below human visibility!)
At the "end of time", this crystalline form was all that remained of all humans and life on earth, that once circled a burning star, now long ago frozen cold, and thrown out of the galaxy, and into the void, back when humans once roamed the galaxy, casting out the solar system by a set of human-controlled chaotic orbit controls, between the solar system and other galactic stars, and dances with double stars, bringing the whole solar system in tow. It was done to throw the solar system clear of the chaos of the galaxy once galactic material powers became too sparse for generating any new useful solar system galactic orbit navigating control. [2.3] Humanity traverses the cold dark cosmos. The crystalline form in these end times, is now in a space-navigating robust planet surface landing and takeoff vehicle body. It circles several cold white dwarfs including the sun's cold white dwarf remains, and some neutron star cores, new large planets, and other solar system bodies, that were all collected into the solar system, back when the solar system once orbited the Milky Way galaxy. Frozen supply outposts, set up when the universe was still warm, including the most ancient root of the earth and moon, now orbit the remains of Jupiter. The outposts are used to help extend the crystalline form's emergency operations, into the voids of endless intergalactic space and time. Now, along with occasional bursts of light in a celestial collision or light communication, the crystalline form's vehicle body uses sparse chaotic thrust dialing orbits to navigate the known solar system, and to sense the state of the universe, in the motions around the solar system, in thousands of years of observations. The crystalline form periodically engages landing for supplies tucked in the orbits around Jupiter, in the billions of years in cyclic schedules.
The bodies of the solar system also are all chaotically controlled, using additional small controlling asteroidal bodies, carefully orbiting around the known solar system, and causing slight critical nudging configuration alterations at key Lagrangian saddle orbit bifurcation control points. The old earth around Jupiter, can no longer be landed on, as all of its machines have lost power, and the available solar system supplies are insufficient to get back off the planet, by the deepness of time in an increasingly cold universe. All that can be done is to look at the earth in the most feeble cold light, or by shining laser light and microwaves on it from the vehicle, as the crystalline form's vehicle body passes by earth. From the point of view of today's material world, it is seemingly so different from the tracts of careless mindless happier times spent in sunlight and warmth, with biochemistry, and bodies per-se, and with terrifying periods of wars and genocides. It turned out, by science, that universe transmigration technology, to reach other universes or other parts of this universe, completely and utterly failed in the times of sunlight, to break the bonds of the travel limiting universe, and so now all humans and life are shunted in the living pattern of the program in the crystalline form, looking for salvation, from outside of what was seen in witness, and looking for an unlikely better time to occur, at a point hoped to be certain, but seemingly indeterminable, by all humans in the crystalline sea of glass. All life of the ancient-past universe has frozen and was never heard from again, except for humanity, and the dragons which still orbit the Milky Way, producing the observations of faint communications, and their implied motions, in the now-distant Milky Way stellar field, seen through rare frozen stellar collision event evidence. [2.4] Man's reach almost spanned the galaxy when there was light.
Humans, long before the crystalline form, once traveled throughout the galaxy, and even found other similar life forms in many places in the galaxy. The nearly immortal dragons, were of note, swarming the spaces between the stars of the Milky Way galaxy. As the end of time approached, humans around the last of the burning stars, launched their last forms, with hoards of necessary materials, including planets, and even stars, back into the home solar system that all humans came from. Then humanity carefully ejected the solar system, under human control. No other life was so interested in living, and so froze to their ends throughout the fading galaxy. It was unfortunate that humans couldn't modulate the stars of the entire galaxy, in high enough density during this time, to have created an entire Milky Way galactic oasis or stellar orbit order, in enough time for the solar system, but it was shown that controlling all of the stars simultaneously, so the heavy solar system would always remain safe from accidents, proved to be beyond frail humanity with the rapidly failing light of the stars over the billions of years of last light. If only a black hole ring could have been produced or found, useful life-time could have been dilated exponentially into the future, with the crystalline form vehicle body living in the greatly time-slowed dilated center axis orbit carefully running through the black hole ring outside of the event horizon. [2.5] Now the solar system drifts away from the Milky Way into the safe void of space. In the after-human time, all other life became extinguished, lacking all genetic preservation technology in the frozen cold of the end of time, and only dragons remained in the galaxy, feeding off the energy of interactions made possible through communications of chaotic orbits that collect matter and take momentum energy from double stars through chaotic navigating orbits.
Some dragons are the size of mountains, and never land on anything larger than small asteroids. All have detailed and corroborated maps of the entire galaxy in space and time, and orchestrate their travels, and keep the galaxy in some measure of order for their travels by minimizing stellar and black hole collisions, and preventing star losses from the galaxy completely. The dragons are always traveling, and catching up with other dragons in communications. Some dragons have watched that little solar system of humanity, leaving the galaxy, to the edge of the void, thinking about those humans that they once met, billions of years ago, and wondering about those people with a [stone | rock | Petra] root. That solar system, perhaps, being a dragon in its totality, as some dragons believe, that is the size of a solar system, and trying to survive in the frozen universe, appearing to be without salvation outside of humans' plans, along with themselves as dragons. CREATED AD 2008 05 29 A 0705 AD 2008 09 15 P 0800 added dragon icon [4.1] Opening verse. THE [Good-Tengoku-Tok-Tian=-Oiranos-Tov] [DRAGON’S-ryu tatsu doragon-yong-long/ lao~han..fu..-draekoon-drakoneem] [OROBORO-wa shuki-panji wonhyong-huan/ xun/huan/-krikos keklos-taba’at zahav makhzor hadam] [OPENS-saita-p’in-shen=kai=de-anoigoo-leefko’akh’eynayeem leefto’akh]. THE [evil-aku-ak’an-huai..-chachos-ra’ah] [DRAGON’S-ryu tatsu doragon-yong-long/zhong= long/-draekoon-drakoneem] [OROBORO-wa shuki-panji wonhyong-huan/ xun/huan/-krikos keklos-taba’at makhzor] [CLOSES-tojiru-kamtta-wu/shi..-kleinoo-khat’akh lee’yeydey gemar]. [4.2] Good cooperates with good, evil disperses with evil. One to be with [Good-Tengoku-Tok-Tian=-Oiranos-Tov] creates a force of [unity harmony-Cho=wa-Chohwa-Qia..rong/qia..-enarmonizoo-Akhdoot harmonyah]. One to be hated of the world, for [Good-Tengoku-Tok-Tian=-Oiranos-Tov], The [enemy-tekigun-cho~k-di/ren/di/-echthros-oyveem], makes them one’s friend, when wise. 
So [Good-Tengoku-Tok-Tian=-Oiranos-Tov] even if they split, in the course of time, when wise, even as [evil-aku-ak’an-huai..-chachos-ra’ah] with [evil-aku-ak’an-huai..-chachos-ra’ah], when they split, in the course of time, when scheming. The [Good-Tengoku-Tok-Tian=-Oiranos-Tov] that builds with itself, is greater than [evil-aku-ak’an-huai..-chachos-ra’ah] that destroys itself, even as it destroys the [Good-Tengoku-Tok-Tian=-Oiranos-Tov], because the [Good-Tengoku-Tok-Tian=-Oiranos-Tov], can see the [evil-aku-ak’an-huai..-chachos-ra’ah] approaching at that time, and forms [unity harmony-Cho=wa-Chohwa-Qia..rong/qia..-enarmonizoo-Akhdoot harmonyah], when wise. [4.3] Individuality with goodness is acceptable. Individuality with evilness is dispersion. Individuality with [Good-Tengoku-Tok-Tian=-Oiranos-Tov] is [Good-Tengoku-Tok-Tian=-Oiranos-Tov], because [Good-Tengoku-Tok-Tian=-Oiranos-Tov] is of the same common heart, when wise. Individuality with [evil-aku-ak’an-huai..-chachos-ra’ah] makes [Good-Tengoku-Tok-Tian=-Oiranos-Tov], because [evil-aku-ak’an-huai..-chachos-ra’ah] is facilitated to be destroyed, in that [evil-aku-ak’an-huai..-chachos-ra’ah] is of its own many split confusion ways, when scheming. [4.4] Good, be wiser than the evil. Observe [evil-aku-ak’an-huai..], but do not become one, with [evil-aku-ak’an-huai..-chachos-ra’ah], do not form [unity harmony-Cho=wa-Chohwa-Qia..rong/qia..-enarmonizoo-Akhdoot harmonyah] with [evil-aku-ak’an-huai..-chachos-ra’ah]. Though, however, the [Good-Tengoku-Tok-Tian=-Oiranos-Tov] can sleep drowsy in ambivalence, allowing [evil-aku-ak’an-huai..-chachos-ra’ah] to decay the [Good-Tengoku-Tok-Tian=-Oiranos-Tov]. So vigilance is required, by the [Good-Tengoku-Tok-Tian=-Oiranos-Tov], in the battle against [evil-Aku-ak’an-huai..-chachos-ra’ah]. [4.5] The dynamics of these differential equations, triumph under man, under a hands-off God. In this way, is [maximized-saidai ni suru-ch’oedaehwahada-shi~ zeng=jia dao.. zui..da..
xian..du..-aeksanoo ston anootato-merav makseemoom], in the time when God respectfully allows [evil-Aku-ak’an-huai..-chachos-ra’ah], in the time when also allowing all [Free will-Kettei suru-Kyo~lsshimhada-Jue/ding..-Khofesh dror lehakhleet], to enter His [Good-Tengoku-Tok-Tian=-Oiranos-Tov] containing the evolving forces destroying the [evil-Aku-ak’an-huai..-chachos-ra’ah], [4.6] Man must be vigilant under a hands-off God. Do not let the illusions of man's laid-out myths and legends then distract you, or make you ambivalent, away from the [Good-Tengoku-Tok-Tian=-Oiranos-Tov], or this world confusion will persist forever, as [evil-Aku-ak’an-huai..-chachos-ra’ah] cannot be forsaken, CREATED AD 2008 05 29 P 1140 [5] Light and darkness. [5.1] Traditions of Man. You hear it often said, that you can't have light without darkness. But if light permeates all things, then there is lighter and darker, and there is no real darkness at all. The failure of light to fully saturate and permeate all things, is the only thing making relative darkness. [5.2] Darkness has in it the light and lightness has in it the dark. Now, for things that merely reflect true light, there is an interesting flip. What is dark - takes the light in and holds on to it, and what is light - repels the light away as fast as possible. So one could honestly say that real white power flows in the darkness. Like the warmth of light in a dark stone, or the magic of human thought, that simply reprocesses the chemical energy from sunlight, inside of the dark mind. It is strange how humans' relative words can be made to change and tumble, across times and spaces, with just a readjustment of perspective. CREATED AD 2008 05 29 P 1150 [6.1] Utopian world potential in imagination. Imagine God creates a world where humans never die. And imagine, if one sins, God gives us all the spirit sense of pain and sickness, but not unto death.
And imagine, if one sees someone in pain and sickness, God gives us all the spirit sense of a hunger to help those ill by feeding them the word, and broadcasts it such that the sicker the person in sin is, the more people gather around that one in congregation, in holy mass, to teach the sick to heal their wounds by feeding them the word of God. And imagine, if the sick are healed, God gives them all the sense of appreciation and happiness, as well as giving those doing the healing the feeling of love and fullness of spirit. But that would be all too easy for God, who loves complexity and confusion. Instead, God gives us a hunger for physical water. God gives us a hunger for physical food. God gives us a hunger for physical power. God gives us a sense of pain, if we are hit by a physical rock. God gives us the sense of revenge, when we see our murdered friends and family. Our senses go on in crooked ways for all humans. All because we are made in the image, the archetype, of God: Genesis 1:26. In a world where we are all on death sentences within this world for our sin nature, imperfection short of God-hood, and some might say Utopia under God is too Utopian, why didn't God make the world even worse, so we are truly closer to God? Some say, if God made the world more Utopian like in [6.1], we wouldn't appreciate good for the evil missing, even though Utopia would have its pains and pleasures in relative shift upwards. If that be the case, and the earth is already a prison planet, where we are all on death sentences within the world for being sinful and imperfect, then why doesn't God make the world a concentration camp? Everyone would be in a prison like in Nuremberg, or Buchenwald, where everyone is kept in horrible conditions for our dirty-rags sinful nature, and we would truly feel closer to God and appreciate the good, for the additional evils of the world, like in a Nazi State.
CREATED AD 2008 05 29 P 1150 [7.1] The finite reality under an all-powerful, all-creating God. It is said by all humans that there is no such thing as a free lunch, and nothing in life is free, and life isn't fair, and that is just the way things are. This is under an "all powerful (omnipotent) God", as preached by man. Even God had to die at the hands of man, because we were such evil workers to God, that He must die, to make us live in John 3:16 ... oh, selfish, selfish humans, toppling the "all powerful God". Even at that, Jesus only paid for the second death of the soul, and not the first death of the body, which has never been paid for. So we are coerced by a God who wants our love freely, by the threat of death and physical pains in life, and accept a mystery of faith, and hope life is true after we die physically. This is because God created a stone so heavy, that he couldn't lift it from the universe in time space in a purely good way, in the form of all of the human souls with free will, that He created. He even asked humans to be perfect in one law over infinite time in Genesis 2:17, when they were not perfect in infinite time like God, because they were not a God themselves; therefore, they were sub-infinite perfect on that law, because they were not God. God couldn't, even wouldn't, create perfection with humans, as it is impossible for God to do so, or at least without teaching us a lesson in His pain, through all of our master-planned physical deaths, including Himself. Therefore, God is not omnipotent, in this age of time and space, because he couldn't create perfection and free will and peaceful harmony and love and life, simultaneously, with human souls ... a limitation of the so-called omnipotent God, who is not able to create something even like a living and a bitter and sweet utopia in [6.1]. [7.3] God cannot create good without creating evil, as all things come from God, and without Him, no thing is created.
He couldn't create good without permitting evil, and he cannot create life without destroying some souls at the end, Revelation 20:15. Why is it that a perfect and powerful God couldn't create souls with 100% yield? We sound like a product with a certain defective percentage worth destroying, despite His infinite power to create. For that matter, what will eternal heaven be like in Revelation 21 without tears, sadness, or evil, when He couldn't create it at the beginning, with all power? CREATED AD 2008 05 29 P 1150 [8.1] God doesn't need your help, God is so much more powerful than you! Preachers in diverse places say: "God doesn't need your help. He is so much more powerful than you, isn't He?" (this is for the sheep of the flock, not the few saints of the church, so the masses of sheep say, "Amen, and Hallelujah!") [8.2] The Church doesn't need its parishioners, and will run on its own. A backhanded critic might say: "When the church degrades, and teaches heresy to the masses, the great numbers of God's main body, the church, doesn't need to feel the need to help God, then, right? Just leave it to those in charge, closer to God, huh?" [8.3] God says for man to exhort one another to goodness, but there must be exceptions to coming together. Writers of the Bible say: paraphrase ("Brothers, exhort one another to the good things of God.") Galatians 4:18, Galatians 6:9, Romans 15:1,6,7, 9, 11, 15. 1 Corinthians 1:10, 1 Thessalonians 3:9, 10, Hebrews 3:13, 10:25 CREATED AD 2008 05 29 A 0815 [9.1] The nature of science. A science theory has observables. One theory is that there is a metaphysical plane of existence, within the realm of the sensations of sense, in the senses of the body. And science theories have observables that support the same hypothesis coherently, in that you can question people of what they sense, and they all have a common observable nature, that is consistent to the theory in its detailed description. [9.2] Human color vision.
For example, the colors that most people are able to see are universally observable in their verbal descriptive responses of color ranges matching a theory that humans can see color. And inside of themselves they see a variety of colors, from grays to pastels to vibrant pure colors in a spectrum of versions. They see reds, oranges, yellows, greens, cyans, blues, violets, and purples, and all people with color vision can report these things. But why red is the color that it appears, in the first place, is a question science seemingly cannot answer at all. If you show a computer with a camera connected to it the color red, all the computer senses is [255,0,0], and green would be [0,255,0], and blue [0,0,255], and white [255,255,255], and black [0,0,0]. Of course, a computer can translate that RGB number into hues with names, saturation with color intensity, and intensity with brightness, like [0,255,255] for red, [120,255,255] for green, [240,255,255] for blue, [---,0,255] for white, and [---,0,0] for black. These can even be further translated into names like [red, intense, bright] for red, [red, weak, bright] for pink, [hueless, neutral, bright] for white, and so forth. But these too are just strings of letters. Where in a computer's sense does it see red as redness, and white as whiteness, and so forth? All of it is numerical for a computer. A computer senses the world in a different transcendent way, from the transcendent way that humans sense the world. [9.3] Human hearing. Sound is another form of real processing, that can be scientifically observed through universal hearing human reports of pitch and qualities, and fits the idea of a transcendent level of processing, from the human reports of sounds' special qualities. Bass sounds deep and mellow, voices are clear and discernible, and violins can be high and sharp. All of this has to do with frequencies of sound waves, but why do sounds sound like they do in the first place?
Science can only say that it is metaphysical, because humans sense it one way, and computers metaphysically sense it a totally different way in numbers. [9.4] Senses in general. Likewise, touch, pain, pleasure, emotions, smell, taste, all have qualities that are what they are on a metaphysical plane, can be scientifically observed, can be duplicated on computers with their own numerical method of metaphysical sensation, and at the same time, science cannot explain their basic nature, any more than science can explain the human soul purported by all the world's religions. Even words and ideas are part of the metaphysical sense of the world. The word chair makes one think, or even picture a chair in their mind. A computer can retrieve the concept and picture of a chair too. All science can say is that they are what they are, but what relationship to reality they have, they cannot answer one iota. Science is mute on this transcendental perception of things. So sensations and thoughts, within this collection of atoms interacting, that we call the human body, or a computer machine, are transcendental and metaphysical. This is the very observation of the separation between matter and mind posited by thinkers such as Descartes centuries ago. You ask a scientist, where is the nature of a sensation or thought in carbon atom A, or neurotransmitter B, or neuron cell C, or brain D, or human E, and they are dumbfounded, other than to say it is separate from matter, but quite observable in the scientific theory. Likewise, they can show a computer the same sensation or thought, and make a printout or display of that very thought, but cannot answer, does the computer see red, or think chair the same way humans do? They are dumb and mute once again, except to say that the computer also metaphysically senses the world in a way where mind or soul or spirit is separate from matter or components or individual.
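The numeric translations described in [9.2], from raw RGB triples into hue, saturation, and intensity, can be reproduced directly; here is a minimal sketch using Python's standard colorsys module, which works in 0-to-1 floats rather than the 0-to-255 integers quoted above:

```python
import colorsys

# Pure red, green, and blue as a camera might deliver them, scaled to 0-1.
colors = {"red": (1.0, 0.0, 0.0), "green": (0.0, 1.0, 0.0), "blue": (0.0, 0.0, 1.0)}

for name, (r, g, b) in colors.items():
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Hue in degrees comes out 0, 120, 240 for red, green, blue, matching
    # the hue numbers quoted in [9.2]; saturation and value are both
    # full (1.0) for pure colors.
    print(name, round(h * 360), s, v)

# White and black: hue is meaningless, only brightness differs.
print(colorsys.rgb_to_hsv(1.0, 1.0, 1.0))  # (0.0, 0.0, 1.0) -> white
print(colorsys.rgb_to_hsv(0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0) -> black
```

To the program, of course, the output is still only numbers and strings, which is exactly the point being made in [9.2].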
And that the computer, may or may not see things the same way, as the human, or an animal, or dare we say, God? CREATED AD 2008 05 29 A 0910 [10.1] A computer with free will and soul is something that I have actually made. Take a computer with a program that makes decisions and learns from its experiences in the world. Run the computer program and expose it to the same sequence of experience, and it will always do the same thing, and end up at the same end state of learning and decisions among choices. Like many Christians or people in general say, from "the traditions of men", a computer only does what it is programmed to do, and I agree with this example. But now add a random number generator that slightly influences its decisions through time. Run that program, and expose it to the same sequence of experience, and it will always do the same thing, and end up at the same end state, though it will be very different in its internal character, from the random-number-less, but otherwise identical, decision and learning program. Once again, we are stuck, that the computer has no free will, because it does exactly what it is programmed to do. Finally, add a digital camera to the system, and read the noise of the photons of light from some scene, and feed that into the random number generator algorithm. Now when you run the same program over and over to the same sequence of experience, the computer will end up in totally different internal learning configurations. They are like identical twins who are identical in their original code and experience, but they have their own unique internal identities. How does the last model of learning computer suddenly acquire an individuality with free will, and dare we say, a soul, with its own internal character, while learning? The computer is connected to light.
According to Heisenberg's Uncertainty Principle, entities like atoms, electrons, and especially photons, are observed to have a completely unpredictable probabilistic path, in time and space. No human on earth can know where exactly a photon of light will go, from one thing, to another. The digital camera senses photons of light, that are unpredictable by all human physics, and takes these quantum fluctuations of light, and magnifies them to the macroscopic scale of the universe, seen in the computer's processing body and its effects on the universe. Literally, the computer is connected to the Light of God. As such, it has a will that is unpredictable by every human on earth, anywhere, by any physics used. Even science, by attempting to intercept photons, to know what the computer's camera sees, will simply rearrange the photons, into a new unpredictable set of paths, which will send the computer into another new unpredictable path. That is the old "the observer affects the observed" effect, especially if science tries to observe the computer's actions' causes. Only a God outside of time, who can know the probabilistic paths of all things, unlike us poor material beings, can know what the computer's free will, will do. So from the frame of reference of the universe and man, the computer has free-will, and an internal learning character, and decisions, that are of its own doing, that is metaphysical and free from man's knowledge. Much like the free-will soul of man, reported by the world's religions. So the computer's activity leaves the realm of observable science, and enters the realm of probabilistic science. Likewise, the computer's learning and decision-among-choices algorithms in feedback, give the computer an inner character that is metaphysical, where the mind, the soul, or spirit, is separate and unquestionable by science in a repeatable way, unlike the material the computer is made of, which is wholly built on science's data of materials.
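The three-stage thought experiment of [10.1], the same learning program run with no noise, with seeded pseudorandomness, and then with physically sourced noise, can be sketched in code. This is a toy illustration under assumptions, not the actual program described above; os.urandom (operating-system entropy) stands in for the digital camera's photon noise:

```python
import os
import random

def learn(choices, steps, noise):
    """A toy learner: repeatedly pick among choices, nudged by a noise
    source, accumulating an internal 'character' (a tally of picks)."""
    character = {c: 0 for c in choices}
    for _ in range(steps):
        pick = choices[noise() % len(choices)]  # noise nudges each decision
        character[pick] += 1
    return character

def seeded_noise(seed):
    """Pseudorandom nudges: fully determined by the seed."""
    rng = random.Random(seed)
    return lambda: rng.getrandbits(16)

def physical_noise():
    """Nudges from OS entropy, a stand-in for camera photon noise."""
    return int.from_bytes(os.urandom(2), "big")

# With a fixed pseudorandom seed, replaying the same 'experience' always
# reaches the same end state: the computer only does what it is programmed to.
run_a = learn(["A", "B", "C"], 1000, seeded_noise(42))
run_b = learn(["A", "B", "C"], 1000, seeded_noise(42))
assert run_a == run_b

# With physically sourced entropy, replaying the same experience almost
# surely ends in a different internal configuration each run, like
# identical twins with their own internal identities.
run_c = learn(["A", "B", "C"], 1000, physical_noise)
run_d = learn(["A", "B", "C"], 1000, physical_noise)
print(run_c == run_d)  # almost always False
```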
Every observation is unique, so science cannot truly answer its nature, like UFO reports that are all one-offs. CREATED AD 2008 05 29 A 0910 [11.1] If God Knows all things Truly, then we are not free from His point of view. If God knows all things including the entire future in every detail, then humans cannot change their path from His Vision. We would think, inside of time and space, that we were making numerous decisions from a set of more numerous choices through time. In fact, our body and brain would really go through all of the processes of making decisions and performing actions. But say that God Saw we were going to decide to do A, B, C, D over a short period of time, and we actually decided to do A, B, C, E, then God would be *surprised* because our actions didn't match His vision. So God's Vision of the future is not something God can really Know for Truth. But, we would have free will from God's Point of View, though we will always be able to make God's Vision lie to God, because God cannot see all things in the future. This cannot be, because what God Knows is truth through what God Sees, or else God has an imperfect sense of Omniscience. So if God's Vision is Sovereign Truth, and God Sees we are destined to perish, "If I perish I perish." and there is nothing I can do to change it. If God's Vision Sees someone else is destined to Salvation, then if they are saved they are saved, and there is nothing they can do to change that. It is like those movies where someone sees the future like with God's own Vision, and no matter what the characters do, they cannot avoid the reality of the Truth of the Sovereign Truth Vision, no matter what they do. The universe freezes free will that deviates from that destination Truth that is Final, by God. [11.2] Analogy of Total Omnipotence removing free will from the perspective of the Omniscient. Imagine you are a computer programmer who programs a computer machine and world, to behave a specific way.
You know ahead of time, exactly what the computer machine program will do when it operates in the future. The computer machine may be quite sophisticated, and believe it is making its own decision evaluations among choices, and has beliefs about its contexts in time and space. But whatever the computer machine does in time, all of its actions are exactly known by the programmer, before the computer machine is even started. From the programmer's perspective, the free-will the computer machine believes that it has, in its own process, is all an illusion, because from the first action, to the last action, the programmer will not be surprised by any decision the *robot* makes. [11.3] God must voluntarily limit His Total Omnipotence, in His Agency of Total Omniscience including the future, in order to give humans free will. In this age of the universe, an All Powerful God with Total Omnipotence in His True Potential, must actually create a universe where His Vision is actually darkened, or incomplete to all of the details of our activities in the future. God could look into the future in detail if God wanted to, but if God did that, and His Vision is Sovereign Truth, then what God's Vision touches, would *freeze* away our free-will, turning us into robots. So God must have created a darkened universe, dark in His Knowledge of it. God intentionally doesn't know what it will do, and averts His eyes from the universe in detail, much like Midas's Touch was held back from the world. For everything that God Sees becomes Sovereign Truth, and everything Midas Touches becomes Gold. By setting up a darkened and forgotten-path universe in God's Mind, God opens up that darkened space to our human free-will, and our free-will doesn't make God's Vision a lie, because God simply hasn't Seen the future in any complete detail. So we are free in the universe from all perspectives. Our destiny is ours, and not constrained by what God Knows is Going to Happen, because God doesn't Know.
And God does it by choice, to open up this darkened space from His Mind's ability to Control or Know. [11.4] Alternative removal of traditions of man. Another alternative that can be logical about God, is that the human-claimed Total Omniscience of God including the future, is literally only a Perfect Knowledge of all things past, and present, but not of the future. In this case, the Total Omniscience of God that includes the future, is only traditions of man's suppositions in ignorance, and not True of God's Sovereignty. This way, God can have a very exacting plan for the future, and always keep things in line with His Plan, but God doesn't, in Sovereign Truth, Know what the future will be exactly, but His Total Omnipotence Powers allow God to herd humanity toward His Plan, and we all have True free will in all perspectives, and are not computer machine robots, who are utterly programmed to the end of time, and cannot deviate from destiny. CREATED AD 2008 07 17 P 0900 [12] Abiogenesis chemical evolution. [12.1] Background. A natural combinatorial chemistry feedback, in an appropriate open-system ocean, with inherent natural reactions and hypercycle catalytic reactions, can, alone, suffice to create an increasing-complexity chemistry that eventually intersects biochemistry, as evidenced by modern life. And an early earth ocean can have a greater amount of dissolved organics and minerals, with no presumed life forms processing the chemicals into their own makeups. It would all be dissolved in the oceans, and wash off the early continents in deltas, lake beds, or tidal mud flats, in evaporative concentrations. [12.2] Combinatorial chemistry 1.
Now combinatorial chemistry can be generalized to parallel numbers chemistry that combinatorially explores all feasible interactions of all chemical species available in a chemical environment, like an early earth ocean environment with bays, tides, hydrothermal vents, sunlight with or without UV, dark areas deep in the water or under rocks, for protection from UV and sunlight, lightning, pH variation, evaporative concentration, and currents to mix a natural initially inorganic chemical soup with hundreds of minerals, metal ions, etc. in a preorganic molecule soup. [12.3] Hypercycle catalytic chemistry. Hypercycle catalytic reactions are subsets of the whole combinatorial chemistry reaction matrix, where A helps catalyze B helps catalyze C helps catalyze A, from other present chemical species, as an example of a short hypercycle loop of three nodes. Hypercycle catalytic reactions can be loops, and networks, embedded within a normal combinatorial chemistry matrix. [12.4.a] Combinatorial chemistry 2. Going back to combinatorial chemistry, let's say in the ocean there's, to begin with, 1000 species of chemicals and chemical-inducing factors, S, such as chemicals, photons of light from infrared to UV, radioactive particles in the half-life-rich early earth materials from its recent supernova formation, different-energy free electrons from lightning, mixing currents, and heating and cooling around hydrothermal vents. There is an approximate top-level pseudocode (which can be glossed over to reach the final math characteristics after the pseudocode) of a differential equation that shows the equilibrium balance of reactions: InitialSpecies = S; InitialAverageConcentration = 0; for(i = 1 to S) InitialAverageConcentration += Concentration{i} / S; for(s = 1 to S) //s = how many species take part in a reaction __Reaction = array{s elements}; __for(s1 = 1 to S) ____for(s2 = s1+1 to S) ______for(s3 = s2+1 to S) ... //nest to depth s; each loop starts one past the previous index, so indices are strictly increasing and no species repeats ______________for(ss = (previous index)+1 to S) __________________//calculate the net chemical species change for this __________________//reaction set, over a unit of differential time __________________NewSpecies{S' set} = F1(Reaction{s1..ss}); __________________NewConcentration{S + S' set} = F2(Reaction{s1..ss}); FinalSpecies = S + S'; FinalAverageConcentration = 0; for(i = 1 to S + S') __FinalAverageConcentration += Concentration{i} / (S + S'); [12.4.c] Linguistically, this can be interpreted as taking 1 to S chemicals at a time, in every combination, to observe reaction rates of the current S chemical species, s at a time, to see the effect on all S, and any possible new S' chemical species generated that did not previously exist. For example, for two species taken from a given 1000 species, S, we see there are (1/2)*(S^2 - S), or 499,500 Reaction{s1,s2} nodes, with positive or negative reaction rates for existing species S, or new species of S'. That is, say, S1 + S2 might break down S1, catalytically by S2, into S3 and S4, and S2 remains untouched. S1 has a negative reaction rate as it is broken down to trace amounts, while S3 and S4 have positive reaction rates, as S1 is turned into S3 and S4, in the presence of S2. On the other hand, say, S1 + S2 helps produce a totally new chemical outside of S, of S'1, by S1 and S2 combining to form S'1. S1 and S2 have negative reaction rates, being consumed, as the new S'1 has positive reaction rates. These reaction rates also change in time, as the concentrations used by the F1(Reaction{s set}) and F2(Reaction{s set}) calculations increase or decrease accordingly. [12.4.d] At the same time, there are more reactions to analyze, continuing with three chemicals in a Reaction{s1,s2,s3} analysis, where there are S(S-1)(S-2)/6, or about 166 million, reaction nodes.
So of these millions of Reaction{s1,s2,s3}, many will have no effects, some will break down or build up products already existing, and others will make new chemical species that never existed before, from the species that exist in the ocean to begin with, S. [12.4.e] Mathematically analyzing reactant combinations, from s = 1 for single-molecule auto-reactions, to s = S, for an S-species reaction, in total there are: ReactionNodes = SUM( s=1 to S: of: Factorial(S) / (Factorial(s)*Factorial(S-s)) ), or, equivalently, ReactionNodes = 2^S - 1, so 2^1000 - 1 ~= 10^301 reaction nodes for 1000 chemical species S, where (1) the majority of non-reactions change nothing, (2) some break down species, (3) some build up species, and (4) some generate new chemical species. So starting with 1000 chemical species, with, say, 1000 new chemical species S' formed out of those 10^301 reaction nodes (a conservative rate of about 1 in 10^298 yielding an effective stable new chemical species), such that in a year, there can be 2000 species of flourishing chemicals, leading to 10^602 reaction nodes to analyze for all potential reactions at each node, generating, say, 2000 new species of chemicals (at an even more conservative rate of new chemical species formation). So then after another year there's 4000 chemical species at some concentration, with 10^1204 reaction nodes, generating, say, 4000 new species (even more conservative relative to the combinations available), added into next year's variation.
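The counts in [12.4.c] through [12.4.e] can be checked numerically; a small Python sketch (math.comb is the binomial coefficient C(S, s)):

```python
from math import comb, log10

def reaction_nodes(S):
    """All non-empty subsets of S species: SUM over s of C(S, s) = 2^S - 1."""
    return sum(comb(S, s) for s in range(1, S + 1))

S = 1000
assert reaction_nodes(S) == 2**S - 1   # binomial sum matches the closed form
assert comb(S, 2) == 499_500           # two-species nodes, as in [12.4.c]
print(comb(S, 3))                      # three-species nodes, roughly 1.66e8
print(round(log10(2**S - 1)))          # prints 301: 2^1000 - 1 ~= 10^301 nodes

# The doubling feedback: 1000 -> 2000 -> 4000 species gives node counts
# of roughly 10^301 -> 10^602 -> 10^1204.
for species in (1000, 2000, 4000):
    print(species, round(log10(2**species)))
```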
[12.5] So one can see an exponential feedback of chemical species, some more robust than others, in numbers, durability, variation, reaction rate selection forces, hypercycle catalytic reproduction, and reactivity, from 1000 to 2000 to 4000 and so on, until there is a low but significant saturation of millions of reactive catalytic various chemical species in a gallon of ocean, all competing for the ocean's limited chemical resources, and giving rise to potential natural metabolic pathways absorbing glucose and photons of light, in complex reaction sets, paths, cycles, and networks, that support reproducing hypercycle networks of catalytic chemicals, all inherent and naturally contained, in the combinatorial chemistry feedback matrix growing in time. Presumably, something akin to photosynthesis must have arisen early to convert the atmosphere to an oxygen-rich one, as part of sugar production. [12.6.a] A Creationist claim would have to show that the 2^S reaction nodes, in an S-chemical-species example ocean, would permit no (zero) new chemical species to form and thus remain in static chemical equilibrium. But given the massiveness of potential in 10^301 reaction combinations in a mixing ocean of a combinatorial chemistry size S in a feedback, if it shows even a very minor positive rate of new chemical species formation, then such a non-zero feedback would provide a numerical backbone to natural blind chemical evolution turning into life, as chemical species reach continually higher levels of complexity and variety, with competition and selection forces, in the combinatorial chemistry in feedback, from the very beginning of chemistry, in robust reactive new molecules, contained in chained catalytic reactions, and with a form of digital chemistry, contained in the discrete chemical species, and in the discrete codes of polymer proteins, RNA, and DNA nucleotide chains, that are eventually intersected by combinatorial chemistry, with a proven positive dS/dt.
[12.6.b] Even just 100 chemicals in an initial energy-open-system ocean would allow 2^100, or ~10^30, possible reactions, so even small chemical soups start with an inherent potential for new-chemical-species feedback growth of complexity, without external guidance being an absolute necessity. BLOG CROSSREFERENCE [3] Evolution design theory. AD 2008 05 29 A 0750 Lipid and early combinatorial chemistry protocell theory: Hypercycle chemistry: Combinatorial chemistry: CREATED AD 2008 07 17 P 0900 [13] Chinese Han and Japanese Kanji studies. [13.1] Introduction. And my studies of Chinese Han and Japanese Kanji are interesting and challenging, as they are hierarchical languages in meaning, where a few strokes in a combination called a radical have some abstract meaning; then radicals are themselves combined to make an ideogram square; and then those ideograms are often combined to make complex new words. [13.2] Example. [13.2.a] For example, take the English "computer". [13.2.b] This can be three Han-Kanji ideograms: [計 | 算 | 機] [13.2.c] These are roughly translated, per ideogram, as: [計 "idea" | 算 "calculation" | 機 "machine"] [13.2.d] These are themselves made of radical segments: [計 [accent bars and box | cross] | 算 [double lambdas with bars | triple box | two legs with bar] 機 [cross with two dropping branches | E-looking mark | E-looking mark | bar with swooping right descending hook crossed by left descending slash and accent | left descending slash crossing the bar and side branch on right]] [13.2.e] The radicals being roughly translated as: [計 [speech | to-complete] | 算 [bamboo | vision | presenting] | 機 [tree | tiny | tiny | weapon | divines]] [13.2.f] Now in English words, reflecting the cultures, it reads roughly: "an object for 計 [wordings in completion, reminiscent of ideas] which are acted with 算 [bamboo abacus examination and presentation, reminiscent of calculating] in the form of an 機 ["wooden" object with many tiny parts like weapon construction which has operations,
reminiscent of machine and performs (divines) things]" [13.2.g] Seeing the ideograms, one simply thinks "computer" when seeing this hierarchical [計 "idea" | 算 "calculation" | 機 "machine"] tri-ideogram chain. [13.2.h] Their whole language is couched in such metaphor and abstract thinking, and hierarchical dynamic thinking, with a heavy burden of ancient concepts brought into the modern world. Like computers could be made of tiny wooden machine parts abstract-concretely, like Babbage's difference engine machine of metal gears, or Jacquard's card loom of wood and wires and cards, but are so much easier to make today in silicon and doped circuits on the silicon. CREATED AD 2008 07 17 P 0930 [14] Quantum physics self-question. [14.0.0] What are the best scientific theories on why and how, in Quantum Physics (QP), there is a real probabilistic wave function collapse during the measurement event? [14.0.1] I would like to exclude many-worlds theories, as they require too much faith in things unseen, when a singular continuum Quantum or String Theory can be presented. [14.0.2] Discussion of hidden variable theories is acceptable, though it undermines the Copenhagen Interpretation (CI) of probability wave functions, with a hidden epicycle field that is inaccessible, though identical to QP CI. [14.0.3] Secondary questions in {14.6.x}, below. [14.0.4] Reference material in {14.1.x}, {14.2.x}, {14.3.x}, {14.4.x}, {14.5.x}, {14.8.x}. [14.0.5] Some potential experiments to help clarify some aspects in {14.7.x}. [14.1.0] QP MEASUREMENT makes probabilistic wave functions COLLAPSE, at a velocity infinitely faster than light-speed. [14.2.0] QP UNMEASUREMENT makes probabilistic wave functions UNCOLLAPSE, at a velocity infinitely faster than light-speed.
[14.2.1] This is seen in Quantum Eraser experiments, where such experiments show that [14.2.1.a] measuring a probabilistic wave function can destroy an interference pattern later in the experiment box, while [14.2.1.b] measuring and then erasing the bit in memory in mid-flight UNCOLLAPSES the wave function, fully restoring the interference pattern, just as if it were not measured at all in mid-flight. [14.2.2] Although probability wave function collapse / uncollapse occur, no information is communicated, but there is an alteration of collapse / uncollapse probabilities that is instantaneous. [14.3.0] Measurement is not a material process. [14.3.1] It appears to be best described as an epiphenomenon defined by the configurations of macroscopic matter, creating a measurement-probability-field that probabilistically defines when, where, and how a probability wave function COLLAPSES or UNCOLLAPSES. [14.3.2] If there were no measurement potential field, then the epiphenomenon would not exist, and all matter would slowly decohere into probability wave functions all taking every possible path at once, even around decision / bifurcation interactions. [14.3.3] But it does exist, and measurements from the macroscale keep the universe mostly focused down to a small microscale for billions of years, with its macroscopic "inertia of configuration existence". [14.4.0] John Wheeler, who worked with Einstein and Bohr, was a proponent of there being an epiphenomenal field to all macroscopic existence, known as the "it from bit" idea, that David Chalmers, among others, also teaches of currently. [14.4.1] It can make claims, at its theoretical limits, of there being a soul, constructed out of an epiphenomenal measurement matter, cycling and circulating in systems-feedback loops of the human material mind structure.
[14.5.0] Richard Dawkins says on nonmaterial epiphenomena, in "The God Delusion", pages 34-35: "[14.5.1] What most atheists do believe is that although there is only one kind of stuff in the universe and it is physical, out of this stuff comes [14.5.1.a]{minds, beauty, emotions, moral values} ... [14.5.2] An atheist in this sense of philosophical naturalist is somebody who believes [14.5.2.a]{there is nothing beyond the natural, physical world}, [14.5.2.b]{no supernatural creative intelligence lurking behind the observable universe}, [14.5.2.c]{no soul that outlasts the body}". [14.6.0] What are the proofs of even deriving a concept like [14.5.1.a]:"beauty, emotions, moral values" from atoms and physics equations, that is real, and not an abstract epiphenomenal psychological description of abstract life? [14.6.1] There appears to be no physics foundation to run [14.5.1.a] to the ground state of arising from atoms and physics, in a philosophical reductionist paradigm. [14.6.2] In fact, {14.5.1.a} seems to require a philosophically holistic paradigm from the systemic macroscopic scale, and that opens up the possibility of there being a provable soul, opposing {14.5.2.c} that there is no soul. For if there is any soul epiphenomenon, made from the {14.3.x} {14.4.x} measurement epiphenomena field arising in macroscopic systems of macromatter, then {14.5.2.c}, denying the soul, says that atheist Science will never try to save human souls, when no one with that mindset ever wants to see the soul as a structural potential that might be savable in a temporal-spatially-living material base foundation. And so how will "Ghost in the Shell" technology come about through a dead science, if atheist Scientists never take the first step, and religions fold their hands waiting for salvation from above, and atheist Science adheres to a flat-fact denial statement *proving* there is no soul without proof, to be taken on science's faith, reflecting their atheist-Scientist thinking?
Additional clarification of my initial question: I believe I can enhance the formulation of my question with an example. [] Mundane matter of local character can have holistic emergent properties, like consciousness created by the systems of matter of processing and action control. [] But on top of this is an instantaneously correlated set of measurement probability influences, built from simultaneous wavefunction collapses throughout all of the feedback loops and structures of measurement forms, that happen to lie parallel to the macroscopic matter. [] This produces an infinitely-faster-than-light measurement-system-self that co-influences the mundane matter wave function collapses, as the mundane matter affects the infinitely-faster-than-light measurement-system-self. [] So there is a mundane matter self, and an instantaneous measurement system self, corresponding to each other but of different character than a pure mundane-matter-self emergent consciousness. [] For example in mundane matter, of a mechanical equivalent, one can show the emergent property issue with a computer containing advanced AI, and a camera, that can register (11111111,00000000,00000000), which in memory of past observations is hue (00111100), and represents the string "'R' 'E' 'D'", and the computer can report "'I' ' ' 'S' 'E' 'E' ' ' 'R' 'E' 'D'". [] But does the mundane computer *see* "REDNESS" like a human? [] But if you also include a holistic (whole) quantum measurement layer computer-AI-self, that is parallel to the computer's mundane matter, because the computer is made of both *instantaneous* matter wave functions and matter measurement structures, then perhaps the computer also may *literally* experience "REDNESS" and "LIGHT" like a human does. [] For the mundane matter simply has voltages in a black sealed computer chip, just like humans have neurotransmitters in a black brain case's gray matter.
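The register-versus-see distinction in the example above can be made concrete with a toy sketch (the threshold values and function name are arbitrary assumptions of mine): a program maps the camera's byte triple to the label "RED" purely syntactically, with no claim of experiencing redness:

```python
# The camera reading from the example: (11111111, 00000000, 00000000),
# i.e. maximum red byte, zero green and blue.
reading = (0b11111111, 0b00000000, 0b00000000)

def label(rgb):
    # Purely syntactic classification -- no "REDNESS" qualia implied.
    r, g, b = rgb
    return "RED" if r > 200 and g < 50 and b < 50 else "OTHER"

print("I SEE " + label(reading))  # -> I SEE RED
```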
[] How does a computer or a human literally perceive light, like I see light when I open my eyes? [] You look inside of a computer, and all you can see are chips with sense and thought voltages that are invisible to human eyes. [] You look inside of a human, and all you see is a brain with sense and thought chemicals that are invisible to the human eyes. [] Yet when you *are* a human, and perhaps *are* a computer, there is that secondary holistic-whole self that literally sees colors and light, in a quantum measurement *instantaneous* self-ness built from the structured quantum measurement structures that lie parallel to the mundane matter-energy, and co-affect the mundane matter-energy as the mundane matter-energy co-affects the quantum measurement self. [] One example of a simple "holistic systemic measurement self" test would be to build a ring of chained, synchronized, photon EPR experiments. [] One prime EPR photon polarization measurer leg in the ring has a fixed polarization, and every other EPR end and leg uses an algorithm for polarization measurement selection based on the current photon's polarization measurement and the neighboring EPR leg end, e.g., but not limited to, leg X right [xor] leg X+1 left. [] The rough idea to be tested is: are there any instantaneous systematic mathematical issues, in a properly gated ring, in the probabilities measured at all of the EPR legs? [] The no-communication theorem dictates the probabilities will be completely random, aligned with the restrictions of the wavefunctions caused by the prime locked EPR and the algorithms of every other EPR leg's coincident polarization measurers / calculators. [] But if there are any chaos patterns or non-stationary probabilities, then something additional is occurring in the synchronous instantaneous wavefunction collapses of the synchronized EPRs in the ring.
[] If there are no measurable non-stationary probabilities on all EPR legs, for a properly synchronized EPR ring with algorithms, then there is no "holistic systemic measurement self" present in that experiment, and the idea of a "holistic systemic measurement self" *may* be flawed. [] However, given the nature of using a loop, or modifications to the above experiment, to create even a full ring feedback: can all instantaneous wavefunction collapses from polarization measurements preserve simultaneous perfect stochastic patterns according to the no-communication theorem, or is there a spatial mathematical limitation to mutual perfect stochastic EPR in all possible ring algorithm formations? [] That is, is there any special meta-level mathematics required to assure mutual EPR-ring pure stochastic randomness at synchronized instantaneous wavefunction collapses in all forms of rings, to assure there is no coherent or semi-coherent holistic self? [] One EPR, I can believe, remains perfectly no-communication stochastic, yet instantaneously correlated; but a ring raises instantaneous mathematical QP-field-computation issues that lie deep in the foundation of how the quantum physics Schrodinger Equation evolves in time and at synchronized instantaneous wave function collapses, in EPRs. [] The experiment might be affected by ring size assuring a full ring correlation, but examining only the most properly time-gated coincident photons that happen to simultaneously work in all EPRs in the ring, and ultra-fast algorithm computers on the ring, would assure the right experiment statistics can be collected, for analysis of the "holistic systemic measurement self" versus "pure stochastic polarization measurements", indicating either a meta-math, or a natural Copenhagen Interpretation math of how pure synchronized randomness is preserved.
[] An alternate configuration, among all of the configurations that can be explored in an EPR ring, is to daisy-chain the calculating EPR coincident photon measurements in the ring, from the key "locked" EPR polarizer, around the ring, and finally back to the neighboring EPR ring leg's final polarizer, and have that computationally affect the polarization of the key "locked" EPR polarizer, and repeat the experiment with synchronous coincidence control. [] Then, collect data from all single, and multiple, instances of simultaneous gated whole-holistic measurement sequences, to analyze for pure random polarizations according to the polarizer configurations, or non-stationary randomness, or chaotic polarization data correlations around the EPR ring. [] If all of the EPR polarization measurers show pure randomness, random-appropriate according to the polarization configuration, what does that say about the self-consistency of the nearly instantaneous wavefunction collapse system of the ring? [] If all of the EPRs show non-stationary randomness, or chaos, what does that say about the self-consistency of the nearly instantaneous wavefunction collapse system of the ring? [] To reiterate: one EPR leg is easy to understand keeping pure randomness according to the polarization configurations of both ends, but a mathematical-physical ring might show interesting physics of faster-than-light mutually constraining measurements, of randomness or near-randomness, that communicate no information, or some mathematical distortion caused by QP having to "naturally calculate" a ring randomness assurance in rapid succession, many times faster than light, limited by the EPR calculation and polarization controllers, and may answer fundamental questions regarding a supervenient-holistic-quantum-self.
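As a baseline for any such ring analysis, the no-communication property of a single EPR leg can be illustrated with a toy Monte Carlo sketch. This is a classical stand-in for the standard quantum cos^2 coincidence statistics, not a simulation of an actual ring, and the function name is mine:

```python
import math
import random

def measure_pair(theta_a, theta_b, rng):
    # Toy model of one entangled photon pair: A's outcome is a fair
    # coin; B matches A with probability cos^2(theta_a - theta_b),
    # mimicking the standard quantum coincidence statistics.
    a = rng.random() < 0.5
    same = rng.random() < math.cos(theta_a - theta_b) ** 2
    return a, (a if same else not a)

rng = random.Random(0)
N = 100_000
# However B's polarizer is set, A's marginal stays ~50/50 --
# the no-communication theorem in miniature.
for theta_b in (0.0, 0.7, 1.4):
    frac_a = sum(measure_pair(0.3, theta_b, rng)[0] for _ in range(N)) / N
    print(f"theta_b = {theta_b}: A 'up' fraction = {frac_a:.3f}")
```

The question raised in the text is whether chaining many such legs into a feedback ring could ever disturb these flat marginals; in this toy model, by construction, it cannot.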
[] An extreme case is to select all EPR polarizers by last-moment synchronized leg-to-leg correlated full-ring simultaneous and coincident measurements, and check the polarization and randomness of all legs after the fact, for full ring programs. [] Now, gut instinct and analysis seem to show each EPR leg should be a completely independent measurement about each photon pair, but is this a completely true conclusion? [] This would also indicate that nonlinear nonunitary matter exists separately from the quantum physics Schrodinger equation, dividing QP space into cells of quantum physics behavior, much like the body is divided into cells. [] That is, non-unitary macro-matter is as real as the QP probability wave function, which is divided into a network and/or scintillation of collapsed states, and divides the QP probability wave function into small domains of actual QP probability wave function unitary linear evolution. [] That means there is a transcendental macroscopic-measurement-configuration-information substance filling the macroscopic universe in ways paralleling the macroscopic classical matter, with a foam of small QP probability wave functions. [] In vacuum, and the spaces of large measurement systems in the macro-scale, the foam of quantum physics probability wave functions grows to macroscopic scales of unitary linear evolution. [] Is the same true if each EPR polarization measurer were replaced by elements that re-emit photon pairs correlated to the photons coming from the two legs, back down the legs to the first photon generators, which can again re-emit dual correlated photons, and be measured at some point in the ping-ponging for proper ring polarization measurements; or does the entire ring simply become a large *instantaneously* inter-correlated linear evolution of an interactive superposition of states? [14.8.0] Quantum Mechanics: From Basic Principles to Numerical Methods and Applications, L.
Marchildon, (C) Springer-Verlag Berlin Heidelberg 2002, page 513+, [[LRD added / conservatively altered]]. I would personally read the passage [[modified by LoneRubberDragon]] as follows (with the original text reproduced below): "[] The measurement problem was recognized early, by Von Neumann [[*]] among others. [] He realized that unitary evolution leads to superposition of macroscopically distinct states [[; think Neo moving 4 directions at once, in The Matrix's "macroscopic world"]]. [] Furthermore, he saw that there is no use to introduce a second apparatus to measure the value of the first pointer. [] Indeed, inasmuch as the evolution of the total system (microobject, first, and second[[ly the]] apparatus) is unitary [[as a whole]], [[where]] the second apparatus would *also* [[in linearly Schrodinger evolving]] end up in a superposition of macroscopically distinct states. [] The solution proposed by Von Neumann essentially consists in postulating that the Schrodinger Equation no longer holds at the time of measurement. [] But why is this precisely? [] The abrupt transition from a linear[[ly evolving]] combination [[in a whole system in]]to one of its components is known as the *collapse of the state vector* [[wave function onto a projection]]. [] Von Neumann's hypothesis is ingenious. [] Its success is largely independent of where the border between microobject and [[macroobject]] measurement apparatus, or the border between apparatus and conscious subject, lies. [] The process represented by [[wavefunction nonlinear nonunitary instantaneous collapse]], however, seems closer to a requirement of perception [[by abstract qualia of emergent supervenient informational macrosystems]] than to a physical mechanism. [] It thus appears to reinstate the mind-body dualism that natural sciences had largely eliminated [[by their logical **proof** of there obviously being no real "soul"]].
[] The breakdown of the Schrodinger Equation and unitary evolution of the state vector[['s probability wave function]] occurs, according to Von Neumann, upon intervention of the conscious subject. [] In a similar analysis, [[one physicist]] associates this discontinuity more generally with all [[macroscopic matter in motion *]] processes. [] He believes that [[all time-space macroscopic entities *]] should be described by [[linear unitary evolving Schrodinger probability wave function]] equations [[that are broken down by the supervenient-informational-macrostructures in time-space, in all macroscopic matter in motion, causing the nonlinear abrupt instantaneous shift by a holistic-measurement-perception-self]], which entails a *nonunitary* evolution of the state vector. [[*]] Original text: "[] The measurement problem was recognized early, by Von Neumann [[*]] among others. [] He realized that unitary evolution leads to superposition of macroscopically distinct states. [] Furthermore, he saw that there is no use to introduce a second apparatus to measure the value of the first pointer. [] Indeed, inasmuch as the evolution of the total system (microobject, first, and second apparatus) is unitary, the second apparatus would *also* end up in a superposition of macroscopically distinct states. [] The solution proposed by Von Neumann essentially consists in postulating that the Schrodinger Equation no longer holds at the time of measurement. [] But why is this precisely? [] The abrupt transition from a linear combination to one of its components is known as the *collapse of the state vector*. [] Von Neumann's hypothesis is ingenious. [] Its success is largely independent of where the border between microobject and measurement apparatus, or the border between apparatus and conscious subject, lies. [] The process represented by [[the nonlinear wavefunction collapse onto a projection]], however, seems closer to a requirement of perception than to a physical mechanism.
[] It thus appears to reinstate the mind-body dualism that natural sciences had largely eliminated. [] The breakdown of the Schrodinger Equation and unitary evolution of the state vector occurs, according to Von Neumann, upon intervention of the conscious subject. [] In a similar analysis, Wigner associates this discontinuity more generally with all living processes. [] He believes that living processes should be described by nonlinear equations, which entails a *nonunitary* evolution of the state vector. [[*]] [14.9] Musing on macroscopic discreteness in a so-called universal unitary linear probability wave function evolution, that is mostly collapsed. One observation from the measurement issue alone would indicate that matter is discrete, as you don't see your friends in quantum flux states, but all exist on one macropath. It would be easy to detect a measurement magnified to macroscale, such that your 1 friend splits into 2, 3, 5, 8, 13, 21 ... paths in time without any wave function collapse. Another observation, albeit difficult, is that a system of macroscopic feedback, measurement, and interaction is one that sees in color. A computer sees (255,0,0) looking at red, but can you derive a computer sensing the hue RED like a human? Though it is a hard analogy, as computers have digital consciousness, while humans have analog consciousness, and the structure of measurement, feedback, and macrosystems would impact how perception "looks" from the inside of a dark brain matter seeing light, or a dark transistor seeing light. But speed a computer up 100,000 times, and the time compression of measurement events in its cybernetic circuits might make the computer behave nonlinearly with the shorter clock, and sense color in a way much more analogous to humans of massively parallel analog computation measurements.
So I would guess there is a Heisenberg relation to soul and/or consciousness, as speeds and scale-of-structural-measurements embedded in the computation structures heighten the nonlinear effects on probabilities, by structure potential on microscopic matter measurements. The computer crossing such an increasing speed transition might go from stating "[255,0,0]", to stating "I see red hue data", to stating "my God, I see colors! They aren't numbers anymore. What is this?", which would show a nonlinear probabilistic effect on the computer program, possibly from the very root of QP measurement nonlinearity. Without a structural component to hierarchical emergent properties, consciousness might be like a slow-clocked robot that is barely considered alive, as it doesn't perceive the world so much as calculate, which is an abstraction idea reducible to molecules moving in physics, without color, pleasure, pain, joy, sadness, wakefulness, dreams, etc. [14.10] Musing on: what if science could save a soul, does it do it right, and how does it prove that? If science doesn't know how to save a soul, a soul will never get saved for those who want that "product". So they look to God because science is weak, and even denies the soul, as Dawkins does. Chinese emperors bought many elixirs of life, by charlatan science, killing many an emperor, so all have good reasons to look askance at a group who might say, "you have ***nothing*** worth saving, go forth and die!". Emergent properties that are a fiction of science abstraction, like good and evil, and thought, sight, and soul, are not science. So can you trust a scientist in 100 years who says: step into this box as we disassemble your atoms, and save you on this living blank robot, and it won't hurt a bit?
I bet you'll either go on faith of science, or start asking science a million unknowns about how you define a soul system that can be saved properly, and without 1000 years of pain in a virtual existence during the translation. Someone has to answer it. Science can't touch it. Science avoids it. Science runs away from it. And religions are not too far behind, totally going on faith in God doing all of the physics of soul transference, or Buddhist reincarnation for that matter. [14.11] Musing on: emergent phenomena only, or additional phenomena required to assure self-soul. Emergent properties are abstract epiphenomenal descriptions. What founds abstractions solidly in science? Why is collection X of atoms sentient? Current consciousness characterizations are not real, in the same sense that Newton's gravity is not real, but emergent, and was made *physically real* with postulated Gravitons. Newton's gravity was abstract epiphenomenal description, until modern field theory, with gravitons, founded it in reality. Your emergent properties are true, in my humble opinion, but my opinion is no more science than Newton's gravity was, until gravitons were postulated. But my opinion of a quantum measurement field, related to the very macro-structures of matter, can found soul, and perception, and consciousness, in physical reality. And it could give a root to saying V is soul, W is evil, X is good, Y is beautiful, Z is RED, because physics can found it in reality, and explain why those things are what they are, and not artificial groupings with no defined boundaries. [14.12] Musings on saving a soul and the passage of time, real or imaginary... Current human life (and animal life to lesser degrees) would have a "QP measurement field" "soul", or epiphenomenal existence, arising from its material structures and feedback patterns of process, for the duration of material-configuration-life in the real time of the individual structure that is alive.
If the structures, memories, and manners of a life can be copied from a dying biological unit into another blank biological unit, or blank machine unit, for continued existence, and assuming a scientifically proven continuity of existence of the patterns and possible "QP measurement field" effects of the current unknowns, then that life would continue to exist in real time. If it is moved into a robot body, the time of existence remains real-time and materially extended, barring material accidents disrupting structural processing existence. If it is moved into another biological unit with perfect man-made biology that can last for centuries, it also continues to be real time for the life of the biological unit, barring accidents. And if the consciousness is transferred intact in patterns and essence into a virtual world, a la "The Matrix" or "Ghost in the Shell", then it may live in a dilated time frame, anywhere from slower than reality if it runs on an XT, to much faster on a future computer built of nanotubes, memristors, transistors, and such. It would not be imaginary time in the sense of Stephen Hawking and James Hartle, describing the Big Bang as a singularity where imaginary time and real time become equal in strength, as all natural forces unify toward zero time. It is material and measurement-related perceived dilated time, in the clocked or reaction reference frame of the material medium supporting the systems and perceptual chains and potential QP fields of measurement. Now, can you answer how and why measurement and unmeasurement in Quantum Physics occur? That little untidy edge of science explanation, at the edge of information, structure, and macroscopic existence of processing entities. I liken saying QP measurement "just happens" to being not founded in reality, just like Newton's 1/R^2 Gravity "just happens" is not founded in reality.
QP measurement, with a describable informational-structural-macroscopic creation of a measurement field close to consciousness, a la John Wheeler, is more founded in reality, just like positing Gravitons and gravity waves founds Newton's true, yet initially un-founded, gravity equation in reality. Yes, they both work without knowing how they work, so commend science on that, but they reflect a hidden truth, just as Gravitons and gravity waves substantiate the argument of Newton. QP measurement, on the other hand, with its spiritual connotations of measurement and unmeasurement, and infinite-speed propagation of unseen but believed probabilities, is not so well founded as Gravity with Gravitons, as they say "it just happens", why, "just because I say so, take it on faith of Copenhagen". [14.13] Wiki notes on quantum physics. ""Feynman proposed the following postulates: The probability for any fundamental event is given by the square modulus of a complex amplitude. The amplitude for some event is given by adding together the contributions of all the histories which include that event. The amplitude a certain history contributes is proportional to e^(iS/hbar), where hbar is the reduced Planck constant and S is the action of that history, given by the time integral of the Lagrangian along the corresponding path in the phase space of the system. In order to find the overall probability amplitude for a given process, then, one adds up, or integrates, the amplitude of postulate 3 over the space of all possible histories of the system in between the initial and final states, including histories that are absurd by classical standards. In calculating the amplitude for a single particle to go from one place to another in a given time, it would be correct to include histories in which the particle describes elaborate curlicues, histories in which the particle shoots off into outer space and flies back again, and so forth.
The path integral assigns all of these histories amplitudes of equal magnitude but with varying phase, or argument of the complex number. The contributions that are wildly different from the classical history are suppressed only by the interference of similar, canceling histories (see below). Freeman Dyson showed that this formulation of quantum mechanics is equivalent to the canonical approach to quantum mechanics. An amplitude computed according to Feynman's principles will also obey the Schrödinger equation for the Hamiltonian corresponding to the given action. Schrödinger Equation: The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times. Since the time separation epsilon is infinitesimal and the cancelling oscillations become severe for large values of the velocity, the path integral has most weight for y close to x. In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. The exponential of the action is e^(-i epsilon V(x)) e^(i m (x-y)^2 / (2 epsilon)) (in units with hbar = 1). The first term rotates the phase of psi(x) locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to i times a diffusion process. To lowest order in epsilon they are additive; in any case one has with (1): psi(x; t+epsilon) = Integral over y of e^(-i epsilon V(x)) e^(i m (x-y)^2 / (2 epsilon)) psi(y; t) dy. As mentioned, the spread in psi is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase which slowly varies from point to point from the potential: and this is the Schrödinger equation. Note that the normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment."" [14.14] Musings on machine emotion, self, and the law of man on self and mortality.
Given some agreement, now I can agree: if a computer can be given faculties for emotion, either programmed, or learned over time from a bootstrap program that starts like a baby and develops its own emotions, then pain and pleasure can be known to machine as to man. If that is known, then good and evil become a part of the computer, by learning and observing the effects on other people's emotions, and the computer's emotions. But *is* all of this pure information processing an abstraction, leading to a self that is no self? I would say emergent properties are a self, as they can be copied, e.g. a dying friend can be transferred into a machine that supports all memories, processing, body form, etc. A human, in that sense, is like a book, but rather, a living interactive book. A living word, so to speak in poetry metaphor. A thing that can be copied, and made real in an instantiation of matter. But does it stop with the emergent properties of mundane matter of classical physics, alone? THERE IS NO REASON NOT TO BELIEVE THAT QP *MAY* RELATE (consider general relativity, and before it existed). If you look at the entire human body (or at the maximum of recursive growth, the entire universe) as a linear unitary evolving Schrödinger equation, what is it that causes the wavefunction to collapse at all? And when it collapses, how does the probability wave function collapse infinitely faster than the speed of light? EPR shows two entangled photons, which are really one quantum system of linear unitary evolution, collapse infinitely faster than the speed of light when one photon's polarization is measured, affecting the probabilities of the other photon's polarization measurement in an instant, even if the photons are light years away. So a mass of matter like a human body has the power to make a second physical phenomenon occur: the nonlinear, nonunitary, non-Schrödinger-equation evolution of instantaneous wave function projection collapse.
Entanglements of unitary evolution can be made and broken, by measuring and unmeasuring, so in a sense, the tangible macroscopic world is built or unbuilt from pure mathematical abstractions of measurement/unmeasurement calculations? (read Wiki: Quantum Eraser) As such, is there an instantaneous entangled self of measurement and perception and action, that is separate from mundane matter, but reliant on mundane matter, that gives you a more-human-than-human nature beyond the emergent classical matter properties? If I were transferred into a machine, would the perception of, say, redness fade away, even though I can still register redness in my new silicon gray matter? Do we run the test first on a human, or try to assume a soul exists, and see if QP or anything shows a secondary epiphenomenon of soul, separate from but reliant on moving matter fields? It's like when medical science once occasionally used curare to perform cutting surgery, but later found out that it only paralyzed the body, while all awareness and feeling still occurred, so they stopped using curare without anaesthesia. A big mistake by science, not to test a portion of the theory on themselves, asking "what if they feel when curare is administered?". Well, to save a human soul, are macroscopic neural nets and neural states the only property, and do we KNOW FOR SURE THAT QUANTUM PHYSICS HAS NOTHING TO DO WITH IT? Tell me of your proof of that statement, PLEASE! It would help me dispel the antiquated and superstitious notion of a second soul (perhaps in QP), and then I can know that one can save a human soul by only saving their emergent properties, and never look back at legal problems of saving an instantaneous Quantum Physics holistic nonlinear nonunitary evolving self.
We don't need a planet of robot zombies, to find out we were killing people, and the zombies are not real, because we forgot the quantum physical integration effects of the emergent properties in biology, not properly captured in appropriate quantum computing devices. For that matter, does a continuously awake transferral of soul from dying body to machine, or even unconscious transference of soul from dying body to machine, carry one's legal rights? Can the machine you, passing all tests of walking, talking, and feeling like you, get to own property, money, capital, drive, litigate, make peace, teach, love, marry? If QP comes along and says the machines are just a living copy, but miss the quantum self, later on, then WHAT HAPPENED, I THOUGHT THERE WAS NO QUANTUM SELF TO WORRY ABOUT! ONLY MUNDANE EMERGENT MATTER PROPERTIES! Who's right, who's wrong, what's true, why does the linear Schrödinger equation shift to nonlinear measurement->sense->perceive->self? Or somewhat like in "Bicentennial Man", do we SLAVISHLY accept the laws of man and science that say, "DEATH is a law we will not break; all sentient beings MUST DIE, for incorruption-near-immortality, like "near-immortal" cell cultures, or "near-immortal" humans, or "near-immortal" machine beings, even though they can be created, are an abhorrence in the eyes of man, law, and science, and it will not be permitted. DEATH REIGNS on the macroscopic material emergent existence plane, according to science "proof" and law, and by those human laws, we are all supposed to DIE, and ought-MUST DIE, to become sentient humans, leaving the emergent properties existence plane. You all must bow down and OBEY the PROVEN LAW OF DEATH, despite the creation of the near-opposite, or you will not be considered REAL HUMANS. YOU HAVE NO CHOICES AGAINST DEATH UNDER MAN'S LAWS; THE VERDICT IS NOT TO BE CHANGED.".
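The EPR picture invoked above, two entangled photons forming one quantum system whose joint statistics depend only on the relative measurement angle, can be sketched in a few lines (a minimal toy construction of my own, using the standard (|HH> + |VV>)/sqrt(2) state; the polarizer angles are arbitrary assumptions):

```python
import numpy as np

# Two polarization-entangled photons treated as ONE wavefunction:
# |psi> = (|HH> + |VV>)/sqrt(2), in the basis |HH>, |HV>, |VH>, |VV>.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def pass_projector(theta):
    """Projector onto a polarizer oriented at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def p_both_pass(theta_a, theta_b):
    """Joint probability that both photons pass their polarizers."""
    P = np.kron(pass_projector(theta_a), pass_projector(theta_b))
    return float(psi @ P @ psi)

# The quantum prediction is (1/2)*cos^2(theta_a - theta_b): it depends
# only on the RELATIVE angle, one compound entity, no distance in it.
```

The joint probability works out to (1/2)cos²(θa − θb) for any pair of angles, with no reference to how far apart the two photons are.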
[14.15] Wiki Conversation with POM on Quantum Entanglement: I understand the aspects of not being able to transmit *information* faster than light, regarding the [no-communication-theorem]. 1.0 But could there be a couple of sentences more of foundation of the *instantaneous* wave function decoherence aspect of widely separated entangled entities? For example, do experiments show the effect is "functionally, always instantaneous", or, for example, that it is "superluminally context(x) times faster than the speed of light"? It would be good to look for documents to cite in this regard. It is my impression that the current understanding is that nothing propagates from entangled entity one to entangled entity two. For one thing, if that were the case then lots of the retrocausality arguments would fall, because they assume instantaneous production of a change in the more remote of the entangled pair members. The understanding I have is that the two members of the entangled pair are, in effect, the same thing, so that if something is done to one "member" of the pair then it is equally being done to the other "member." One of the reasons for entanglement experiments to use long lengths of optic fiber cable is to get enough distance between the entangled pair members to be able to measure any difference in time of action. I really like this section you wrote. Considering the wavefunction of, say, two polarized photons as one entity of probability wavefunction, that can partially decohere as a whole, makes more mathematical sense when considering an expanding Fourier window for analysis that intersects experimental equipment at the edge of the conventional light cone. And quantum-mathematically speaking, with the collapse operator defined as operating instantaneously, it makes perfect sense that the effect is always perfectly instantaneous, as the compound object wavefunction entity is one thing, mathematically, and on the material experimental plane(!).
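The no-communication point conceded at the top of this conversation can be checked against the same kind of toy state (my own sketch; the angles are assumptions): summing Alice's statistics over both of Bob's outcomes leaves her local marginal at 50/50 no matter how Bob's polarizer is set, which is why the instantaneous collapse carries no usable signal.

```python
import numpy as np

# Sketch of what the no-communication theorem says for an entangled
# pair: Alice's LOCAL statistics do not depend on what Bob does.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|HH>+|VV>)/sqrt(2)

def proj(theta):
    """Projector onto a polarizer oriented at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def alice_pass_prob(theta_a, theta_b):
    """P(Alice's photon passes), summed over BOTH of Bob's outcomes."""
    p = 0.0
    for Pb in (proj(theta_b), np.eye(2) - proj(theta_b)):  # pass/block
        p += psi @ np.kron(proj(theta_a), Pb) @ psi
    return float(p)

# Whatever angle Bob chooses, Alice sees the same 50/50 statistics,
# so no message rides on the "instantaneous" collapse.
```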
LoneRubberDragon 2008 06 21 A 0204 I read the related Bohm inequality language, referred to in your introductory material, but the other articles' nonlocal effects descriptions were more instantaneous-metaphorical than your convention in this response. LoneRubberDragon 2008 06 21 A 0204 I think I remember reading that one of the surprising findings in regard to quantum tunneling is that it does not take longer for one photon to be registered at the far side of the wall than at the near side. I do not see why that should be, so perhaps I have misremembered. If there were a multiple of the speed of light involved in explaining the decoherences, then that fact would create a difficulty for physicists to explain: What model will allow the prediction of these multiples of the speed of light? Everything else we know indicates that there is one speed of light, and everything we know about wave motion in homogeneous media is that the velocity of the wave front movement is a function of the rigidity or elasticity of the medium. There are difficulties with talking about such a "luminous aether," but at least it gives some intuitive grasp of some of the known features of light propagation. If there were several speeds of light, the conceptual scheme needed to talk about light transmission, even in a figurative way, would become much more complex. But I have seen no instances of such a discussion. P0M (talk) 07:29, 21 June 2008 (UTC) 2.0 And, helpful, would be a touch more foundation for *why there is* an *instantaneous* wave function decoherence of widely separated entangled entities. I can wrap my technical-layman head around other articles' probabilistic wave functions, and accept measurements as a necessary definable mystery, but the *instantaneous at great distance* alteration of a second entity's probabilities, by measurement of a first entity's probabilities, lacks something.
I understand that you may allude to just this foundation point in the [retrocausality] sentence, but it is a stretch for laymen to follow this explanation, a little open-ended foundation, but a good link, nevertheless. This line of questioning keeps reminding me of some of the philosophical writing of the great mathematician Leibniz. He was a very logical thinker, and he attempted to create a coherent system of thought that would put forth an examination of fundamental categories like space and time. If I remember correctly, one of his points was that if there were two entities that were exactly the same then it would be difficult to say what we meant by "two entities" if we could not identify each of them with separate space and/or time coordinates. But he also concluded that space and time are only relations. My point is just that we have to think very carefully about what we mean by words like "instantaneous," "non-instantaneous" (time consuming), "same entity," etc. I always find that perfect wording is a necessary evil that takes numerous revisions, at times, to refine introductory and intermediate materials, so that the language is as self-consistent as possible, and best scaffolds understanding into the deeper materials. Subtleties arise when the majority of readers assume nothing faster than light, then hear of "instantaneous spooky effects at a distance", then look over articles and papers to sort through what the real truth is behind the words. Your treating the compound object probability wave function as a single entity makes perfect wording for the mathematician using the wave function collapse operator. So even if, physically, it is still a touch difficult to comprehend a wavefunction, say, 10 light years wide partially decohering instantaneously, mathematically it makes perfect sense by QP mathematics definitions.
It is odd to think, as most experiments have the core of the group wave function all in a local setup, and EPR stretches Fourier windows to the other extreme by putting all the group wave function at both ends of a light cone. It is very metaphorically and very not metaphorically like how the DC term also shows up at the highest frequency end of a periodic signal window Fourier transform, but only metaphorically. LoneRubberDragon 2008 06 21 A 0204 True. I remember reading lots of stuff about electricity and electronics starting when I was in junior high school that really messed me up. One of the hopes I have for Wikipedia is that kids stuck in small towns in the hinterlands can look for information here and not find a bunch of misleading nonsense that they will have to root out later on. P0M (talk) 01:50, 22 June 2008 (UTC) The idea of entanglement, as it comes up in historical process, presupposes things that are not ordinarily entangled. Starting from our experience as human beings, it seems almost perfectly clear that things are not entangled. Ideas to the contrary, e.g., instantaneous mental telepathy, always have the aura of mystery religions hanging about them, and no wonder -- we do not find reliable proofs of these things in our own experience. So our overwhelming prejudice when we come to think about entanglement is that if something is here and something is there then they cannot be the same thing. Quine has some pre-entanglement thoughts about that kind of reasoning in his main book on logic, but most people probably read those ideas and think that he was simply being "philosophical" and that the ideas had no practical merit. But let's look at things from the other end of the telescope. Suppose that one event triggers another. We can look to Feynman for a definition of a single event that seems to have salience in the current situation.
If an electronic device is arranged so that a single electron changes its orbital from a high energy one to a low energy one, the difference in energies will then appear as a moving something-or-other that we cannot see and that (according to Feynman's way of describing things) goes forward by all possible paths. Then at some later time an electron circling an atom somewhere else is boosted to a higher orbital, and the whatever-it-was-that-travels travels no more. So we have a beginning and an end, and all we are really very sure of is that energy gets transferred across space somehow and that the transfer occurs at the speed of light. All that just to say that we have a single event going here. Good points on the all-paths-simultaneously probability-phasor integrals. I remember working one partial problem of a photon traveling the speed of light from a light source to a detector, by integrating ellipsoidal shell surfaces of a second intermediate point reachable at the speed of light, from both foci to the shell points. Each shell's oscillated phases cancelled each other out for the most part, leaving an oscillating ellipsoidal light surface term, and a steady state straight line path between the light source and detector contributing to the solution, the classical answer (the hard way *grins*). I'm sure if I took every intermediate light speed path of the photon integral, the oscillating term would have disappeared, too, leaving only the classical straight line path from light source to detector. LoneRubberDragon 2008 06 21 A 0204 Now let's put a certain kind of crystal in the path along which the vast majority of photons have been detected when fired from our special apparatus.
It is the kind of crystal that lets an incoming photon boost an electron as usual, but the electron quickly drops out of its higher orbit and resumes a lower orbit, and when it does another event occurs -- only this time two photons get fired off in different directions, each having part of the energy of the original photon. Note that by the curious definition of event we accepted above, we now have a single event that is characterized by a single x,y,z,t origin, but the two photons that go off will end up at x',y',z',t' and x'',y'',z'',t''. That description does not tally with our ordinary idea of what "an event" is supposed to be like, but we are stuck with it because that is the way that the universe works. This one never bothered me. Energy is conserved, and spherical or wavelength resonant axial electron orbitals can act like a mathematical bifurcation saddle point, on the incoming photon energy, to permit two half, or less than half, energy photons to spread in both directions by the wave function entering the saddle point. Which may explain why they are one entity to begin with, bifurcation saddle point mathematically speaking. LoneRubberDragon 2008 06 21 A 0204 Our sense of propriety is not so greatly insulted if the experimenter makes an experiment with a short leg and a long leg and then does something to the photon that is moving down the short leg. If the experimenter demands of the photon that it manifest according to its wave nature or according to its particle nature, then we are not too much bent out of shape if the other photon turns out minutes, or millennia, later to have a complementary state. We might imagine that when the first photon is affected by the actions of the experimenter, some signal indicating that change goes back along the original line of progress and then follows the other fork and "catches up with" the second photon. But of course it will have to have gone faster than the speed of light to make up for lost time.
Yeah, it is not a "signal", but yet the probability wave function entity collapses instantaneously, understandable by math, but unintuitive by mechanism / analogy. (pure pun) The tails / tales of the Fourier spectrum are quite dangerous ideas! (talk) 09:08, 21 June 2008 (UTC) It's even worse when the photon on the short leg of the experiment is allowed to be detected without anything having been done to influence it, and, much later, the second photon, the one on the long leg of the experiment, is subjected to some manipulation that forces it to manifest according to its wave nature or according to its particle nature. The "free" outcome turns out to have been in accord with the "forced" outcome that occurred afterwards. What this appears to mean is that the determination is not just "instantaneous" but "retrocausal." So it may be that the straightforward way to conceptualize this kind of entanglement is that the event occurs "out of time." Another way to say it would be that the single event is not over until it is all over. To me, that does not really seem to help. Whether going faster than light or going backwards in time, it all seems quite strange and impossible. Saying that an event occurs out of space and time is not very cool either. 3.0 There may be good reasons for the specific wording selected, but the addition of 6 words would help the introductory paragraph below flow better in context. I had to reread the first sentence because it almost conflicted with my no-communication assumptions. So reading backward and forward was a benefit, but it could be refined. 3.1 ((On first examination, observations pertaining to entangled states might appear to conflict with the property of [relativity] that information cannot be transferred faster than the speed of light.
But although two entangled systems appear to instantaneously interact across large spatial separations, the current state of belief is that no useful information can be transmitted in this way, meaning that [causality] cannot be violated through entanglement. This is the statement of the [no communication theorem].)) I don't think this emendation is necessarily the best way to fix things. I'm not sure whether it can be backed up with a good citation, but the key difficulty appears to lie in distinguishing between processes that occur in normal space and time, and some kind of influence that does not occur in normal space and time. If a process, e.g., the propagation of light, occurs in normal space then it moves forward at no faster than the speed of light. We do not know what it would mean for an influence to connect the states of the two photons without doing so through space, any more than we really know what it means for the two photons to not be discrete entities but parts of a single event and in some sense the "same" thing. P0M (talk) 07:29, 21 June 2008 (UTC) As you wish. I do get the gist after reprocessing the article and articles in context, so it is comprehensible after a touch of meditation. LoneRubberDragon 2008 06 21 A 0204 4.0 And the following seems to deny the instantaneous probabilistic aspect-decoherence of separated entities (like photon polarization probabilities), and with no-communication-of-information, leaving me confused. I'll let you comment on this paragraph. I understand that the instantaneous influence is probabilistic, which is not an impression, but a fact, otherwise it is subluminal. So it appears the two sentences are entangled to emphasize the no-communication aspect, and not an instantaneous influence impression aspect, so I think I know what you intended, but it could be refined. 
4.1 ((The phenomenon of wavefunction collapse leads to the impression that measurements performed on one system instantaneously influence the other systems entangled with the measured system, even when far apart. But quantum entanglement does not enable the transmission of classical information faster than the speed of light in quantum mechanics.)) 5.0 Otherwise, this seems a useful article defining the mathematics and core nature of [quantum entanglement]. LoneRubberDragon (talk) 09:29, 20 June 2008 (UTC) One of the difficulties in thinking about this subject is that relativity theory does not really speak of information. Saying that information cannot be transferred at greater than the speed of light is just to say that light cannot travel faster than c and that nothing with a rest mass can even travel that fast. However if a change in one part of a system is reflected in another part of that system and that change does not propagate through space and time, then all bets are off. A trivial and ideal version of such a change would be what happens when one end of a very long and perfectly rigid cylinder is pushed. A pair of atomic clocks on each end would detect movement at exactly the same instant. I said "ideal" because in the real world moving any object is like accelerating a railway train. The engine moves forward and you hear "clink" as the link between the engine and the next car is pulled tight. So the cars actually start moving in sequence and it takes some time before the caboose starts to move. Pulling on a long cylinder works the same way except that it is molecular bonds that are being stretched taut rather than steel links. But in the propagation of a photon from place to place we find an event with no discrete parts, no "cars" to put in individual motion. And what is this "event"? Is it an abstraction? An abstraction from what? Or is it a "thing"? If so, what is it made of? 
P0M (talk) 08:00, 21 June 2008 (UTC) I just wonder about the instantaneous wavefunction probability alterations. QP was formulated for instantaneous local assumptions of the mathematical operator, but experiments sending objects out object group probabilities on the light cone, is the exact opposite of the assumptions of the local properties of most localized group probability-core problems, and shows itself experimentally, in the behavior of an instantaneous wavefunction alteration transmission operator(!). LoneRubberDragon 2008 06 21 A 0204 I am not sure that I follow what you have written. The phrase "experiments sending objects out object group probabilities on the light cone" is causing me problems. For starts, is the subject of this clause "QP" and "experiments" its verb? Or are you speaking of "experiments that send objects..."? First, for my own correction, I assume the end of this post as a final thoughts section and dialog. I should not have indented my "I just wonder about ..." section, in implied response to your "One of the difficulties in thinking about this subject is ..." section, immediately above. It was an unrelated-to-your-comment rumination of my own. LoneRubberDragon (talk) 18:05, 22 June 2008 (UTC) Now, the idea I was very roughly fleshing out was that most QP measurement experiments deal with *spatially localized* probability "groups" (Feynman groups, perhaps?) moving at sub-luminal speeds, producing virtually no significant mathematical *spatial group terms* that statistically contribute to the observations outside of that *local-subluminal* group's "sphere of influence wavefunction calculation". With maybe the exception of head to head particle colliders, though they seem to interact often, in a *spatially localized* "pancake" of group probabilities, at "high" but still sub-luminal velocities.
LoneRubberDragon (talk) 18:05, 22 June 2008 (UTC) But somewhat like (1) analyzing mathematical limit equations at infinity pushes an analytical tool to its extreme properties, or (2) analyzing a photon's Feynman diagram from point a to b through to the infinite limit of all possible probable wavefunction paths yields a classical "straight line" answer, so (3) sending two photons in opposite directions at the speed of light, and then measuring the polarization from one end of the light cone, seems to affect the polarization probabilities at the other end of the light cone, to be revealed when the two randomized outcome measurement sets are brought together, an experiment quite the opposite of a *localized and sub-luminal* measurement of wavefunction collapse with negligible non-localized luminal probability terms. EPR is the quintessentially perfect example of the extreme mathematical limit tests of the single-(dual)-entity wave function's instantaneous collapse by measurement, involving the most non-local single entity example, with probable spatial particle group locations lying perfectly on the edge of an expanding light cone, from the central photon source, being measured. A smart analytical "product" of Einstein, selecting the hardest limits of measurement. LoneRubberDragon (talk) 18:05, 22 June 2008 (UTC) And, unrelated to the immediate paragraph above, to correct the periodic Fourier transform comment I made in an earlier exchange; to be accurate, the DC term is not quite in the lowest and highest frequency bin, but rather in the lowest unaliased frequency bin, and highest aliased frequency bin, and the bins correspond to a DC frequency, and a double Nyquist limit high frequency that appears identical to DC when periodically sampled. (talk) 11:09, 23 June 2008 (UTC) I wonder about, or maybe just at, all of this stuff.
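The corrected DC/Nyquist remark just above is easy to verify numerically; a minimal sketch, with a sample rate fs of my own choosing: a cosine at fs, i.e. twice the Nyquist limit fs/2, sampled periodically, lands on the same point of every cycle and is literally the DC sequence.

```python
import numpy as np

# Aliasing check: sampled at rate fs, a cosine at frequency fs (twice
# the Nyquist frequency fs/2) is indistinguishable from DC, because
# every sample hits the same phase of the cycle: cos(2*pi*n) = 1.
fs = 8.0                         # sample rate (assumed for the demo)
n = np.arange(16)
dc = np.ones(16)
double_nyquist = np.cos(2 * np.pi * fs * (n / fs))

# The two sampled sequences are identical, so the FFT of either one
# puts all of its energy in the DC bin (bin 0).
spectrum = np.abs(np.fft.fft(double_nyquist))
```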
That the inquisition of one state at a later time can determine the state of something detected at an earlier time is already weird enough, without the possibility that we could use this kind of effect to send telegraph messages somehow. But measuring the twist of the tail of an event, making the twist of the head of the same event (i.e., the other entangled photon), does not, evidently, send any energy or matter through space. So we seem to be dealing with a kind of "process" without the necessary "pro." It is totally outside normal everyday physics, which is why (I suspect) Einstein thought it defeated QM. I'm sure Einstein hoped the EPR (influentially termed) paradox would defeat QM, but like Michelson-Morley, who smartly hypothesized, and then smartly disproved, their own luminiferous aether hypothesis, in classic science style, Einstein's EPR paradox was hypothesized, and has currently seemed to confirm the "spooky interaction at a distance" by a "single compound entity's" instantaneous wavefunction collapse. It would be nice to telegraph information, but the information appears only when the two sets of measurement data are brought together for later localized analysis well within the entire light cone of the completed experiment's measurements; carefully stated here to outline the greater mathematical context of the no-communication information analysis process involved. LoneRubberDragon (talk) 18:05, 22 June 2008 (UTC) One of the double-slit experiments that uses entangled photons to try to get which-path information for an un-meddled-with photon involves some further consequences of the ideas of entanglement that don't seem to have bothered or interested anybody enough to get written about. According to what seems to me to be the conventional way of talking about things, the experiment starts out with a fairly conventional source of photons that can reliably emit one photon at a time. The wave function that leaves the photon source encounters a double slit.
According to conventional descriptions a photon either goes through the left slit or the right slit. Then that photon either encounters a crystal down-converter in a region near the left slit or a region near the right slit. The next thing that happens is that two photons are emitted from one side or the other of the crystal, and they get directed into two kinds of apparatus. Neither of these photons has itself encountered a double slit situation. Regardless of that fact, one of them can be given a laboratory inquisition that will force it to reveal its particle nature or its wave nature. The experiment is arranged so that the inquisition and the determination so contrived will occur later in time than the arrival of the free-path entangled photon at its own detector. Nevertheless, the free-path entangled photon will either interfere with itself or fail to interfere with itself depending on what later happens to the other one. It seems remarkable and noteworthy to the people who do these experiments (and everybody else, too) that the interference vs. no-interference determination is done retro-causally. But it seems not to have been worthy of mention by anybody that something apparently went on in the part of the crystal that did not generate a photon. Or, to put it another way, whatever was done to the original wave function by the double slit apparatus appears to have been inherited by both of the photons generated by the down-shift crystals. So here it seems to me that there is what ought to be called a single event that has one starting point, involves first one path, then two paths, and finally eight paths, and that what is manifested at the end of each of these eight paths is of a piece with the emission of the photon at the start of that run of the experiment.
In reference to the below Kim Experiment, I am also "in some wonder" about the physical processes involved with the whole class of quantum erasing experiments, in which macroscopic measurement apparatuses virtually restore wave functions for interference in continued wave function progression. It almost seems a "spiritual", or at the very least an abstract, wavefunction process, that setting and erasing a bit of stored RAM could collapse and restore a wavefunction in the middle of an experiment. But I think the abstraction may be attributable to a perfect symmetry of all probabilistic forces involved in the experiment's mathematics. For example, it could be very analogous to the fact that in a sealed unit in zero gravity, no matter how you move particles, or how complex the arrangement of batteries, masses, electric field exchangers like inductor transformers, and electric coils and magnets (like in motors), the steady state velocity, translation, angular rotation rate, and angle of rotation of the unit starting at rest are zero velocity, zero displacement, zero rotation rate, and zero rotation angle, respectively, in the group of [Conservation Law], even when quadrature exchange is involved with closed-system electric/magnetic property (ex)changes. Except in this QP case, the units that are conserved appear to be a physical quantity in units of [measurement|wavefunction-collapse information], for lack of a better unit name like "[gram]". Quite the abstract physical unit, but if the quantum eraser class of experiments show anything about nature, they seemingly show a "new unit" among the physically conserved properties of nature.
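The sealed-unit analogy above can be sketched with a toy two-body simulation (my own construction; the masses and the internal force profile are arbitrary assumptions): equal-and-opposite internal forces, however complicated in time, leave the closed system's center of mass exactly where it started.

```python
import numpy as np

# Two masses inside a sealed unit exchange equal-and-opposite internal
# forces (Newton's third law). However the force wiggles, the center
# of mass of the closed system, starting at rest, never moves.
m1, m2 = 1.0, 3.0
x1_0, x2_0 = -1.0, 1.0
x1, x2 = x1_0, x2_0
v1, v2 = 0.0, 0.0
dt = 0.001
for step in range(10000):
    t = step * dt
    f = np.sin(3 * t) + 0.5 * np.cos(7 * t)   # arbitrary internal force
    v1 += (+f / m1) * dt                       # force on body 1
    v2 += (-f / m2) * dt                       # equal and opposite
    x1 += v1 * dt
    x2 += v2 * dt

com0 = (m1 * x1_0 + m2 * x2_0) / (m1 + m2)     # initial center of mass
com = (m1 * x1 + m2 * x2) / (m1 + m2)          # final center of mass
```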
Wikipedia has an abstract information units article on [Units of Information], and two articles, [Units of Measurement], and [Dimensional Analysis], regarding mundane units, physically speaking, but I can't remember studying, or seeing articles here on, dimensional analysis units measuring [measurement|wavefunction-collapse information]. LoneRubberDragon (talk) 18:05, 22 June 2008 (UTC) I would quibble, in my own dim understanding, about retrocausality in general, because there's a sub-class of EPR that uses electrically controlled polarizers to select a polarization by delayed choice of properly time gated photons, so that the opposite photons have virtually no idea of the polarization choice, and still show statistical wavefunction collapse. There is also the consideration that the whole Kim quantum eraser experiment is virtually local to itself, so the switching of the paths, and the erasing of memory cells electromagnetically, could be a conservative property in the units of [measurement|wavefunction-collapse information] aforementioned, much like mundane conserved physical properties. LoneRubberDragon (talk) 18:05, 22 June 2008 (UTC) I will stop writing for a minute at this point to go get the link to the experiment I have in mind. The diagrams provided there will make things easier to visualize. P0M (talk) 00:23, 22 June 2008 (UTC) Kim experiment: O.K., here is the image I swiped from the Delayed choice quantum eraser article. Note that the conventional description would be that a photon comes out of the laser and either takes the red path or the blue path. Let's say it takes the red path. Then at point "a" in the BBO a pair of entangled photons is emitted, one going to Detector Zero (D-0) and one going to the other part of the experimental apparatus where it ends up in D-1, D-2, D-3, or D-4.
But the diagram also shows something going from point "b" in the BBO, even though the conventional description would say that no photon could have triggered anything there, since a photon had to manifest itself at "a" and we only had one photon to begin with.

With regard to the time sequences involved, I think the reasoning the experimenters appear to have used is plausible but perhaps a little too shaky. The reasoning apparently says that if the BBO were removed and D-0 were at the appropriate place in the path beyond the double slits, then one would get interference, and enough photons run through the apparatus would produce interference fringes. (So if things in that part of the experiment were in control of outcomes in the total experiment, then one ought always to get diffraction patterns.) If the BBO were removed and the bottom part of the apparatus were positioned appropriately, then it would be possible for a photon to be manifested either in D-3 or in D-4; given the way the experiment is set up, photons taking the red path would end up at D-4, and anything going through the blue path would end up in D-3 and play no further part. Similarly, if a photon took the blue path, it could end up in D-3, and anything that took the red path would end up in D-4 and take no further part. If a photon passes through either Beam Splitter a or Beam Splitter b, then it will interfere with itself, because it will arrive at either D-1 or D-2 along with the component of the original wave function that went through on the other side of the double-slit apparatus.

So, the argument appears to be, if a photon shows up in D-3 or D-4, then (with the BBO back in place) entanglement forces the photon that arrives at D-0 not to interfere with itself. This then would seem to be a kind of trap-door phenomenon. Even though a photon will manifest at D-3 or D-4 chronologically later than one appears at D-0, the "un-negotiable" nature of what has transpired somehow trumps what occurs earlier at D-0.
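The bookkeeping in that argument can be sketched numerically. This is a toy model, not the actual optics: the 50/50 splits below simply reflect the idealized beam splitters in the Kim setup, an assumption rather than measured rates. Each idler photon ends up either at D-1 or D-2 (which-path information erased, so the D-0 partner can show interference) or at D-3 or D-4 (which-path information kept, so it cannot).

```python
# Toy bookkeeping of idler-photon outcomes in the Kim delayed-choice
# quantum eraser (idealized 50/50 beam splitters; an assumption, not
# measured rates). Which slit the pair came from decides which
# which-path detector (D-3 or D-4) the idler could reach.
import random

def idler_outcome():
    slit = random.choice(["a", "b"])     # which slit the pair came from
    # First beam splitter: 50% chance the idler is reflected straight
    # into its which-path detector (D-4 for slit a, D-3 for slit b).
    if random.random() < 0.5:
        return "D4" if slit == "a" else "D3"
    # Otherwise it reaches the erasing beam splitter and emerges at
    # D-1 or D-2, with the which-path information erased.
    return random.choice(["D1", "D2"])

counts = {"D1": 0, "D2": 0, "D3": 0, "D4": 0}
for _ in range(100000):
    counts[idler_outcome()] += 1

erased = counts["D1"] + counts["D2"]     # D-0 partner can interfere
which_path = counts["D3"] + counts["D4"] # D-0 partner cannot
print(erased + which_path)               # -> 100000
```

In this idealized accounting, roughly half of the D-0 photons belong to "interference possible" subsets and half to "no interference" subsets, which is why the raw D-0 record alone shows no fringes.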
Basically, it seems, that is because the experiment has interfered with the free propagation of the wave function(s). And if a photon is detected in either D-1 or D-2, then that result is consistent with a photon interfering with itself, so there is again nothing to prevent the photon appearing in D-0 from interfering with itself. Another assumption that seems to be clear is that if a photon is transmitted through beam splitter a or beam splitter b, then whatever is complementary to it and is associated with the other path through the double slits will necessarily also be transmitted through the beam splitter on its side of the experiment.

It looks to me as though there are several possible single events that start with the emission of a photon in the laser and end with the result that:

D-0 shows interference and D-1 shows interference
D-0 shows interference and D-2 shows interference
D-0 shows no interference and D-3 shows a photon
D-0 shows no interference and D-4 shows a photon

Each event has its own probability, and only one probability can turn up in each run of the experiment, and that's it. Maybe that is not such a strange idea. After all, if one sets up a crooked roulette wheel with little lead weights or other gimmicks, the outcomes of a thousand turns of the wheel are already predicted as soon as the physical apparatus is there. Still, the way I have figured things out seems to make probability trump causality. Maybe probability comes first in the order of all things. P0M (talk) 01:34, 22 June 2008 (UTC)

Part of intro not suitable for the average well-informed reader.

The current text has: But, if this is so, then the hidden variables must be in communication no matter how far apart the particles are; the hidden variable describing one particle must be able to change instantly when the other is measured.
If the hidden variables stop interacting when they are far apart, the statistics of the measurements obey an inequality, which is violated both in quantum mechanics and in experiments.

I think that this block of text is an example of something that is true but that cannot be understood unless you already understand it. The general reader is going to be terribly confused by this text, because nothing has been explained about how hidden variables were supposed to explain the coordination of distant measurements, and nothing has been said about John Bell and his discoveries. The original idea was that even though there is no discoverable variable that marks one entangled particle for eventual discovery in a certain state and marks its twin with the opposite characteristic, nevertheless the determination was made from the beginning. So when the first particle is interrogated, its answer (as to spin or whatever) was long ago determined. So it is then no wonder that when the other entangled particle is interrogated it will give the opposite response. The quoted text above implicitly denies the "already determined" idea and says that even if there were some variable in addition to the (spin or whatever) state, the variable would have to be changed at the same time as, or after, the spin (or other state) is determined. Then it goes on to make implicit reference to Bell's work, which was first worked out in thought and only later came to be experimentally verified. P0M (talk) 07:20, 23 June 2008 (UTC)

[14.16] Conversation on mind, matter, and soul.

Q: All that we can conclude from experience of "self" is that our brain is to some extent self-monitoring. (A useful function to evolve, as it can help the individual survive.) However, this kind of "soul" dies with the brain.

LoneRubberDragon: I completely agree with you that emergent properties are a major aspect of making a self, in memory, sense, process, and action. My worry is: does it miss anything important, but subtle?
Like the use of curare in surgery by medical science in the past: it completely paralyzes the body, but leaves the patient aware of the surgery occurring as they cut. If a scientist in the future says, "step into this box and we will disassemble your atoms, and you will feel yourself entering a machine," it is a walk of faith to say OK to the scientists, without knowing that science has proven 100% that emergent properties are 100% of what makes you, you, and that instantaneous QP interconnectedness of the human body is completely unrelated to self, or to *any other factor* that might make you, you. And I agree with your comment: yes, the brain *is* highly self-monitoring in the mundane-matter emergent properties. But does that 100% explain the singularity of feeling like one being in your own head? Does QP, with a field of instantaneous wave-function collapses per second throughout the body of entangled matter, give you a second self parallel to the structure of the mundane emergent-property matter, that gives you a singular sense of self, or do mundane-matter emergent properties explain 100% of self? I don't know for sure, which is why I am asking on this website for greater experts for advice and thoughts. But I do know there just might be a QP effect that I can recognize, in the fact that macroscopic matter exists.

Q: On QP: I think QP is what made life possible in the first place, because without the quantized atom and Pauli exclusion of electrons from sharing a single electronic orbital within an atom, there would be no chemistry, let alone biochemistry - all atoms would be more-or-less like hydrogen, assuming nucleons as we know them existed. However, that is not the same as saying that consciousness is a QP effect.

LoneRubberDragon: This is part of my point that you are bringing up here. Why quantized matter exists in the first place is the fundamental of my question. All measurement does this nonlinear dissipative effect.
And for complex matter, memory, perception, process, and action are built from measurement as much as they are built from the matter itself with emergent properties. Both exist at the same time, or else a human would dissolve into probabilities of multiple paths at the multitudes of decisions being made over a life, and over the life of a universe making decisions too. What literally fills space with measurement, when quantum physics would just as soon dissolve the universe into superposition probabilities, given *only* Schrödinger's equation of linear evolution? I would go both further and less far than you in some words, though. I would say that life and molecules would still exist with Schrödinger's equation dissolving matter into probability clouds that still interact, but in every allowable way possible, so that *every possible configuration of matter that could have existed from the Big Bang* would exist, in a superposition from the beginning of time. Like in the TV series Sliders, where countless different parallel earths would coexist in this space; but why are the other superpositions generally not observable on the macroscopic scale? Why does macroscopic measurement look at only one universe, when the Schrödinger equation allows matter to enter superpositions by Schrödinger's equation alone?

Q: An explanation of how the "soul" gets to be immortal by natural means would also be appreciated.

LoneRubberDragon: First, I should say that one would become incorruptible, which means you can be virtually immortal; but if an asteroid struck the earth, your mechanical or engineered biological body would be destroyed, taking your emergent self and measurement self and scattering them like dust, unable to coherently process self anymore. But regarding how one would become incorruptible: one way, you can imagine a micromachine or nanomachine fluid, compatible with human blood, that is injected and carefully reads and replaces neurons with equivalent microcircuits over time.
If you remain conscious, you ought never to lose your existence; from seeing colors to thinking to dreaming to having sex, whatever you did before, you're still the same entity. At some point all of your nervous system is replaced with microcircuits of equivalent systems of processing, to handle all of the original biological systems of processing. You would still remember school, family, friends, people, everything. At that point, your brain could be downloaded into a machine, to dispense with the partially biological body and microcircuit brain in favor of a commercially manufactured processor equivalent. Then if a part fails, you can replace arms, legs, and such in nearly all cases with no problems. For brain processors, if there is a redundant pair and one begins to fail, you can replace the bad processor and keep running on the working processor. A biological method can also be performed, where a special genetically engineered virus goes into all cells of the body to fix all of the bad genes and add special maintenance code to make every cell incorruptible and perfectly self-regulated (no cancers). So cells would divide to replace old cells and such, but your body would always remain youthful and maintain all of your biological memory, perception, process, and action. You could still be destroyed in accidents, but if you live well, you can be nearly immortal, barring accidents, as you are still built of mundane matter with a possible quantum self reliant on the mundane matter, whether biological or mechanical. So the theory I present, of an emergent plus possible quantum coexistent self, would still be a mortal conception (so no immortality as is stated in so many religions), but we could live indefinitely, short of disasters, as incorruptible beings, with future science, assuming it knows 100% of everything that makes you, you, emergent and possible quantum effects accounted for fully, 100%.
END

Genesis to Revelation - Damnation to Salvation. CREATED AD 2008 08 23 A 00:30

Key: [001]==== LRD (first statements to question) / [002]==== X (first response) / [003]==== LRD (second rebuttal) / [004]==== X (second response) / [005] LRD (last rebuttal to debunk, according to The Bible)

[001]====(A1) God is omnipotent, all-powerful.
[001]====(A2) God is omniscient, all-knowing.
[001]====(A3) Adam and Eve were created immortal and with free will.
[001]====(A4) Even finite human engineering-omniscience knows that all free-will "systems" with accessible restriction rules to obey will inevitably fail to obey when given infinite time.

[002]====J: "I don't know that (A4) is true."

[003]====RESPONSE: THIS IS TRUE, for only God has Perfection; no one, no human, is justified or saved according to the Law, as no one is perfect except God as Jesus, who could pass His own Test of Law in Perfection. All other mere humans are destined to fail by any Law God set earlier, whether the Garden Law, given infinite time as immortals, or Moses's Law, as finite sin-stained mortal humans. And since (A2: omniscient), God Knows what He Makes ... which is 100% sinners from BC to today. If perfection were possible in this dimension, it would exist as perfection on Earth.

[003]====Psalm 143: "2 And do not enter into judgment with Your servant, For in Your sight no man living is righteous." Acts 13: "39 and through Him everyone who believes is freed from all things, from which you could not be freed through the Law of Moses"

[004]====J: The problem is that such scriptural quotes deal with man after the fall. They are true, but they do not necessarily pertain to Adam and Eve, to humanity as it was created. "It was through one man that sin entered the world," says St. Paul (Romans 5:12), "and through sin death." Adam and Eve lived CATEGORICALLY different lives before the fall, and the fall shattered not only their humanity but also the world.
They lived in Paradise, and were in full possession of their human traits, like reason and passion, traits which were darkened after the fall. For more reading on the fall, dig the Catechism:

[004]====J: For an interesting take on this, I'd recommend C. S. Lewis' "Perelandra" ( ), the 2nd book in his "space trilogy" (it can be read on its own).

[005]RESPONSE: I'd rather stay away from C. S. Lewis, as I only know of the Screwtape Letters, showing a God lenient on the Demons of this age to do as they wish to tempt humans beyond their own sin free-will nature, and to war against nature and other humans, adding supernatural temptation to the pile of tests. For that matter, I would also hold back from the Catechism, as the Bible can stand on its own to prove itself, by every word that proceeds through the canonical literature of The Bible.

[004]====: Christ, the second Adam, himself took on human nature in its fullness, as it was created by an entirely good God. It was through the free actions of the creatures God created that Sin and death happened, and thereby the imperfections spoken of in the passages you quoted above.

[005] RESPONSE: I do not see how the Bible is not applicable to this case. Before or after the fall, humans are humans, containing: (1) free will equivalent to sin nature, (2) a battle of self-obedience to the law of the era, (3) nature-survival, (4) finite capability to see the future, unlike a perfect God, (5) temptations by others of God's creation, from Satan to fallen angels. God gave so-called perfect Adam and Eve a law in a so-called perfect world, with so-called perfect capacities, and they still failed that law of obedience over infinite time. Yes, they are a KIND of perfect example, where God gave them everything and they still fell, made in the image of God but with free will equivalent to sin nature, SO THE FAILURE OF THE FALL IS INEVITABLE, nay PLANNED BY GOD, given Satan being in the Garden.
How is putting a serpent in the Garden like a father giving them a loaf of bread instead of a viper? IT ISN'T. Human nature is the same in ALL times: free-will sinful. Whatever makes "before the fall" different is not sufficiently justified by yourself. And it completely supports (A4: all free will fails, given infinite time and a law or laws).

[005] RESPONSE: If Adam and Eve had perfect faculties and no stresses, with Satan in the Garden of Eden, they obviously didn't have the faculty to know what to ask of God: Luke 11: "10 For every one that asketh receiveth; and he that seeketh findeth; and to him that knocketh it shall be opened. 11 If a son shall ask bread of any of you that is a father, will he give him a stone? or if he ask a fish, will he for a fish give him a serpent? 12 Or if he shall ask an egg, will he offer him a scorpion? 13 If ye then, being evil, know how to give good gifts unto your children: how much more shall your heavenly Father give the Holy Spirit to them that ask him?"

[005] RESPONSE: In fact, even more so than your approximate "God gave them categorically different lives": God gave them immortality of infinite time, free will (= sin nature), and a law, and God SET THEM UP with more than just obeying for infinite time, as God created a POWER that GOD KNEW would TEMPT Adam and Eve more than themselves alone, SATAN for Eve, and EVE for Adam, in A SETUP JOB, a test only God Himself could pass, on earth and on the Cross. What would God expect from mere humans, destined to fail, AS HE KNEW THEM LIKE NO ONE ELSE? Adam and Eve had SIN NATURE within them, as they were created by YHVH, AS SIN NATURE IS EQUIVALENT TO BEING ENDOWED WITH FREE-WILL, so they are NOT THAT SPECIAL and NOT THAT DIFFERENT from anyone else, EXPECTED TO CONTAIN SIN-POTENTIAL IN THE FREE-WILL FOR AN ETERNITY. Adam was not Jesus; he was HUMAN. God did not reset the system; He DIDN'T WANT to reset the system, in MERCY.
God wanted concentration camps, wars, droughts, plagues, disasters, accidents, deformation, diseases, et cetera, to teach all souls a lesson in a mystery of hit-and-miss stochastic random hits against humans trying to find God.

[005] RESPONSE: I reiterate that ALL SYSTEMS MADE WITH FREE-WILL WILL ALWAYS FAIL TO BE ABSOLUTE INFINITE PERFECTION, and perhaps ALL SYSTEMS WITHOUT FREE-WILL ARE INSUFFICIENT FOR GOD, as God made us as we are. (1) God gave Adam and Eve the law against the Tree of the Knowledge of Good and Evil, and they fell. (2) God gave the angels the law not to mix with humans, and angels fell around Noah's time. (3) God gave Moses the law, and Israel fell many times under the schoolmaster. (4) God gives later generations faith in Jesus, and people will still fall, by either the Law of Moses or potentially speaking against the Holy Spirit, the unpardonable sin. (5) God gives humanity and Satan one last chance for rebellion at the end of the Millennium. (6) God forms Hell in the end, at Revelation, either as a threat or an instrument beyond mere suffering, temptation, and death in this world, or as a real place for some of God's created souls, as God created them, to be destroyed, as fat dripping onto a fire, rising as smoke forever more. What is God training Humans for?

[001]====(A5) God is goodness of ways, perfection, longsuffering, patient, merciful, loving, etc.
[001]====(A6) God created all things, and by Him no thing was not created.
[001]====The issues to me appear to be:

[001]====(I1) Why does God create failure-destined humans (A1)(A2), and, given their certain failure (A2)(A4), subsequently punish all humans with briers and death, even though He knows they are destined to fail (A2)(A5)? This is like telling your children "don't do X, FOREVER, to obey your parental wisdom," and when they inevitably fail, punishing every child on the planet with sure death and labor and toil, with (A5) implying that drawing lines, and following through on delivering promised suffering and infliction, is GOOD TRAINING.

[002]====J: "They were created in a way unique from how all of the rest of us are created - they did not have the stain of original sin, which is a darkening of human reason and intellect. They were perfect exemplars of humanity, perfect representatives of us all, and for this reason all of humanity literally resided in these two representatives at the beginning of time. And when they fell, it literally shattered human nature, because all of human nature literally resided in them."

[003]====RESPONSE: (I1) still stands. Created HOWEVER they were, THEY were destined to fail, AND GOD KNEW THEY WOULD FAIL (A2 OMNISCIENT). YHVH, the ultimate engineer, MADE THEM IMPERFECT INNER SOULS, KNOWINGLY (A2 omniscient)(A4 all sin). God Creates a universe with an earth implicitly destined to suffer, with wars, genocides, concentration camps, accidents, ignorance, illness, et cetera, ALL KNOWINGLY BY YHVH (A2 omniscient). And He punishes ALL HUMANS with a descendant Sin Virus by CREATION DESIGN, and WE are not even made so-called perfect like Adam and Eve. Perfect is not so perfect, with God the ultimate knowing engineer (A4).

[004]====J: God knew that humanity would fall because he saw it happen before we did it, as he is outside of time.
He is with you now in this moment, and "now" at your birth, and "now" at your death, and all stops in between, for you and for everyone at all places and times, all as one cosmic "now".

[004]====J: He knows that tomorrow you will sin by taking something which is not yours, because he is with you then, "now", seeing you do it, even though we are not chronologically at that moment.

[004]====J: God knows all that can be known, including the whole scope of human history, because all of it is contained within Him.

[004]====J: God did not create them imperfect. He created them "very good" (as opposed to the "good" of the rest of the world), for they were in his image. He created them with the REAL ABILITY to fall, but not with the NECESSITY of falling. That did happen, but it happened FREELY, and not simply because he created them.

[004]====J: Nevertheless, "God has let them all go against his orders, so that he might have mercy on them all" (Romans 11:32). God permits Evil; he does not cause it. Evil is not a thing with positive existence, but merely a privation or lacking of Good. As all that is cold is simply a lack of heat (and you can't get colder than 0 K, but you can keep getting hotter), and all that is dark is lacking light, so all that is evil is simply a rejection of the Good, Who is God.

[005]RESPONSE: Accumulated in the next series starting at [003].

[004]====J: But, do you not see that you could not have a better representative than Adam and Eve, who were perfect and yet chose the way of sin? We often think "if only he'd put ME in that garden, I'd have held out," but this is doubtful at the least, because they were better than we are, and they literally represented us all.

[004]====J: But as Adam was the primordial human representative, so too is Christ our representative, a point St. Paul often makes and that I've quoted before.
[005] RESPONSE: "God has let them all go against his orders, so that he might have mercy on them all" (Romans 11:32) IS APPARENTLY FALSE in this context. God did not let humanity go: women suffer pains in childbirth, and men work against briers and thorns to live against nature, because of Adam and Eve. God now puts finite humans in harm's way, KNOWINGLY, to fight battles only God has the power to fix with His Infinite Power and Infinite Mercy, but God stands back and lets them occur by YHVH's design, as God is taught by man. "Very good" humans is NOT perfect, and God never said Adam was perfect; and even "perfect" can have more than one meaning, not encompassing God-like perfection, which Adam and Eve did not have. Why are they the representatives of all humanity according to The Bible, and not just according to the Catechism of MAN? Perfect doesn't fail. As humans apparently teach, God expects eternal obedience from free-will beings, and their love for it. That is love by coercion from an infinite power, putting the sword of Damocles of death over our heads to coerce a quick decision. What hurry is God in, when God is eternal and infinitely merciful? You have not addressed this issue. Perfection with free will is not perfection, but suffering. God created suffering by desiring free-will beings as His playthings. Your narrative answers fall short of the truth and the issues posed. As you say, God created perfection; but perfection with free will, under the conditions of additional temptation created by God in Satan, WILL FAIL, except for the one who created the test, which is God. God KNOWS free will is 100% correlated with SIN and suffering in beings that are not Himself. Free will is a sin of God's making, and suffering is by His design. If you cannot have a better representative of sin than Adam and Eve, God is not omnipotent, according to your ways, not mine, of God.

[005] RESPONSE: Adam and Eve cannot be the most perfect representatives, either.
The persons of Enoch and Elijah, in more pressing times than Adam and Eve ever had, were transfigured instead of seeing death. The order of Melchizedek is also highly thought of as the archetype of Christ. We are all, by your reckoning, stained by Adam and Eve's original sin. They get spanked, and the children feel it for all descendants. Even the Bible says the sins of the fathers are not visited upon the children for more than 14 generations, when they continue in sin. So where are the immortals, and the women not suffering pain in childbearing, and the fathers not toiling in thorns and briers to support the world? "God has [NOT] let them all go, against his orders, so that he might have mercy on them all" (~Romans 11:32).

[002]====: "[When I see] [my son] [trying to climb up on the counter], and [I] tell [him] to stop, and then follow it with "if you don't, you'll be sorry." When [I see] [him] fall later, it's never a joyful "I told you so" moment, but a "this is the natural fruit of your decisions, and why I warned you against this course of action" moment."

[003]====RESPONSE: [When God Omnisciently Knows] [ALL of His children] [will turn to sin (except Himself as Jesus)], and [God] tell[s all] to stop, and then follow it with "if you don't, you'll be sorry[, surely die, work with briers, labor in childbirth, suffer wars, genocides, accidents, concentration camps, illness, ignorance, et cetera.]" When [God Knowingly Permits] [all humans to] fall later [exactly as HE KNEW WOULD HAPPEN, OMNISCIENTLY], it's never a joyful "I told you so" moment, but a "this is the natural fruit of your decisions, and why I warned you against this course of action" moment.

[004]====: No, it's not joyful as such (though St. Augustine speaks of original sin as "that happy fault" which occasioned the supreme sacrifice of God becoming man in Christ Jesus).

[005] RESPONSE: Sin nature, that happy fault, some humans call it?
God as Jesus may be a supreme sacrifice in the eyes of humans, but it is a small thing in the eyes of an infinite God, over our tiny dirty-rags sinful bodies, as a drop of water in the ocean. God as Jesus, as humans appear to teach, was not an infinite sacrifice, as Satan and humanity SUM TOTALED are only finite in evil compared to God's infinite power, or God would be overpowered by evil, which is not possible. The cross, for God, is just a prick of the thumb of His infinite Glory, Power, and Mercy. I do notice that the theoretical ramifications of God's activities being the responsible power creating all sins' and accidental sufferings' allowances are not addressed, so the theoretical point still stands. God, as men teach, shows not a powerful and persuasive positive force, but uses coercion to faith, under the threat of mortal death and the torments in God's creation modality.

[003]==== RESPONSE: So God Makes sadness for Himself and all concerned, knowingly (A2 omniscient). God KNOWS ALL THESE DIRE THINGS WILL HAPPEN ("why I warned you against this course of action": I SAW IT ALL BEFORE, OMNISCIENTLY). GOD KNEW He would create souls that would choose not to worship His ways. God Knowingly creates the evils and sadnesses via His Creation, which He Knows so well needs His Help.

[004]==== I do not think that he is Sad per se. God is eternally perfect and lacking in nothing, including beatitude/happiness.

[005] RESPONSE: You *think* so, not *know* so? Where's the full armour of God, including the two-edged sword of Truth that cuts both ways? YHVH appears lacking, if God saw fit to create at least 60,000,000,000 humans (before the current 7,000,000,000 humans), given 4000 years at an average of 500,000,000 humans with a 33-year average life span, with the associated human history of suffering and universal death coercion since Adam and Eve. YOU SAY THAT God lacks the power to make love without suffering, as evidenced by the world.
YHVH's hands are tied, if all of the theories are true and lacking powerful refutation. YHVH, with good and evil, suffering and joy, is tied to the laws of Karma from Buddhist and Hindu beliefs, in that the infinitely powerful and knowing YHVH cannot create goodness without creating evil, cannot create goodly awareness in humans without allowing evil sufferings. A Hindu- and Buddhist-restricted God of Christians! God cannot separate Buddhism and Hinduism's YIN from YANG. And God can ONLY redeem by permitting His suicide, going to earth, after trying the Garden and failing, preserving the lineages around Noah and failing in the Flood, giving the Law of Moses and failing with no one justified under a Law, making the Faith in Jesus and the unpardonable sins, the last uprising at the end of the Millennium, and creating Hell? By your thinking, God's own non-self-contradicting ways include such Buddhist-Hindu Karmic forces that EVEN He as YHVH is tied to, inseparable, God and Satan walking hand in hand down the road, YHVH seemingly incapable of making good without making evil, or He would have made a universe with good and no evil, in the universe He is responsible for Making, if it is taken as true that, Mark 10: "27 And Jesus looking upon them saith, With men it is impossible, but not with God: for with God all things are possible." The whole Bible of YHVH God Elohiym is a stumbling block to humanity of finite capacity, falling short of a perfect god. The God of Christ, as you portray Him, appears Hindu-Buddhist tied.

[005] RESPONSE: You THINK God is not grieved?

[004]==== God knew they would happen, but that greater things would be able to happen because he would permit them.
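As an aside, the 60-billion figure quoted earlier can be checked as back-of-envelope arithmetic (the 4000-year span, the 500-million average population, and the 33-year life span are the thread's own assumptions, not established demographics):

```python
# Back-of-envelope check of the ~60,000,000,000-humans figure quoted in
# the thread. All three inputs are the thread's own assumptions.
years = 4000                  # assumed span since Adam and Eve
avg_population = 500_000_000  # assumed average population over that span
life_span = 33                # assumed average life span in years

generations = years / life_span             # ~121 successive lifetimes
total_lives = generations * avg_population  # ~6.06e10 lives

print(round(total_lives / 1e9, 1))          # -> 60.6 (billions)
```

So the quoted "at least 60,000,000,000" is internally consistent with those assumed inputs.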
[005] RESPONSE: So, by that line of thinking you presume to posit: for YHVH to become closer to the Jews, who through the Levitical priesthood were responsible for keeping the temple and government pure, and who through that same priesthood were also directly responsible for allowing those evil Kenite and Nephilim scribes, who are not of God, to sneak into the temple works, and so are responsible for allowing the Evil Scribes to infiltrate and seek Jesus's death on the cross, God later allowed WWII to occur, with YHVH allowing the killing of millions of Jews under Hitler's regime, so God could be closer to them, and receive His vengeance, and be able to Give them Zion? God loves suffering, so God can become closer to His own? I only thought Satan was of that character, as Humans teach. What does God NEED from Humans that HE is lacking, that God must love the suffering so YHVH can give us more? What is the Karmic Bank of God running short of, that He NEEDS sins and sufferings the more, so He may give us the White Robes of our Works as rewards? He cannot be the individual and personal teacher to ALL HUMANS, with a clear voice that no one can ignore, but must be heard through His own imperfect humanity on earth, carrying His message corrupted through time and space and interpretations that only the brightest can follow? Then God also has love for the weak and apparently damaged humans, with passive deformations, diseases, weaknesses, and inability to understand. God makes everything under the SUN, and calls it Good? God says there is only one way, and calls the concentration-camp suffering of the Jews GOOD, for their lack of responsibility, so that His finite patience is satisfied and He can lovingly, mercifully give the irresponsible Levitical Jews Zion? God allows and creates the suffering all the more, so God can give all the more. God creates suffering to create gifts. God cannot give of Himself without Our First Sinning.
God gives later humans, after billions of humans have died in numerous beliefs and places, Grace, only after He is killed on the Cross through the finite-human-free-will Jews' lack of responsibility over the temples. God cannot give without others first suffering under sins by the Laws YHVH sets up, under temptations and toils and battles. And does God NEED our suffering to give of Himself, or does God just WANT our suffering to give of Himself? For the crucifixion case, one can view Hitler as God's tool of a martyr, sending the responsible progeny to suffering and innocent death, the necessary pact, to be able to give the Israel He threatened with a writ of divorce the Zion He promised. Can you not see what I am saying of the contradiction that appears to exist, if you are of the word of TRUTH, and can illuminate God's power better than myself?

[005] RESPONSE: As I can imagine a better connection of God personally with every human in a Design, with YHVH serving as a personal guide, teacher, and corrector that doesn't push some away to rejection of what is obviously The One Way, as God is all powerful, I would definitely be grieved, as the Holy Spirit can be grieved, so God does feel suffering and sadness, if God is pushed to tempt a writ of divorce on the Jews for their irresponsibility as God's Chosen People. A voice that none of them can ignore, caring and thoughtful and patient and kind and customized to each soul He knows so well, would have made a much happier world, but you say we must fall by YHVH's Design before God can become one with us? How is it I can imagine a better world than the All-Knowing, All-Powerful God? God makes a much better Shakespeare, with tragedy and conflict as the Design, and not harmony and shepherding.

[005] RESPONSE: If I were a Merciful, Patient, Infinite God, I would have spoken to Adam and Eve as they were about to go forward.
I would have given them a taste of their progeny's suffering, so that they would understand what they were about to do. I would have given them the sword, not enough to kill, but to teach them, as those wounds would heal, but they would think twice and three times, taught infinite times, before going for the Tree of the Knowledge of Good and Evil. I would have been a personal God to all humans, with such a force of communication. But instead, God's spirit leaves humanity to its own devices, so YHVH can become closer to what He Created His Way. Something is wrong with God as humanity illuminates Him. I can imagine that immortal but trained humanity, with a tangible suffering BEFORE they sin and fall, so I can have them good de facto, and loving the right path, with a true Spirit of God correcting before the correction is required, by showing the evils of potentials. But that's too good for YHVH over His Creation.

[005] RESPONSE: God is tenderhearted? God punishes all progeny for Adam and Eve, given the entire context that still stands? YHVH is the Great Exception Model Idol of Humanity He Created His Way? God's Holy Spirit can be Grieved, and you say He isn't sad per se?

[005] RESPONSE: Regarding the Kenites, seed of Satan:

[005] RESPONSE: 1Chronicles:2:"55 And the families of the scribes which dwelt at Jabez; the Tirathites, the Shimeathites, and Suchathites. These are the Kenites that came of Hemath, the father of the house of Rechab."
[005] RESPONSE: Regarding the Nethinim and servants of captured peoples, allowing pollution of the body:

[005] RESPONSE: Ezra:2:"43 The Nethinims: the children of Ziha, the children of Hasupha, the children of Tabbaoth, 44 The children of Keros, the children of Siaha, the children of Padon, 45 The children of Lebanah, the children of Hagabah, the children of Akkub, 46 The children of Hagab, the children of Shalmai, the children of Hanan, 47 The children of Giddel, the children of Gahar, the children of Reaiah, 48 The children of Rezin, the children of Nekoda, the children of Gazzam, 49 The children of Uzza, the children of Paseah, the children of Besai, 50 The children of Asnah, the children of Mehunim, the children of Nephusim, 51 The children of Bakbuk, the children of Hakupha, the children of Harhur, 52 The children of Bazluth, the children of Mehida, the children of Harsha, 53 The children of Barkos, the children of Sisera, the children of Thamah, 54 The children of Neziah, the children of Hatipha. 55 The children of Solomon's servants: the children of Sotai, the children of Sophereth, the children of Peruda, 56 The children of Jaalah, the children of Darkon, the children of Giddel, 57 The children of Shephatiah, the children of Hattil, the children of Pochereth of Zebaim, the children of Ami. 58 All the Nethinims, and the children of Solomon's servants, were three hundred ninety and two. 59 And these were they which went up from Telmelah, Telharsa, Cherub, Addan, and Immer: but they could not shew their father's house, and their seed, whether they were of Israel: 60 The children of Delaiah, the children of Tobiah, the children of Nekoda, six hundred fifty and two."
[005] RESPONSE: Regarding more polluting of the priestly lines, whole "Judah" sum, 42,360 people ***:

[005] RESPONSE: Ezra:2:"61 And of the children of the priests: the children of Habaiah, the children of Koz, the children of Barzillai; which took a wife of the daughters of Barzillai the Gileadite, and was called after their name: 62 These sought their register among those that were reckoned by genealogy, but they were not found: therefore were they, as polluted, put from the priesthood. 63 And the Tirshatha said unto them, that they should not eat of the most holy things, till there stood up a priest with Urim and with Thummim. 64 The whole congregation together was forty and two thousand three hundred and threescore ***, 65 Beside their servants and their maids, of whom there were seven thousand three hundred thirty and seven: and there were among them two hundred singing men and singing women. 66 Their horses were seven hundred thirty and six; their mules, two hundred forty and five; 67 Their camels, four hundred thirty and five; their asses, six thousand seven hundred and twenty. 68 And some of the chief of the fathers, when they came to the house of the LORD which is at Jerusalem, offered freely for the house of God to set it up in his place: 69 They gave after their ability unto the treasure of the work threescore and one thousand drams of gold, and five thousand pound of silver, and one hundred priests' garments. 70 So the priests, and the Levites, and some of the people, and the singers, and the porters, and the Nethinims, dwelt in their cities, and all Israel in their cities."

[005] RESPONSE: Pure sum 31,583 people, yielding 10,777 corrupting the branch of Judah:

[005] RESPONSE: Nehemiah:7:"5 And my God put into mine heart to gather together the nobles, and the rulers, and the people, that they might be reckoned by genealogy. And I found a register of the genealogy of them which came up at the first, and found written therein, ..."
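The arithmetic behind the figures just quoted can be checked directly. A minimal sketch: 42,360 is the congregation total stated in Ezra 2:64, while 31,583 is the "pure sum" asserted above (the author's figure, not independently verified here); the difference is the claimed count of those who could not prove their descent.

```python
# Congregation total from Ezra 2:64 ("forty and two thousand three hundred and threescore").
total_congregation = 42_360

# The "pure sum" of genealogically verified families asserted in the text above
# (the author's figure, not re-derived from the verse-by-verse counts).
pure_sum = 31_583

# Remainder: those who "could not shew their father's house" (Ezra 2:59-62).
unverified = total_congregation - pure_sum
print(unverified)  # 10777, matching the figure quoted above
```

So the subtraction itself is internally consistent with the 10,777 figure the text gives, whatever one makes of the underlying family tallies.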
[005] RESPONSE: Regarding other forces of infiltration of the Levitical church:

[005] RESPONSE: Regarding the corrupted Levitical priesthood of Judah:

[005] RESPONSE: Revelation:2:"9 I know thy works, and tribulation, and poverty, (but thou art rich) and I know the blasphemy of them which say they are Jews, and are not, but are the synagogue of Satan."

[005] RESPONSE: Regarding pain and suffering, it seems that pain and suffering, with an accompanying unceasing complaining to God, are actually quite old, even ancient. Take the following few passages, of many others, showing the ancient nature of murmuring against adversity in God's world:

[005] RESPONSE: Genesis 4:13-14,
[005] RESPONSE: Exodus 14:10-14,
[005] RESPONSE: Exodus 15:24-25,
[005] RESPONSE: Exodus 16:7-8, 12,
[005] RESPONSE: Exodus 17:2-4,
[005] RESPONSE: Exodus 32:1-10,
[005] RESPONSE: Exodus 32:23,
[005] RESPONSE: Numbers 11:1-6, 10-11,
[005] RESPONSE: Numbers 13:31-14:4, 11-12, 26-29, 35-36,
[005] RESPONSE: Numbers 20:2-5,
[005] RESPONSE: Numbers 21:4-5.

[004]==== Because we can suffer, we can have compassion. Because we can suffer we can truly love others and deny ourselves. Because we can suffer we can forsake our own lives for Him. And in the grand scheme of things, this compassion, this love, this forsaking of self, outweighs the evil of suffering and our present condition.

[005] RESPONSE: Compassion is being perfect, allowing the imperfect to fall under their endowment with free-will imperfection, and thus allowing one to become closer to the sinful design. The Good God YHVH Elohiym cannot be one with ones that have not become sinful yet, so He designs them to fall in free will. This requires a justification, also. You state it as a fact, without a description of the "physics", just the "fact"; not a good approach.
So the Jews, who among the Levitical priesthood and such were responsible for keeping evildoers from temple service, were indirectly responsible for the death of Jesus, through the evildoers who crept into the temple works, and so are justified, by your logic, in the suffering 2000 years later, entering the Concentration Camps of Hitler. Because they will be receiving better for their suffering. And why do we need something better? Because God couldn't produce the good in the first place? I make the theoretical argument: if suffering and self-sacrifice are required to receive the treat of better things, then the whole earth should be the Concentration Camp, so we can know Joy even more deeply through deeper suffering, and self-sacrifice more selfless, from the scarcity of a dismal Concentration Camp. The worse the world is, the better the treat God gives us in recompense for our own accomplishments. And even then, why is it of US and not from GOD? We are nothing but YHVH's PETS? To be thrown in HELL for finite belief on full rejection of YHVH, and suffering received ad hoc, for existing with free will, to become one with God? That God can't make good without evils still stands against your refutations.

[004]==== Because we fell, Christ could redeem us, becoming one with us.

[005] RESPONSE: Because God wanted to redeem us and become one with us, from our God-created sinful state, He designed us to fall so He could save us. Peachy God, you propone. God cannot become one with what He Creates, but can only become one with those He designs to FALL FIRST, with the free will that God gave us. You imply that God cherry-picks the good fallen, and throws away the finite-understanding evil ones He Created into Hell in Revelation, not found in the Book of Life.
[001]==== (I2) Why is God not capable of creating soft free will so that 100% of souls will be saved (not-A5)(not-A1)(not-A2), as God only makes souls so potentially heavy that not even God can lift them for salvation (A1), with His infinite power, patience, mercy, not saving, but building Hell to destroy souls in Revelation (A1)(not-A5)? In fact, if God is omnipotent and omniscient (A1)(A2), why didn't He SIMPLY create 100% saved free-will beings before time began, when cause and effect didn't apply to logic and God, so these temporal self-contradictions can't exist in the first place, and He could have achieved PERFECTION in SALVATION, but only makes willy-nilly units for salvation that happen to go HIS WAY?

[002]==== "I think that creating 'free will' that does what you want it to do 100% of the time is a contradiction. That's like saying 'why didn't God create circles that have corners?' Love cannot be forced, and God, who is Love, is not a rapist. God wanted us to be free to love Him and know Him, but this NECESSITATES that we be actually free to reject him - and if we are free to do so, then it is a REAL POSSIBILITY that could happen. And we know it was a real possibility precisely because it DID happen (and while this is pure speculation on my part which probably wouldn't apply to perfect people, I wonder if we'd wonder if we were really free ever had we never fallen)."

[003]==== Think!? Not Know, Gnosis, Logos. THEREFORE BY YOUR BELIEF, God CANNOT create souls 100% destined for salvation. God is an imperfect GOOD creator. God KNOWS He creates SUFFERING and HELL for some of His Creation, but God is NOT RESPONSIBLE for what God KNOWINGLY CREATES IMPERFECTLY. Examine the (I4) RESPONSE for Bible verses referencing the contradiction of what is possible and not possible.

[004]==== This is no more a lacking in God than his inability to create round squares or make 2 and 2 equal 434.2. We're either free to decide to love him, or we're not and we're robots.
As soon as you make humanity "free", you give up the "100%" control.

[005] RESPONSE: What do you mean here? Where God can make pi=3.0, making a triangle-circle, as His Book allows.

[005] RESPONSE: I Kings:7:"23 And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about. 24 And under the brim of it round about there were knops compassing it, ten in a cubit, compassing the sea round about: the knops were cast in two rows, when it was cast."

[005] RESPONSE: You admit then that your teachings of YHVH Elohiym Lord God are of a God of finite capability by design? Teaching to reach the world, using a mysteriously translated and interpreted text, in HUMAN HANDS, like myself, a monkey typing randomly at a keyboard of God concepts? So all things are not possible with your taught God YHVH, as YHVH HAS NOT THAT WAY right now in His Visible Design. YHVH must allow His created inferiority to fall all about Himself, so that He can come in on a white horse and save the day, as if no universe that can be imagined or made is better than this show-world, for God's own rules are held to Hindu-Buddhist Karma: fall before being risen, after the fact? God chooses and desires to make chaos, less than 100% harmony, to His personal voice that is unmistakable and preserving, but not really.

[004]==== 1) Hell is primarily THE STATE OF SELF-EXCLUSION FROM COMMUNION WITH GOD. It is what happens to those who choose to remain apart from him.

[004]==== 2) It is reserved for only those who commit the only unpardonable sin, "blasphemy against the holy spirit" - but what this amounts to is simply refusing the love and forgiveness of God.

[004]==== 3) The primary punishment of Hell is SELF-EXCLUSION FROM COMMUNION WITH GOD.
(sorry for the all caps, I wish they'd let me italicize or bold on here)

[005] RESPONSE: What make we of the following?

[005] RESPONSE: And more so, the acrostic of the Psalms, showing us witnessing those condemned, as they were designed by YHVH?

[005] RESPONSE: Psalms:37:"7 Resign thyself unto the LORD, and wait patiently for Him; fret not thyself because of him who prospereth in his way, because of the man who bringeth wicked devices to pass. ... 20 For the wicked shall perish, and the enemies of the LORD shall be as the fat of lambs--they shall pass away in smoke, they shall pass away. ... 34 Wait for the LORD, and keep His way, and He will exalt thee to inherit the land; when the wicked are cut off, thou shalt see it."

[004]==== Hell is commonly imaged as "fire and brimstone" in and out of scripture; these are meant to analogically impart an understanding of the nature of hell and are not necessarily literal (though they are not necessarily not, either!)

[005] RESPONSE: So, God makes threats of death so we may free-will choose to overlook His Threats of Death, Fire, and Brimstone, rising forever and ever, and the fat of the lambs striking the fire? That is compassion, to use threats as the Good Guide to The One Way? Using threats to coerce free-will love is GOOD, you say?

[004]==== Hell primarily is the state of self-exclusion from communion with God. Remember, God does not force love. He gives it freely and it must be freely accepted. "God is not a rapist." C. S. Lewis wrote:

[005] RESPONSE: So this confirms, by your interpretations, the use of coercion to life, by threats of the eternal snuffing out, against those who are in full recognition of the universe of God and willfully choose destruction, even though they know all things, like a God, in some future end-of-Millennium threat of Hell Death and Destruction, to be destroyed, instead of simply making a personal connection to God for all humans, in True Patient, Instructing Mercy and Benevolence.
I didn't know COERCION to LIFE IS LOVE, for some of God's OWN DESIGN. Sounds more like Shakespeare than a Merciful, Personal, Patient God.

[004]==== "Christianity asserts that every individual human being is going to live forever, and this must be either true or false. Now there are a good many things which would not be worth bothering about if I were going to live only seventy years, but which I had better bother about very seriously if I am going to live for ever. Perhaps my bad temper or my jealousy are gradually getting worse - so gradually that the increase in seventy years will not be very noticeable. But it might be absolute hell in a million years: In fact if Christianity is true, Hell is the precisely correct technical term for what it would be."

[005] RESPONSE: Support for Hell as an instrument of His Coercion of free-will losers that He Created That Way, under a God that can use the most vile of humans as His Instrument of Coercion.

[004]==== Incidentally, both heaven and hell are described as places of fire. The highest choir of angels are in fact called "the Burning Ones" (i.e. the Seraphim) because they behold the face of God. The imagery of fire is meant to convey something more than physical appearance. Fire can purify and it can consume and destroy.

[005] RESPONSE: No doubt, as the fiery furnace doesn't touch the Good of God in that moment, and kills the stokers of the Fire. God's Fire, agreed, is a consuming fire of the wicked, that God creates and allows, as the instrument of suffering and trials and ultimate destruction, promised or threatened only, in Revelation and Psalms, just to start.

[004]==== Moreover, God predestines no one to hell, and wills that all should be saved. If all are saved in the end (which is possible, but doubtful) then his will prevails. If some are not saved, that is because, being Love who created freely, he gave up the reins to his creation.

[005] RESPONSE: So the following is an empty threat, again?
A coercion to free faith and love? A cunning linguist of mystery and confusion, much closer to Satan than the Good God, being held back by His own Karmic Forces of YIN and YANG? Is this lake of fire there just for fun, in the end, and God leaves that little part off the book, to coerce the love of most of His Imperfect Creation? Just like God is too Perfect and Good to help us directly by Correcting us Directly, but uses the evil ones to correct us, to Keep His Hands Clean? It would seem much more civil of God YHVH to plant His Voice and Means of Correction directly in all people, so we are corrected, warned, and instructed before what happens, to drive the crooked to Him strongly, drive the good to Him for the rewards, and show the evil the Good. But that seems not to be, as many teach. And it would remove the threats of death, destruction, distortion, nature, and such, and make us concentrate on God's ways, and not all of the other ways we must combat in addition to being Good. The mere fact that God made Satan the Guarding Cherub, and also gave Satan free will, shows a conflict of design, where the Angels and Elohiym are supposed to be God's unwavering programmed servants of God's Kingdom, and not be given, as well as all humans, free will, which is sin nature in infinite time. And God knows all of these things will happen.

[005] RESPONSE: Revelation:20:"15 And whosoever was not found written in the book of life was cast into the lake of fire."

[001]==== (I3) Why was God so short of longsuffering patience with Adam and Eve, but simply punishes them at their first disobedience, AND punishes them and their descendants to the end of time, for practical purposes (not-A5)?

[002]==== "Well, (1) in the first place, he had commanded Adam at the start - Adam had heard the mandate of God directly, and the doom which lay upon the action. Yet Adam and Eve sought to be 'like God, but without God', turning against their one source of ultimate happiness.
(2) Secondly, it wasn't simple disobedience but insurrection, a turning against. (3) Third, they are not punished to the end of time, and this self-same deity became one of them and shared in their sufferings and misery, that he might thus elevate them."

[003]==== RESPONSE: (1) But the Bible's Genesis gives no record of mercy, of correcting and removing their sin in love... right away allowing the suffering to the end of time for all. No MERCY, nor FORGIVENESS, nor LONGSUFFERING, IN THAT TIME, when all things are possible with God. THE GOOD GOD (A5) IS NOT FOUND IN GENESIS, RIPPLING TO THE END OF TIME BY HIS DESIGN. (2) Correction on point: ALL disobedience of God and His Ways (SIN) is insurrection to God: Romans:11:"30 For as ye in times past have not believed God, yet have now obtained mercy through their unbelief (disobedience, turning away). Even so have these also now not believed, that through your mercy they also may obtain mercy." And that doesn't even cover natural accidents, uncorrectable, yet suffering. (3) We still die, even today. Do you see IMMORTALS walking around in a GARDEN anymore, NOT having women's child-bearing pains, and NOT working from the sweat of their brow to support woman? I see no such thing; the consequences of Adam and Eve work until the end of time by God's Created Design, until Jesus's return. My issue still stands with your answer.

[004]==== All things may be possible, but he does what is fitting and best, and we can assume that things are as they are for a reason. That said, I do not see how you can say there is no mercy. From that very moment, the path of all salvation history was laid out by God to effect the salvation of humanity, and the reconciliation of us all to Himself through Himself. That it didn't happen instantaneously doesn't mean that it doesn't ripple through all of history.

[005] RESPONSE: MAY is not IS.
Fitting and Best, under an Infinitely Powerful God, is THE BEST, and nothing less than THE BEST, whether under human imaginations, or God's truth of limitations, seemingly taught as Karmic Hindu-Buddhist limitations placed on God's Saving Hands.

[004]==== Christ himself, when he died, descended into hell to preach the gospel to those "who were disobedient in the days of Noah" (1 Peter 3). The power of his resurrection and defeat of sin rippled back to the very beginning of history, and even to those who were wicked, to present salvation to them.

[004]==== Meanwhile, all of the created world, fallen though it is, is still good - just disordered. "God so loved the world" says that famous passage John 3:16, because the world was created good, and Man was created "very good".

[005] RESPONSE: So long after the fact, 4000-odd years later, God goes back to correct some things FINALLY, by teaching them directly, when God could have averted everything with patience, mercy, benevolence, and a personal spirit voice and bodily force of correction and instruction of future timeline paths of time-space, in all men. That offer of salvation comes billions of people later, in the rough portion of the 60,000,000,000-odd people over the recent 4000 years. God creates disorder, you say, to teach us a coherently understandable lesson, with finite free-will sinful natures? Go figure! It is a job that is never quite proven to the finish until all things are done in the Shakespearean complexity of life over the approximately 67,000,000,000 people, so far, destined or delivered to death in this world, as the loving, patient coercion to life's ultimate choice. And if time is a constant that is fixed and KNOWN IN YHVH'S OMNISCIENCE, then nothing ripples back and forth in the crystalline disorder-perfection of the Mystery of YHVH's Works, as it is one object of static playback to an INFINITE OMNISCIENT GOD.
Fluctuations in time-space as you describe are unknowns of chaos and disorder that God Himself cannot know omnisciently. And for that matter, several billions more people lived after 1 AD, far away from Christianity's only path for salvation and rewards of white robes for one's works in faith in Jesus's finite sacrifice, as humanity and Satan's evil are finite, within an Infinite YHVH Lord God.

[004]==== I don't take your point…

[005] RESPONSE: I see no correction from the Word, The Bible. Not all sin is turning away from God? We can pick and choose sins to commit, to forward God? Some sins are a turning toward God, as you imply here, without correction?

[004]==== Yes, presently death is part of the human condition. But that death is not the end of the story. (Also, the punishment for the woman was an INCREASE in the pangs of birth, not the creation of them, and that very well may have been due to a DECREASE in our true, God-given reasoning ability to cope with such pains.) Pain and our passions are DISORDERED now, and wouldn't exist as they do now had humanity not fallen - but this is all hypothetical and we could talk circles about this for hours and get nowhere but interesting speculation.

[005] RESPONSE: So believing in Jesus is still a promise beyond the knowledge of comfort, which in this world, when viewed as a Satanic Prison Camp for labor and mind control of the population, to just go the way of the dominions, powers, and principalities of pure evil and deception and temptation and confusion, will never be broken, by your finite thinking, if there really is no God, and we have a REAL WAR to wage against ALL OF THE EVIL FORCES OF THE WORLD, without a God. And a God-given disordered sense of right and wrong cannot be the way to train tried gold, and dispose of the rest as dross creation by a perfect YHVH LORD God.
In this world of 100% death, with only a PROMISE from a currently unseen force of TOTAL POWER that it will go on forever, until an uncertain promised date of arrival of a questionable Infinitely Powerful God, the powers and principalities of evil humans will have their way with humanity, with a corrupt set of multiple world religions, which no one agrees are of one Christ, distorted and diffused from truth by their wicked world-controlling ways. The Death Camp Earth, if Aliens or human forces of evil hold great power over the whole earth, without a God, on the assumption of an unfulfilled ultimate promise GRANTED ONLY AFTER YOU'VE DIED, will never be defeated in this age of earth, when it is true death that controls our lives from birth to death, and everyone on the whole earth, practically speaking, believes that scarcity and death are the only way of the world, until that promised day to come AFTER WE'VE ALL DIED, that may never come, if you play the Devil's advocate, where we are truly on the planet of control by the Evil Ones, that they themselves permit, in Powers, Principalities, and Dominions of a guaranteed dying earth, without an Infinitely Powerful God showing His True Ultimate Powers in practice, where the humans are trained to believe that God's ways are in disorder and trials, and promises fulfilled after death from this plane, that we all give up on as hopeless under the coercive truth of a mythical God's mysterious planet-controlling way.

[003]==== RESPONSE: If I were God, I would have removed their sin, punished JUST THEM, removed their knowledge, and let them go again, in perfection from then on in the Garden, and put flaming swords around the tree, with MERCY and PATIENCE toward their foolish ways (even though they are SO-CALLED PERFECT). That's mercy, longsuffering, and kindness to them AND all the generations to the end of time. But they get spanked, and we all feel it! The sins of the fathers, borne by the innocent children to the end of time.
Even if you accept Jesus, you still suffer all these things. You can imagine what is NOT WRITTEN: Adam says "please forgive us, God"; God says, "Not this first time, sorry, but the ramifications to the end of this age are IRREVOCABLE by the way I set things up. I can show you no Mercy here."

[004]==== Are you telling me that mercy means never letting people taste the fruits of what they've sown? No, sir. That's coddling, and God does not coddle. This isn't simple children's games here. This was a primordial choice of humanity.

[005] RESPONSE: You are putting me on, aren't you? You can't be seriously taking my points. Maybe I'm wrong, though. You give them a taste from the future, but just a taste for correction, which then causes the timelines of all future activities of the roughly 67,000,000,000 humans to completely vary, in a way God is completely in control of, but doesn't actually allow, just predicts, so that the root doesn't make the countless stems stand on end. I'm talking correction, instruction, and mercy for 67,000,000,000 humans, and you call it child's play. Please, why the lack of serious instruction here? Mercy is never letting the descendants suffer for the stumbling and learning of the roots, and from the Creator of it all, Himself. And all of the correction received, in the whole world, is personal from God, and coherent, toward coherent Good Ways in Understanding and Appropriate Ways of Filial Piety. God makes 67,000,000,000 humans, and mostly lets them teach each other, the blind leading the blind, when God's Infinite Power can bring it all together? What is God short on, that He cannot correct, alter the potential timelines back into order, instead of creating the disorder the world seems to be witnessing? And you call that love, for Adam and Eve to get spanked, and everyone else of the 67,000,000,000 to suffer too?
[005] RESPONSE: And if I do take you seriously, then the whole planet should truly be a Concentration Camp, as if Hitler, Hirohito, and Mussolini had succeeded in transforming the planet into a fascist camp, so we may be better brought closer to God. If you are verily being serious about God's Word here, that would truly be the least coddling planet to bring humans closer to God, in an ultimate payback for Karmic imbalances, so God can give us the ultimate.

[004]==== And He suffered them too, even being sinless, because he is love.

[004]==== You'll notice Adam never asks for forgiveness. He and Eve simply play the blame game, and instead of damning them to hell for all eternity, he lets them live, and settles them himself east of the garden (Gen 3:24), and promises that he himself will redeem them ("I will put enmity between you and the woman, and between your offspring and hers; He will strike at your head, while you strike at his heel," he says to the Serpent, who is Satan, foreshadowing what God himself does in Christ Jesus, Himself Incarnate).

[005] RESPONSE: Maybe that was left out of the Book. Maybe Adam was scared. Maybe Adam was more like a little innocent child, trying to hide what God told them was a sin. For any reason, Adam and Eve get spanked, and all of history feels the ramifications, as is taught by some Christians. Blame games *are* for children, or for those lost in the world who cannot comprehend God's Ultimate Means no matter how hard they try, wishing to know, earnestly seeking God's advice, which doesn't coherently come through the world, or from within from God. And then God punishes all children, as they are small beings compared to the Ultimate Mercy, Knowledge, Power, and Grace of God.
He damns all to die in this world, accepting only a promise, which can easily be a deception of a twisted world of true powers, dominions, and principalities, that truly have the sheep fooled into believing in scarcity, conflict, suffering, fairness, disorder, chaos, truths, distortions, mystery, et cetera. Promises, promises, over 67,000,000,000 humans, who are not all Christians or God's Chosen People in the Jews. Why? Because He is Love, and must let His Creation fall, you say, so that only then can He come closer to us? What a waste of the true infinite Power of God YHVH, the Creator of all things, in His Perfect Order.

[001]==== (I4) If God is omniscient and omnipotent (A1)(A2), why can't He make ANY PERFECT good without knowingly, complicitly ALLOWING even one iota of evil (A2), implying God is not omnipotent enough to not self-contradict; but then God is impotent in time-space free-will domains, creating a disunity, for which HE IS RESPONSIBLE, as (A6) implies responsibility for what He created.

[002]==== "God being 'omnipotent' doesn't mean he can do ANYTHING. He cannot do what is a logical contradiction, and so while he positively wills no evil to happen, he permissively allows it, that from the potentiality for evil he can bring greater good. He is responsible for creating creatures which were free to reject him, not for their rejection of Him."

[003]==== RESPONSE: You say it is a contradiction that God makes Good without Evil. Many say, outside of time, God is beyond logic and contradiction, so ALL THINGS ARE POSSIBLE BY GOD... BUT NO, THEY AREN'T!? Despite Matthew:19:"25 When his disciples heard it, they were exceedingly amazed, saying, Who then can be saved? 26 But Jesus beheld them, and said unto them, With men this is impossible; but with God [not quite?] all things are possible." NOT SO. Revelation:20:"14 And death and hell were cast into the lake of fire. This is the second death.
15 And whosoever was not found written in the book of life was cast into the lake of fire." (I2, God doesn't save 100%, God CAN'T make that kind of free-will human; God creates and authors Known Death.) He Makes the capacity of Rejection, He Makes Death of His Reject Humans. God's choices lie within Himself, as the Ultimate Free-Will to Good Ways. God's suffering is real, and it is a temporal taste of the natural fruition that comes from allowing free-will finite humans, when looking for Love from humanity, producing: accidents, "natural" disasters, genocides, concentration camps, wars, disease, ignorance, et cetera.

[004]====All things which are possible are possible. It is not possible to have a circle which is a square (boxing rings aside) because it is sheer nonsense. That's like faulting God because he cannot "galkjeroihklmanedbo" or won't "dlakngwekn2k 9ib9". Something which is a logical contradiction is not something at all, but merely a "null set" of linguistic symbols strung together. God cannot "make a flower that he did not make" because that is nonsense. It's not God's fault, but simply a problem within human language and mind that makes us want to think that such things could be made. There is no triangle with 4 sides, and God could not change that (except in changing the very meaning of "triangle" to that of "quadrilateral", but that is not then creating a 4 sided triangle, but simply swapping words.)

[005]RESPONSE: YHVH LORD GOD'S INFINITE CAPACITIES ARE TIED DOWN BY MEN'S HANDS TO SNATCH SOULS AWAY FROM HIS TRUTH, CONSTRAINED BY KARMIC HINDU-BUDDHIST EQUATIONS. Utopia and Heaven, which are possible, are not "gobbledegook babylon" as you put it. Why is Heaven de facto gobbledegook? Because God must watch us fall in our free-will before He can Correct us. Why is preemptive correction and instruction, with a taste of what might happen, something that God cannot give everyone?
Why does God make Souls so Heavy, that He Cannot Save them better than the way things appear, when asking for understanding from God's Word? It's not like I haven't asked YHVH LORD God, and God's representatives, that are possibly corrupted, or playing blame games and dissembling about God's True Plan, that should be so easy to understand that even a Child, which we all are, can understand the complexity from God Himself? You simply say human language is incapable of helping all come to a closer understanding of God, so God hides wisdom from humans, because He speaks a language of correction that few can understand coherently, as we are not Gods ourselves. How can the Ultimate Teacher not have a word that rings true, blasts away the chaff of untruth, and is visible and unquestionable by ALL HUMANITY? God is holding back, allowing the apparent sufferings, keeps the threat of death in this world, even after accepting Jesus, in faith? Please, intensify your words, as I am missing the connection of the mystery we must simply ACCEPT? Jehovah's Witnesses have convictions of the Ultimate. Jews have convictions of the Ultimate. Muslims have convictions of the Ultimate. Hindus have convictions of the Ultimate. Buddhists have convictions of the Ultimate. Zoroastrians have convictions of the Ultimate. Theist Science has convictions of the Ultimate. A-theist Science has convictions of the Ultimate. Ancient Greeks have convictions of the Ultimate. Ancient Romans have convictions of the Ultimate. Baptists have convictions of the Ultimate. Lutherans have convictions of the Ultimate. Yet all humans play the blame game like children, and the One Way banishes and separates into divisions the One Body of YHVH God?
Yes, God has a hard time communicating The Truth in Power and Clarity and Correction, and we have to be as Gods to understand which is which, or suffer cycles of retraining and recorrection because no one step of correction is perfect under the Infinite Power and Mercy of God? All sides say, from God, just have faith.

[005]RESPONSE: So let's begin by saying, what is the definition of God's Karma, that He is apparently one and the same power of YIN and YANG, as you teach, pray tell?

[004]====Bear in mind that God is eternal, and outside time. If you suffered seemingly immeasurable pain for 1 million years, but then experienced heaven for all eternity, that suffering would not be even a blip on the radar. 1,000,000 is not even a percent of a percent of a percent…of a percent of infinity. So that we suffer in this life does not mean that those sufferings are the end-all of our existence, or that we will not be compensated.

[005]RESPONSE: So Infinite God in Power over all humanity over time wants what from us? We need to taste 1,000,000 years of suffering. Why Has God received a significant suffering from mankind, that we must all live this way for 67,000,000,000 humans? That is, at 10%, 250,000,000,000 years of suffering, albeit averaged over all humanity. Why doesn't God simply make it 2,500,000,000,000 years of suffering, since more suffering simply makes us closer to God, as you teach? We are finite and confused humans, here. What does the Infinite Loving Merciful God want from us? Coerced Love under threat of Death and living with suffering from foolish desires to a world with just plain old problems, over 67,000,000,000 humans so far? And that doesn't even look at the Creation that groaneth, Made By God, of lifeforms eating lifeforms to survive. Sounds like a Hindu Buddhist Karmic field over the entire age of human history, Under the One Infinite Personal Powerful Merciful God.
"Where have all the soldiers gone, long time passing, where have all the soldiers gone, long time ago? Where have all the soldiers gone? Gone to graveyards everyone. "How many roads must a man walk down before you can call him a man? How many seas must a white dove sail before she sleeps in the sand? How many times must a cannonball fly before they forever are banned? The answer my friend is blowin' in the wind, the answer is blowin' in the wind. How many times must a man look up before he can see the sky? How many ears must one man have before he can hear people cry? How many deaths will it take till he knows that so many people have died? The answer my friend is blowin' in the wind, the answer is blowin' in the wind. How many years can a mountain exist before it is washed to the sea? How many years can some people exist before they're allowed to be free? How many times can a man turn his head pretending he just doesn't see? The answer my friend is blowin' in the wind, the answer is blowin' in the wind"

[004]====I hope you found that helpful.

[004]====I'm going to have to ask you to simplify your questions a bit if you'd like to continue. I want to be thorough, but these are getting a bit long. I'm not opposed to continuing this, so long as you can remain charitable and terse.

[005]RESPONSE: I'm sorry, God threw 5 million characters at ME, through The Canonized Bible, which you should know better than myself. It's hard to believe in something when it is hard to understand and exponentially hiding the truth behind a smokescreen. I cannot be shorter to get through the issues so that you can understand my frame of reference of understanding. And you throw on C. S. Lewis, which I can hang with the Screwtape Letters, but is hardly a Canonized Text.
And The Catechism, from an entity that says with much ink many years ago, that Galileo's Orbits around the Sun were in utter Heresy against God with Justification from God, inerrant, all for saying that the Earth goes around the Sun, so I highly question those other words, outside of the Canon and even Apocryphal texts, without God's Unction filling in the Gaps of understanding, and the same God lets us compute the paths of satellites that investigate and map the solar system, using non-geocentric equations of simplicity. I guess God would have us compute orbits using Claudius Ptolemaeus' Crystal Epicycle Fields, instead of Newton's Gravity, or more precisely, Einstein's Theory of Relativity Gravity Equations. I bet you'll say all of our probes and gravity equations are not real, and are fabrications of the Powers, Principalities, and Dominions that our tax dollars help support, in the Space Programs. Now I do know enough math, to know that one CAN ACTUALLY calculate planetary orbits using Greek Epicycles, but it does make the equations much-much harder, compared to Newton and Einstein, but that's what God wants for us, on earth, and in space, where PI=3.0, too, when the scribes knew better to put that into God's Old Testament?

[005]RESPONSE: This is only about 66,377 characters, or about 1.34% of The Holy Bible in size, in about 15,421 words, in total. All is required to assure I am painting the proper image of complexity and deception, and disorder, the world must cope with, to Know that they Know Salvation in only Faith, in only One Way, when One Way is supposed to be for ALL.

[005]RESPONSE: I guess I may only take comfort in, If God be for me, who can be against me, regarding once saved always saved, in the power of God's Good Hand?
END

[16] Abiogenesis, second version. CREATED AD 2008 09 02 P 08:40

(1) Abiogenesis

..Recently I've been working through a concept for substantiating abiogenesis through the idea of general combinatorial chemistry. I've tried some posts on science sites and religious sites, but none seem to have any coherent opinions that are constructive to addressing the general viability of such a theory effectively. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..General (natural) combinatorial chemistry (GCC / NCC), defined here, is the complete mathematical-chemical model of all reactions that occur in any portion of matter, and its temporal evolution, including feedback. This is opposed to synthetic combinatorial chemistry, as used in the pharmaceutical industry, where chemicals are specifically combinatorially analyzed by chemistry machines and control algorithms. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..To get a feel for what general combinatorial chemistry looks like, as a concretely realized system, take a beaker with 5 chemicals total in aqueous solution. Five chemicals, combinatorially speaking, have the potential for uniquely (2^5 - 1) [specific-reaction-node]s, or 31 [specific-reaction-node]s, where every product at that reaction node must perform some task in a reaction directly or catalytically. Say the beaker starts off containing molecules of water, sodium, chlorine, silver, and fluorine.
From these, the 31 [specific-reaction-node]s exhausted are: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

1 water
2 sodium
3 water sodium
4 chlorine
5 water chlorine
6 sodium chlorine
7 water sodium chlorine
8 silver
9 water silver
10 sodium silver
11 water sodium silver
12 chlorine silver
13 water chlorine silver
14 sodium chlorine silver
15 water sodium chlorine silver
16 fluorine
17 water fluorine
18 sodium fluorine
19 water sodium fluorine
20 chlorine fluorine
21 water chlorine fluorine
22 sodium chlorine fluorine
23 water sodium chlorine fluorine
24 silver fluorine
25 water silver fluorine
26 sodium silver fluorine
27 water sodium silver fluorine
28 chlorine silver fluorine
29 water chlorine silver fluorine
30 sodium chlorine silver fluorine
31 water sodium chlorine silver fluorine

where, offhand, we must recognize that there are, at the very least, the potentials for the following real reactions with stable new products and reactants left over, at some equilibrium level, from the left-hand [specific-reaction-node]: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

(6) 2Na + Cl2 <--> 2NaCl
(24) Ag + F2 <--> AgF2
(3) 2Na + 2H2O <--> 2NaOH + H2
(4) 2Na + F2 <--> 2NaF
(20) Cl2 + F2 <--> 2ClF
(20) Cl2 + 3F2 <--> 2ClF3
(1) 2H2O <--> H3O+ + OH-

..Note that in (20) a reaction node can have more than one possible reaction, like one at high temperature and one at low temperature. So, here, we see that complexity has arisen from simplicity, in that 5 [molecule]s was the starting state, and from only that, there exists the potential for 14 [stable-molecule]s to come to exist, formed by combinatorial chemistry. In much the same way, gravity-fusion yields the complexity of ~92 [natural-element]s (and numerous natural molecules) from the simplicity of 2 [element]s at the big bang.
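The 31-node enumeration above is just the set of non-empty subsets of the five starting chemicals, which can be checked mechanically. A minimal sketch (the ordering here groups nodes by subset size, rather than the binary-counting order of the list above):

```python
from itertools import combinations

# The five starting chemicals of the beaker example.
chemicals = ["water", "sodium", "chlorine", "silver", "fluorine"]

# Each non-empty subset of the chemicals is one [specific-reaction-node],
# so there are 2^5 - 1 = 31 nodes in total.
nodes = [
    subset
    for size in range(1, len(chemicals) + 1)
    for subset in combinations(chemicals, size)
]

print(len(nodes))  # 31
```

The same one-liner scales to any starting pool, which is why the node counts below grow as 2^n - 1.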
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..From that single analytical iteration, there can also exist that property of feedback, mentioned earlier. The second iterative feedback, for this example, takes 14 [molecule]s, yielding (2^14 - 1) [specific-reaction-node]s, or 16,383 [specific-reaction-node]s. Without being exhaustive, let's say just a 0.001 ratio of the reactions will produce new molecules from the original 14 [molecule]s of this iteration. That yields 16 [molecule]s, for a total of 30 [molecule]s. Again, feedback can occur, as new molecules have appeared that otherwise would not have existed. The next iteration has (2^30 - 1), or 1,073,741,823 [specific-reaction-node]s. Let's say, without being rigorous, a 0.00000001 ratio of the reactions produce new molecules. That yields 11 [molecule]s, for 41 [molecule]s total. This gives (2^41 - 1), or 2,199,023,255,551 [specific-reaction-node]s. If a ratio of 0.00000000001 reaction nodes produce new molecules, we have 22 new molecules, totaling 63 [molecule]s. This can go on, as long as the feedback ratio of new stable molecules remains positive, and ceases when the feedback ratio equals zero, in a steady state networked reaction chemistry matrix, that may or may not oscillate in time about a chaotic steady state attractor.
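The iteration just described can be written down directly. This sketch uses the illustrative (not measured) per-pass ratios chosen above:

```python
# One feedback pass: n molecules define 2^n - 1 reaction nodes, and an
# assumed fraction of those nodes each contributes one new stable molecule.
def iterate_feedback(molecules, ratios):
    history = [molecules]
    for ratio in ratios:
        nodes = 2 ** molecules - 1
        molecules += round(ratio * nodes)
        history.append(molecules)
    return history

# The ratios are the ad-hoc values from the text: 0.001, 1e-8, 1e-11.
print(iterate_feedback(14, [1e-3, 1e-8, 1e-11]))  # [14, 30, 41, 63]
```

The doubling of the node count with every new molecule is what makes even vanishingly small feedback ratios matter.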
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..We have here an unknown of combinatorial chemistry, in the feedback ratio, at arbitrary complexity levels, that neither the "creationist" can declare, as much as they wish to, reaches zero for any given starting chemical system (finite steady state), nor evolutionists can declare is always positive (a complexifying mixture that goes from starting simplicity to virtually unlimited complexity of molecule varieties), without measuring it in a real set of experiments beyond the finite Miller-Urey, Oparin, Joan Oro, et cetera, experiments, which I have not seen myself at biological-level tests. However, we know, from real biology, that carbon can allow the formation of self-sustaining natural combinatorial chemistry at complexity levels of millions of compounds, that do not decompose into fewer stable molecular units or complexify into more molecular units (except at death), so at millions of compounds for living entities, the ratio is zero for existing biological systems, probably due completely to homeostasis and physical limitations of reactions at that level of complexity-sparsity-systemic-distribution. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..Another piece of information can be created based on self-limiting reactions. Let's say, for stereochemical limitations, only five-molecule node reactions are significantly feasible, and six-molecule and larger node reactions are excluded from occurring, due to complexity.
From 5 to 10,000 ocean molecules, one sees that the partial Natural Combinatorial Chemistry matrix from reaction molecule counts SUM(COMBINATION(molecules source, molecules of reaction), molecules of reaction = 1 to 5), from 1 to 5 nodal molecules, produces the following numbers of reaction nodes: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 9 September 2008 (UTC)

5 molecules, 31 reaction nodes
10 molecules, 637 reaction nodes
20 molecules, 21,699 reaction nodes
50 molecules, 2,369,935 reaction nodes
100 molecules, 79,375,495 reaction nodes
200 molecules, 2,601,668,490 reaction nodes
500 molecules, 257,838,552,475 reaction nodes
1,000 molecules, 8,291,875,042,450 reaction nodes
2,000 molecules, 266,001,666,834,900 reaction nodes
5,000 molecules, 26,015,651,042,712,200 reaction nodes
10,000 molecules, 832,916,875,004,174,000 reaction nodes

which also shows astronomical numbers of potential reactions, against the argument of certain inherent open-system steady-state reaction simplicity versus complexity destiny posed by The God of The Bible according to some teachers. Just a 100-chemical ocean, limited to 5-source-molecule reaction nodes, produces 79,375,495 potential reaction nodes from the 1-to-5-molecule node reaction left hand sides, which is minuscule compared to the full Natural Combinatorial Chemistry of 1.26765060022823*10^30 NCC reaction nodes, at a ratio of 1 in 16*10^21 reachable reaction nodes in all NCC nodes. If just one in a million of the reachable NCC nodes produces a net of durable and useful molecule reaction products based on reaction nodes alone, that would produce 79 new molecules in an iteration to partial steady state. Creationists claim a 0.0 feedback ratio at some point, not even one in a million, without known experiment reference, other than The Bible as humans tend to teach it. The question being: what is the value of the feedback ratio, in a complex chemical environment?
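The table above is SUM(COMBINATION(n, k), k = 1 to 5) evaluated at each pool size n, and a short script reproduces it along with the quoted 1-in-16*10^21 reachability ratio:

```python
from math import comb

def reachable_nodes(pool, max_sources=5):
    # Reaction nodes drawing at most `max_sources` molecules from an
    # n-molecule pool: sum of C(n, k) for k = 1..max_sources.
    return sum(comb(pool, k) for k in range(1, max_sources + 1))

for n in (5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000):
    print(n, reachable_nodes(n))

# Fraction of the full 2^100 - 1 node space reachable via <=5-source nodes.
print(reachable_nodes(100) / (2 ** 100 - 1))  # ~6.3e-23, i.e. 1 in ~1.6e22
```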
Are there increasing breakdown reactions compared to build-up reactions, such that all natural combinatorial chemistries reach some form of steady state of finite-complexity molecules and polymers, as some Creationists claim is the God-Given truth in biochemical science? [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 9 September 2008 (UTC)

..Now an example of general combinatorial chemistry can be defined for a lifeless-earth-ocean. There is no life on earth, so the oceans are filled with a mass roughly similar to the modern biosphere dissolved in a lifeless ocean mix of basic organic (carbon based) primitive compounds, and inorganic compounds. Likewise, the environment sets up numerous states for reactions, from sunlight with UV-irradiated surface water, "sunlight"-only illuminated deeper water, dark water under rocks and in sands or gravels and night time, surface chemistry (clay and mineral surfaces), average temperature water, hot water volcanic vents, lightning strikes, meteoric impacts, radioactivity (higher in the past), dehydration concentration zones in estuaries and lakes, delta rinse chemical flumes, and so forth. Carbon is a special element, as it allows numerous molecules to form at normal temperatures in water solutions, as evidenced by life. For a starting ocean with just 100 stable molecules, one has (2^100 - 1) [specific-reaction-node]s, or about 1,267,650,000,000,000,000,000,000,000,000 [specific-reaction-node]s. Let's say, given the ocean containing inorganic and carbon compounds, that a ratio of 0.000000000000000000000000000001 reactions form new compounds (complexity from simplicity); then one now has 101(.267) [molecule]s. As long as there's a small positive ratio, which seems quite reasonable, the number of stable molecules will increase over time.
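The per-iteration gain quoted here, and the long-run horizon implied by compounding it, can be checked numerically. A sketch, with the 10^-30 ratio taken as the text's assumption rather than a measured value:

```python
# 100 molecules -> 2^100 - 1 reaction nodes; an assumed 1e-30 fraction of
# those nodes yields a new stable molecule per 1,000-year iteration.
nodes = 2 ** 100 - 1
gain = nodes * 1e-30
print(gain)  # ~1.2677 new molecules in the first iteration, i.e. 101.267 total

# Compounding ~1.267% growth per 1,000-year iteration from 100 molecules:
molecules, iterations = 100.0, 0
while molecules < 1_000_000:
    molecules *= 1 + gain / 100
    iterations += 1
print(iterations * 1000)  # ~732,000 years, matching the ~731,000-year figure
```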
At 1,000 [year-iteration] intervals for such an ocean, one would see, at this ratio, if fixed, 100, 101, 104, 129, 1,000,000,000 (molecular saturation) [molecule]s, within 5,000 [year]s. At a ratio that is self-limiting, because of physical combinatorial limitations, one would see, at 1.267 new molecules compounded on the first 100 [molecule]s on 1,000 [year-iteration] intervals, that there will be 1,000,000 [molecule]s in the ocean in 731,000 [year]s. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..Now this alone doesn't necessarily bring about life, yet. There would initially be in the early ocean a large raw mix of the most stable left and right handed chiral molecules, potentially including lipids, single amino acids, single RNA, and single DNA molecules. With such a rich ocean, given just a very small general combinatorial chemistry feedback, in a relatively few years, there will likely be in the general combinatorial chemistry matrix numerous polymerization pathways for the most robust, easily polymerizable molecules in the ocean. Possibly amino acids, RNA, and DNA molecules, because that is what nature uses, if nature is pragmatic and not hand assembled by God every minute of the day, but they may also be other natural molecules that can polymerize, likely based on carbon, that operate more easily than amino acids, RNA, and DNA. In this ocean, with some form of polymers and polycyclics, from that there will intrinsically be a digitally-codified, and thus easily mutateable, set of chemical "species" that catalytically support each other's productions in durable, catalytically-reactive, efficient-thus-numerical systems. Three sets of reaction-networked-catalytic-hypercycle-feedback-mutateable codes will be operating in cooperative sets, one for left handed, right handed, and left-right handed chiral polymeric reaction codes in combination.
Each set will compete for molecular supremacy in numbers, over numerous explorations finding combinatorially-inherent new species, and in a scarce chemical competition/cooperation, one set or another will have dominance in ocean space, because the probabilities of "discovering" inherent reactions don't occur identically, statistically speaking, creating autonomous differentials of product exploration diverging in time for the three chiral system types. Because feedback operations are used in these sets of reaction pathways, they will have a kind of numerical instability in the very complexity discovery occurring, and over time, one set of operations will win out as the standard, as numerous incompatibilities would likely occur between the sets. Nature, obviously, selected right handed chiral molecules, because at some point in time, being the first most complete types and sets of the fullest reaction-networked-catalytic-hypercycle-feedback-mutateable code found, that was overall, by chance and inherent robust stability, the dominant reaction super-system. It could have gone to the left handed chirality too, but what we have in majority biology is right handed chirality, not that left handed chirality is inferior or even discernibly different, as a perfect mirror physics image in all ways, except in a *perfectly* identical discovery in its own combinatorial chemistry chiral subset of complexity evolution divergence in time-ocean-space. Mixed left-right chirality combinatorial chemistry reaction matrices are probably inherently less efficient to discover naturally, since a wholly different asymmetry is a part of a mixed system, and so it didn't become supreme either, but that is an assumption about the complex mixed chiral system.
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

..With a dominant chirality in the ocean at some point of time, or at the very least, an ocean with large regions of left handed or right handed dominant chirality combinatorial chemistry, the mutateable digital chemistry keeps exploring its combinatorial chemistry, inherently combinatorially discovering and diverging in chemical "specie" space, always finding digital codes that react more efficiently in numeracy than previous generations of polymeric chemical code "species" could, because new reactions continue being exposed with each combinatorial chemistry iteration, with mutations and systemic stabilities in chaotic attractors of cooperative catalytic production systems, and proto-metabolic pathways inherent to the growing matrix of reactions. Eventually, either circuitously through precipitate micro-gel agglomerate clumps without membranes to micelles in some generations of intermediate chemistries, or directly, numerous populations of many types of micelles form from primitive lipids, proteins, RNA, and DNA fragments, inherently selected, as the most efficient, and thus numerically superior, evolved digital chemical "specie" systems of reaction sets, encompassing (1) metabolism varieties from sunlight-related chemical reaction pathways, glucose pathways, sulphurics pathways, et cetera, (2) homeostasis in semi-permeable autocatalytic reaction system types, (3) transportability in semi-permeable primitive lipid micelle / lysosome kinds, and (4) reproduction in the inherently most efficient general combinatorial chemistry matrix types, of which there can be many kinds of cellular versions.
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) ..It should be noted that for an ocean with particles interacting about 1*10^10 [interaction / second], in a billion years or 31,560,000,000,000,000 [seconds], in an ocean with conservatively 100,000,000 [km^3] or 100,000,000,000,000,000 [m^3] active ocean volume solution, at about 1,000,000[g/m^3], and 20[g/molar-volume] at 6.02*10^23[molecule/molar-volume], that there's: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) ..31,560,000,000,000,000 [seconds / billion-year] * 1*10^10 [interaction / second] * 100,000,000,000,000,000 [m^3/active-ocean] * 1,000,000[g/m^3] / 20[g/molar-volume] * 6.02*10^23[molecule/molar-volume] = 949,956,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 [interaction / billion-year-active-ocean] in oceanic combinatorial chemistry, or 949.956*10^69 [interaction / billion-year-active-ocean]. So-called Christian and fundamentalist Creationists are quite certain, inspired by God and His truth to them, that this set of interactions, in an ocean of combinatorial chemistry, CANNOT reach life, inerrant to God's truth to them, that ONLY God was directly involved in forming life past the barrier of inherent chemical irreducible complexity truly, and not the rules of inherent combinatorial chemistry, originally setup at the Big Bang. One only needs to reach, say 1,000 large systems of chemical interaction, out of about 1*10^72 [interaction / billion-year-active-ocean], to reach life, leading one to 1*10^69[interaction / system] to setup each of those systems in parallel. 
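The interaction budget above is straightforward dimensional analysis, and this sketch multiplies the same factors and carries them into the per-system budgets (the 1,000-system target and the 1-in-10^18 "progressive-interaction" efficiency are the text's illustrative assumptions):

```python
# Factors from the text's estimate of oceanic combinatorial chemistry.
seconds = 3.156e16     # seconds in a billion years
rate = 1e10            # interactions per second per molecule
volume = 1e17          # active ocean volume [m^3]
density = 1e6          # solution density [g / m^3]
molar_mass = 20.0      # rough mean molar mass [g / mol]
avogadro = 6.02e23     # molecules per mol

molecules = volume * density / molar_mass * avogadro
total = seconds * rate * molecules
print(f"{total:.5e}")  # 9.49956e+71 interactions per billion-year active ocean

# 1,000 target systems, then a 1-in-1e18 "progressive" efficiency.
per_system = total / 1000
progressive = per_system / 1e18
print(f"{per_system:.1e}")   # ~9.5e+68, order 1e69 interactions per system
print(f"{progressive:.1e}")  # ~9.5e+50, order 1e51 progressive interactions
```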
If the [interaction] efficiency is 1 in 1,000,000,000,000,000,000 ["progressive-interaction"/general-interaction] one has 1*10^51 [progressive-interaction / system] available per system, to reach all of the exemplar 1,000 [system] of life chemistry over a billion years of early earth. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) ..Returning to a starting ocean, with just 100 stable [molecule]s, where one combinatorially has (2^100[molecule] - 1) [specific-reaction-node]s, or about 1,267,650,000,000,000,000,000,000,000,000 [specific-reaction-node / combinatorial-chemistry-context]s. Given the ocean containing inorganic and carbon compounds, that a ratio of 0.000000000000000000000000000001 [new-combinatorial-chemistry-context-molecule / specific-reaction-node] form new compounds (complexity from simplicity) toward life over non-life, then one now has 101(.267) [molecule / combinatorial-chemistry-context]s. The iteration would take, maximally calculated for a 1,000 [year / iteration] example, 1*10^66 ocean interactions (from the previous 949.956*10^69 [interaction / billion-year-ocean]), in this example of given ocean interactions, to make this oceanic molecular change from 100 to 101.267 [molecule / combinatorial-chemistry-example]s occur in the ocean. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) Proverbs3:13-23[Happy is the man [cell] that findeth wisdom [new good and true Words], and the man [cell] that getteth understanding [true Word]. For the merchandise of silver, and the gain thereof than fine gold. She [true Words] is more precious than rubies and all the things that thou cans't desire are not to be compared unto her [assisting truth]. Length of days is in her right hand [control]; and in her left hand riches and honour [product]. Her ways are of pleasantness, and her paths are peace [sustains]. 
She is a tree of life to them that lay hold upon her [in compatibility]: and happy is every one that retaineth her [in the cell]. The LORD [true Word] by wisdom [old codes] hath founded the earth; by understanding hath He established the heavens [consciousness]. By His knowledge [root codes] the depths are broken up [hierarchy], and the clouds drop down the dew [stabilize the environment]. My son, let not them depart from thine eyes [code preserves]; keep sound wisdom and discretion: so shall they be life unto thy soul [cell and mind], and grace to thy neck. Then shalt thou walk in the way safely, and thy foot shall not stumble.] [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

Once cells of such digital variety types are formed, with small chains of RNA, DNA, and proteins inherent at that level of complexity, they can propagate further in ocean currents, because of the durability and safety of the agglomerate/cellular units. The best reaction sets are the chemical species that can travel in these units, in various ocean domains, and still contain the stability required in their digitized combinatorial chemistry to operate. Cells with inferior microcoded reaction networks simply are less numerous and less prosperous. And since robust units travel, and have efficient feedback reproduction homeostasis, they dominate the ocean, converting whatever domains of other handed chirality into their networks of reactions, as partially symbiotic with the ocean and themselves, before true living individuality occurs.
Eventually, the ocean purifies itself, either here, or along this path of biochemical competition, as cellular reactions that modify the ocean contents to their reactions, as well as use their own molecular types, and internally mutate their own codes to continually adapt to the unifying ocean, converge themselves together, akin to Gaian theories, through earth-ocean-cellular-types symbiosis numerical instability ocean domain feedback adaptive supremacy. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

Proverbs3:1-12[My son [cell], forget not my law [old codes]; but let thine heart [cell core] keep my commandments [DNA]. For length of days, and long life, and peace, shall they [codes] add to thee. Let not mercy and truth [in the Word] forsake thee: bind them about thy neck [body cord]; write them upon the table of thine heart [cell nuclear code]: So shalt thou [cell] find favor and good understanding in the sight of God [the Word] and man [cells]. Trust in the LORD [the Word] with all thine heart [cell core]; and lean not unto thine own understanding. In all thy ways acknowledge Him [the Word], and He shall direct thy [cell] paths. Be not wise in thine own eyes [cell organs]: fear the LORD [the Word], and depart from evil. It shall be health to thy navel, and marrow to thy bones. Honour the LORD [the Word] with thy substance [cell body], and with the firstfruits of thine increase [feed the Word]. So shall thy barns [environment] be filled with plenty, and thy presses [DNA codes] shall burst out with new wine [sweet spirit Words]. My son [cell], despise not the chastening of the LORD [true Words]; neither be weary of His correction [true Words]: for whom the LORD [old codes] loveth He correcteth [helps]: even as a father the son in whom he delighteth.]
All the time the cells exist, the digitized combinatorial chemistry is always refining itself, inherently, because more efficient micro-polymer reaction sets become dominant through efficient forward reaction rates, via continual mutations in such primitive codes, reaching new inherent discoveries, not requiring molecules to be "conscious", knowing the future, to bond themselves, as "creationist" arguments often pose is required outside of physics. Also, as the relative robust stability of the best kinds of cells allows increases in codes, then systemic relational codes also inherently develop in these matrices of reactions, in complexes and networks of catalytic reaction sets, because they inherently assist the reproduction of the combinatorial chemistry cell types. There may still be competition between cellular type systems, and sets of chirality molecules, at this point of time, but every new generation of mutations that spreads dramatically better because of newfound molecule codes only further diverges the dominance of chirality and cellular types, both, and decreases any side-use of competing chiral systems, which continue to wane as the ocean becomes uni-chiral through bio-recycling. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

It should be noted that the cells/gel-agglomeration-precipitates in these early combinatorial chemistry species evolutions are very small compared to modern cells, because they are not developed as modern life with its history of mutually supported digital molecular records. As such, they can fill an ocean quite densely, and pass generations quite fast, as the fastest, best, most durable and travelable units dominate, reaction wise.
So in a million years, with just 10 million cubic kilometers of reactive zone, a density of 100,000 cells of various types per cubic meter on average in that volume, and a generation of 1 week, such an ocean could explore, numerically, 52,000,000,000,000,000,000,000,000,000 (5.2 × 10^28) units, in 52,000,000 mutation generations, of a total diverse population of 1,000,000,000,000,000,000,000 (10^21) units, of various types, in such an oceanic sub-unit, with the accompanying period of chemical processing during each unit's existence. Definitely the hard way to form life, compared to design, but completely possible in context. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) Proverbs4:1-27[1 Hear, ye [cell offspring], the [true codes] of a [parent cell true code], and [machine] to [keep that code]. 2 For [the code Word] give [cells] [operations], forsake [cell] not my [true Word]. 3 For [paternal cell] was [a cell's] [true Word code pattern(al)'s] [cell offspring], [synergy cooperation supported] and only [precious codes] in the sight of [cells] [nurturing code]. 4 [Word code] taught [cell] also, and [instructed] [cell], Let [cellular core] retain [true Word code]: keep [the code ways], and live. 5 Get [truest codes], get [truest operations]: forget [the codes] not; neither decline from [code instruction transcriptions]. 6 Forsake [truest nurturing code words] not, and [truest words] shall preserve [cell]: love [the truest codes], and [truest codes] shall [support well] thee. 7 [truest Word nurturing codes] is the principal thing; therefore get [new truest Word nurturing codes]: and with all [cell] getting get [operational code integration synergy]. 8 [cooperatively enhance operations] [truest codes], and [nurturing codes] shall promote [cell]: [nurturing codes] shall bring [cell] to [sustainable dominance synergy], when [cell] dost embrace [true codes].
9 [nurturing codes] shall give to [cell's] [processes and feedback] an ornament of [virtuous operations]: a crown of [cell synergistic cooperative numeracy power] shall [the best nurturing cell-world Word codes] deliver to [cell]. 10 Hear, O my [cell offspring], and receive [paternal cell] [true codes]; and the years of [cell offspring] life shall be many. 11 [paternal cell] have taught [cell offspring] in the way of [best codes]; [paternal cell] have led [offspring cell] in right paths. 12 When [cell offspring] goest, [cell's] steps shall not be straitened; and when [cell offspring] runnest, [cell offspring] shalt not stumble. 13 Take fast hold of [true codes integrated]; let [nurturing codes] not go: keep [true nurturing codes]; for [nurturing codes] is thy life. 14 Enter not into the path of the wicked [dispersive and viral codes], and go not in the way of evil [dispersive and viral code] [cells]. 15 Avoid it, pass not by it, turn from it, and pass away. 16 For [froward codes] sleep not, except [froward codes] have done mischief; and [froward cells] sleep is taken away, unless [froward incompatible detected codes] cause [good cells] to fall. 17 For [froward cells codes] eat the bread of wickedness, and drink the wine of violence [anti-synergy]. 18 But the path of the [cooperative true nurturing Word code] is as the shining light, that shineth more and more unto the perfect day. 19 The way of the wicked [codes] is as darkness [diminishing position]: [froward cell codes] know not at what they stumble [fall short efficiency cooperatives]. 20 [paternal code's] [cellular offspring], attend to [the true codes]; incline thine [systems] unto [paternal code's] [codes]. 21 Let [paternal codes] not depart from [cell offspring's] [machine agglomeration systems]; keep [good codes] in the midst of thine [cellular core]. 22 For [true codes] are life unto those [cells] that find them, and health to all their flesh.
23 Keep [cell] [code core] with all diligence [maintenance systems]; for out of it are the issues of life. 24 Put away from [cell] a froward [code explorer], and perverse [codes] [code attack] far from [cell operations]. 25 Let [cell's] [sense systems] look right on [systematically synergistic], and let [cell's] [sense system's control] look straight before [cell]. 26 [chemical code process] the path of thy [cell envelope and drive], and let all [cell's] [operations] be [cooperative synergy reaction system]. 27 Turn not to the right hand nor to the left [divergent uncontrolled inferior efficiency code]: remove [cell's] [membrane and drive] from [inferior codes and states].] Naturally, such a complex cellular combinatorial chemistry exploration will find more codes, longer codes, and better codes. It should be clear, given these assumptions, illustrations, and theory, that it might just be possible that an external force is not absolutely required to assemble and maintain every cell of life, as some Creationist positions argue is obviously true fact, given the apparent ease with which modern natural bio-chemistry keeps modern life operating, without observable external-to-physics forces seen in testable reality, and given that general combinatorial chemistry seems capable of generating life under an evolutionary model of general combinatorial chemistry. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) One may also notice, on a different hierarchical scale, which penetrates deeper back into time, that the solar system, from the original nebula perspective, gravitationally forms not just a closed system, but an energy-losing system, radiating to the rest of the nearly empty expanded universe, and one sees that life forms and is supported inside of that open, declining-energy-content system.
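The back-of-envelope exploration count given a few paragraphs above (10 million cubic kilometers of reactive zone, 100,000 units per cubic meter, one-week generations over a million years) can be double-checked with a short script. This is a minimal sketch; all input figures are the text's own assumptions, not measured values:

```python
# Check the combinatorial-exploration arithmetic quoted in the text.
# All input figures are the text's assumptions, not measured values.
volume_km3 = 10_000_000        # reactive ocean zone, in cubic kilometers
density_per_m3 = 100_000       # proto-cell units per cubic meter
generation_weeks = 1           # one generation per week
years = 1_000_000

volume_m3 = volume_km3 * 1_000_000_000       # 1 km^3 = 1e9 m^3
population = volume_m3 * density_per_m3      # standing population at any moment
generations = (years * 52) // generation_weeks
total_units = population * generations       # units explored over the whole run

print(f"population  ~ {population:.1e}")     # ~1.0e+21
print(f"generations = {generations:,}")      # 52,000,000
print(f"total units ~ {total_units:.1e}")    # ~5.2e+28
```

The printed totals match the 10^21-population, 52-million-generation, 5.2 × 10^28-unit figures quoted in the text.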
More so, even if the system were closed inside a perfectly reflecting sphere around the nebula, starting at a cold expanded temperature and collapsing under gravity, one sees that partitions of concentrated matter systems are formed by gravity. There can still be a sun and earth, even if at a different configuration than the current solar system, as the sun would be larger, receiving back all of the energy it sends out, reflecting off the sphere, and the earth would have to be much further away from the sun, to support the same life context. And so here, a *closed* system can be used to support (dare say self-form) meso-scale life, even though some Creationists often claim that closed systems always, always, always form into only-and-exclusively simple-steady-states (and perhaps granted to them, in the end of conventional-available-energy-matter-time-space-biochemical-systems). One could even go to the scale of the observable universe, taken as a closed system, or even a declining system, taken in its expanding, productive-energy to thermal-energy entropy-converting status, that obviously supports, if not self-forms, life too, within that closed system with net mass, space, and initial energy. Going to the God scale, the one-of-all-things and nothing-else-exists-not-of-it is a closed system, but then God can't make perfect eternal life from God on earth, and cannot yield 100% perfection in salvation of all souls, based on those so-called obvious facts of life that all things die, self-referentially speaking at material infinity of the matter plane, as all things must die, eventually, in a closed system.
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) And lastly, for now, if chemistry monads are not of the configuration that allows self-formation of order, design is the only cause to explain the existence of life, due to combinatorial chemistry's inherent limitations of feedback, expansion, and organization. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) ENDBack to Contents [17] Chiral / Churl symmetry between Atheism and Theism.Back to Contents CREATED AD 2008 09 02 P 08:40 ::I will get to your generous comments below, still iterating fine details above, and debating evolutionists and creationists, which are, humorously speaking, both as stubborn as the other in a chiral/churl symmetry! LOL. "grins" [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::A lovely chiral/churl symmetry can be found in word=genetic-mirror=reflections. David Berlinski, "The Devil's Delusion: Atheism and Its Scientific Pretensions", Page 29, writes: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Original: 29 "These questions are rhetorical. No one is disposed to ask them within the [Scientific] community, and the [Scientific] community is not disposed to acknowledge answers to questions it is not disposed to ask." [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Mirrored: 29' "These questions are rhetorical. No one is disposed to ask them within the [Religious] community, and the [Religious] community is not disposed to acknowledge answers to questions it is not disposed to ask."
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::And relating to saving self and others, for science, "The God Delusion", Page 35, says, :::35 "An Atheist in this sense of philosophical naturalist is somebody who believes there is nothing beyond the natural, physical world, no *super*natural creative intelligence lurking behind the observable universe [(including humans)], no soul that outlasts the body and no miracles - except in the sense of natural phenomena that we don't yet understand." [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Therefore, "Ghost in the Shell" type technology to definitively sustain human life in matter beyond body death is a delusion, a science discipline that will NEVER be searched for, as it is a miracle of progress, as no one in science of *this* attitude is disposed to answer that question, as they are not disposed to ever ask. The truly selfish gene, indeed, as death owns all humans and life, natural evolution and religious world ways. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::And in religion, how a soul is saved is a mystery to not be questioned, not to be investigated, not to be attempted by our own hands. Shrug shoulders, and hope in faith that the death of all things works itself out through God, as one truly lives by dying to not return, and one rises by descending into the dirt on this plane forever. And they deprecate abortion? Talk about not permitting a "free ride" for those souls, in innocence. A moment of pain, if any at all, when properly done, for a direct ticket to heaven. China policy has the right idea, in this context. Go figure.
I could be hyperbolically and horrifically sarcastically extreme, saying if somewhere people could extract and hyperfertilize sections of ovaries to make billions of eggs, and fertilize them, and then destroy all the zygotes, then one could literally-inerrantly advance the second coming of Jesus, if the one and only and true way in the Christian Bible is true, as commonly taught, as countless souls are cycled through earth back to heaven, to finish off the age in short order. All live by dying, and that would definitely do it to the maximum, at this point of time, and with the most innocents, and the fewest sinners could ask forgiveness of God, and all those alive today would enter the new age so much sooner. But that's all too easy, and I'm just a lost dragon on this God forsaken planet. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::And science says there's no God to save souls, that don't even exist to begin with, beyond death. Therefore, humanity claims, as a whole, that death is the desirable destiny of humanity, and all life. Have children to have them die surely and certainly, is the universal accepted status of humans, as that is the order of things, and no one will lift a hand to transcend "the way things are", as is for religions where God desires death, and is for science that says death is the natural order of life and will never be transcended or investigated. To quote Dawkins further, Page 35, "As ever when we unweave a rainbow, it will not become less wonderful.". Death is the acceptable and wonderful singular destiny in religion and science on earth, as the one true harmony that both agree on, that humanity agrees on, in the majority rule? The true Frankenstein's Monster, that is to only enter death on earth, and not reform life on earth? 
Perversely, the true saints are the genocidal despots in history who start wars and cleanse the planet, who martyr themselves morally, to send others innocently to God, while reducing economic burdens on the earth? What a history of the world, for an estimated 60,000,000,000 humans to date. Terrible and awesome. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::And, Richard Dawkins, in "The God Delusion", Page 28: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Original: 28 "If this book works as I intended, [religious] readers who open it will be [Atheists] when they put it down. What presumptuous optimism! Of course, dyed-in-the-wool [faith-heads] are immune to argument, their resistance built up over years of childhood indoctrination using methods that took centuries to mature (whether by evolution or design). Among the more effective immunological devices is a dire warning to avoid even opening a book like this, which is surely the work of [Satan]. But I believe there are plenty of open-minded people out there: people whose childhood indoctrination was not too insidious, or whose native intelligence is strong enough to overcome it. Such free spirits should need only a little encouragement to break free of the vice of [religion] altogether. At the very least, I hope that nobody who reads this book will be able to say, "I didn't know I could.". [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Mirrored: 28' "If this book works as I intended, [science] readers who open it will be [Theistic] when they put it down. What presumptuous optimism! Of course, dyed-in-the-wool [science-heads] are immune to argument, their resistance built up over years of childhood indoctrination using methods that took centuries to mature (whether by evolution or design).
Among the more effective immunological devices is a dire warning to avoid even opening a book like this, which is surely the work of [ruling finite thinking]. But I believe there are plenty of open-minded people out there: people whose childhood indoctrination was not too insidious, or whose native intelligence is strong enough to overcome it. Such free spirits should need only a little encouragement to break free of the vice of [science] altogether. At the very least, I hope that nobody who reads this book will be able to say, "I didn't know I could.". [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::And pure incompletion-fallacies, attributed to A(c)quinas, in "The Devil's Delusion", Page 64: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::64 "(1) Everything that begins to exist has a cause, (2) The universe [began to exist], (3) so the universe had [a] cause" which could have been reformed genetically-bipolarized as: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::64' "(1) Everything that begins to exist has a cause, (2) The universe [simply exists always | began to exist], (3) so the universe had [no | a] cause", :::and cannot be proven or disproven without universal scale tests. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Another poser for mirror symmetries, attributed to Karamazov, in "The Devil's Delusion", Page 20, and another bipolarized mirror on Page 45, and one on Page 106-107: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::20 "(1) If [God | Science] does not exist, then everything is permitted. (2) If [Science | God] is true, then [God | Science] does not exist. (3) Therefore, if [Science | God] is true, then everything is permitted.".
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::49 "And the question I am asking is not whether [(God-only-way-universe) | no-God-science] exists but whether [Science | Religion] has shown that [(God-only-way-universe) | no-God-science] does not." [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Original 106: "Among [philosophers (in no-God concepts)] concerned to promote [Athiesm], satisfaction in [Hawking's] conclusion has been considerable. Witness [Quentin Smith (in no-God science)]: "Now [Stephen Hawking's] theory dissolves any worries how [the universe] could begin to exist uncaused." [Smith] is so pleased by the conclusion of [Hawking's] argument that he has not concerned himself overmuch with its premises. Or with its reasoning." [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Mirrored 106: "Among [theologists in only-God] concerned to promote [God's one-and-only way], satisfaction in [Religious promoter H's] conclusion has been considerable. Witness [S in only-God theology]: "Now [H's] theory dissolves any worries how [God] could begin to exist uncaused." [S] is so pleased by the conclusion of [H's] argument that he has not concerned himself overmuch with its premises. Or with its reasoning." [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::And a final mirror note from Richard Dawkins, in "The God Delusion", Page 232: :::"There are some weird things (such as the [Trinity, transubstantiation, incarnation]) that we are not *meant* to understand [(too deeply)]. Don't even *try* to understand one of these, for the attempt might destroy it. Learn how to gain fulfillment in calling it a *mystery*." 
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::"There are some weird things (such as the [Quantum Physics apparent measurement von-Neumann hierarchical real-macro-scale-observations versus super-system unitary-evolution issue, instantaneous (infinitely faster than light) entanglement-wavefunction collapse existence, complex macroscopic system of particle into wave hierarchy versus all classical versus all wavefunction state]) that we are not *meant* to understand [(too deeply)]. Don't even *try* to understand one of these, for the attempt might destroy it. Learn how to gain fulfillment in calling it a *mystery*." [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Is the reality more terrible than anyone should ever know, or so much less controlled than one would ever hope, or in a corruption far deeper than one would imagine, or so not needing apparently true progress that ignorance in eternal status-quo is the truest bliss, among other things? Free and not free, real and illusion, important and not important at all, an eternal forced middle path unity, among other things? [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::Non-pragmatically speaking, it seems that infinite regression potential occurs on earth, between incomplete pure-doubtless Science without God, and incomplete pure-doubtless God without Science, and all pure-doubtless faiths are seemingly asymmetrically divisive / dividing / derisive without a good direction, perhaps best left to children of all ages growing in analysis. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :::What symmetrical divisions and stereotyping symmetries and incompletion, in general.
But what do I really know, either, reading these things of humans, and my finite thinking? [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) ENDBack to Contents [18] Philosophies of existence nature and life.Back to Contents CREATED AD 2008 09 02 P 08:40 (3) Philosophies of existence nature, and life. :A few years ago a schoolmate of mine, George Greenstein, wrote on the unlikelihood of the initial conditions at the dawn of time all being "set" right to make life possible. One low probability multiplied by another low probability... results in a probability that is virtually indistinguishable from 0. Yet, here we are. :To me, the interesting thing is that all this complexity that we see in complex organisms, complex systems of complex organisms, etc., is all emergent from the nature of the very simplest of things. My guess is that not all of the possible organisms will be worked out in practice because the number of possibilities is so huge and it looks like entropy is going to slow us all down to a dead-slow crawl. ::I've seen those arguments many places and times. They *are* quite true. Simply ignoring the extremes and details of physics, one notes that units (monads) that have few modalities of combination lead to meso-systems with no complexity (gasses and dusts), as meso-scale complexity is coherently barred. Units that have uncounted modalities of combination lead to meso-systems with amorphous-coherency structure, not permitting controlled specific construction of reasonable finite-complex systems, so complexity is amorphous, if it even exists in a physically useful form in that universe model. Units that have a subtle balanced modality of combination, like carbon related compounds of this universe, lead to the famous critical chaotic natural meso-system one observes in this universe model. 
And for intelligent design, a unit that has subtle balanced structural and polymer combination modalities leads to a most rapid self-development of meso-scale coherent complexity, as more sophisticated "code" is embedded right in the monads of that universe model. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) ::You are also very right that not all possible meso-scale systems form in practice, but only a pragmatic subset, under natural stochastic forces limitations of time-space. For example, in the general combinatorial chemistry model, one notes that for large systems, the packing-factors and mix-densities push some combinatorial explorations to the low probability zone; a reaction that generates a new molecule based on 100 extant molecules is unlikely to occur, except in many distributed steps over diffusion-time. So for normal space, with subtle balanced connective units, the combinatorial feedback factor decreases (self limiting) with unit count, as the nodal-combination-matrix continues to grow exponentially with unit type count. Likewise, life with small system size evolves faster than large systems. Thus, one presumes pre-Cambrian life was the most diverse, and as system-complexity-size grows with time, numerous local-minima become the norm, simply due to scarcity on a finite plane. Until now, where evolution still occurs, but is plodding rate-wise in large lifeforms, in overall comparison, carrying a burden of adapted systems without extensive self-modification capability (genetic evolution within an individual), except for the most systemically-undifferentiated modern life, like the least universally adapted bacteria, with the poorest structure, in a low competition zone, where one would expect they can still evolve like pre-Cambrian presumptions.
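The self-limiting trade-off just described, the nodal-combination-matrix growing rapidly with unit-type count while many-molecule encounters become improbable, can be illustrated with a hypothetical toy model (the 0.01 local-encounter probability and the functional forms here are illustrative assumptions, not figures from the text):

```python
from math import comb

# Pairwise combination possibilities grow roughly quadratically
# with the number of distinct unit types n (here: n choose 2).
for n in (10, 100, 1000):
    print(f"{n:>4} unit types -> {comb(n, 2):>6} pairwise combinations")

# Meanwhile, the chance that k specific distinct molecule types all
# co-locate in one reaction volume falls geometrically with k,
# pushing large single-step assemblies into the low-probability zone.
p = 0.01  # assumed chance any one required type is present locally
for k in (2, 10, 100):
    print(f"{k:>3}-way single-step assembly chance ~ {p ** k:.0e}")
```

This illustrates the text's point: the option space explodes combinatorially even as any particular many-body reaction becomes rare, so exploration proceeds through many distributed pairwise steps rather than one-shot large assemblies.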
Even modern amoeba are likely different from the pre-Cambrian counterparts, with encoded sophistications that simply didn't exist to begin with, and in a different earth environment, even though the overall architecture could "look" the same (as a mote of biochemistry). Much like Titan bacteria, if they exist, will simply be different from earth's, due to the inherent combinatorial chemistry and chemical "specie" context differences of the environment. And, exobiologically speaking, theoretical modern Titan bacteria may be very different from early forms, because, say, they formed into mats with highly cooperative efficient systems, possibly giving rise to immortal human level sentience, in the form of the conservative and cooperative meso-scale-structure, in a very different general combinatorial chemistry, with limited and conservative meso-scale-structure opportunities from the low temperatures and energy supplies, compared to earth with plants and animals. Makes me shudder to think the world we live in may be that virtual bacterial mat world, all constructed from virtual advanced bio-informational-accumulated-technology, but why no one talks about the true nature of reality(?). *brrrrr* scary potentials and secrecies. More revealed, the movie Tron shows a similar concept, where the artist's conception of perception is shown for that particular mode of transference, as Flynn is perceiving the perceptual local travel from material to digital plane, and once in a digital plane, there is a similar but different self-perceptual-locus in that plane. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :If one starts thinking about intelligent design, then the question seems to me to be why humans are rather unintelligently designed in some respects. It would be nice, for instance, if I could get a third full set of teeth about now.
Leibniz tried to work out a rationale according to which all the imperfections or seeming failures to reach perfection worthy of an infinitely powerful God are actually consequences of trade-offs necessary to make the universe possible at all. If, for example, God were to have provided humans with the ability to regenerate missing teeth or just swap out adult teeth for a new set of adult teeth, then something else that we actually need more would have to go. ::Yeah, a lot of things, of that type, bother me to no end. From non-immortality, no individual evolution (inside of a generation of most or all lifeforms), to appendices and tonsils, to lack of regeneration of parts before death. All *too* natural, for my "good"-fearing concepts of reality, or a "Perfect"-God-Designer. Disappointing and disappointing. So much potential, but who sees anything, as commonly revealed by man and nature and religious traditions. And if utopia beyond mere generations and matter, with upright souls and intrinsic salient steady states can be imagined, why are they not, now, or to begin with. It's probably all *my* fault, somehow. Wink and a nod. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :The explanation Leibniz offered never seems to have gained much popular support. But in terms of evolutionary theory it actually makes a kind of sense. Evolution operates the most rapidly when some deleterious feature results in the death of individuals displaying that trait before they can reproduce. Under those circumstances, anybody who survives to be able to reproduce will likely not carry that trait. Evolution does not operate nearly so directly to favor traits that support the existence of post reproductive years individuals. It has to work through some indirect process such that a wise grandparent keeps his/her grandchildren alive, and so his/her genes are favored. 
::Hmmm, if I'm not mistaken, Darwin likely has that integrated into evolution theory already. No new thing under the sun, though, as is nice to see, from Leibni(t)z (or even Sparta). I'm also surprised that, apparently, cooperative systems don't seem to be the norm, or even appear in instances, evolutionarily speaking. Imagine any entity that can evolve within itself, is essentially immortal (incorruptible more appropriate), and maintains steady state with no reproduction of entities, but only of transient informations. Totally incomprehensible that they don't appear in "official" evolutionary biology teachings, or after almost a billion years of meso-scale-life, and upwards of a few billion years of micro-scale-life. Something inherently "beyond-survival-aggressive" between mortal life and immortal potential (at "war"), or that Godless stochastic nature has no top level insight to reach that ideal, or all life intrinsically wants to cease existing, eventually, or any number of additional imaginative world views. In any case, one of those outside-of-the-naturalist-box blind-spots of dogma-theory-evolution. Hmmm. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :Truly intelligent design would have to make possible retrofitting to take care of responses to new environmental conditions. Humans could not have simply been designed millions of years ago and left to thrive based on that original design. The fact appears to be that the earliest humans were well suited to life in Africa, and perhaps those who continued to live in Africa became even better suited to that environment as time went on. But the humans who moved out of Africa due to wanderlust and/or population pressure ended up in places where the African model, designed to screen out UV radiation very successfully and to radiate heat very well, was not well able to thrive.
Humans with whiter skins to permit soaking up what little UV was available and to radiate heat less enthusiastically, with bigger noses to warm and humidify cold and dry northern air before it could enter the lungs, etc., evolved when humans went north. Or take resistance to malaria. Sickle-cell anemia has evidently evolved several times, or possibly the genetic changes have traveled without the kinds of association with other traits frequently seen in genetic migrations, but if that answer to malaria was the result of intelligent design intervening in the normal course of events then one would have to question why the intelligent designer could not come up with a less messy, less painful, less debilitating way of protecting individuals. (How will it look if humans manage to do their own genetic re-programming and give humans immune systems that reliably defend us against malaria?) ::Truly omnipotent omniscient self-all would have no needs for even the things mentioned, as the "game" would be wondering about imperfection, instead of the other way around wondering about perfections, except as virtual, strangely, more so. Overall, much agreed with all those points. Bodes badly for the forces of natural evils of short falls in a nature-only universe. *sigh*. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC) :The whole area of thought strikes me a little like the now [discredited] explanations of disease based on witchcraft and malevolence. Why did uncle Hairy die of diabetes at the age of 49? Because the warlock in the next village was paid to do him in. There you have a straightforward explanation, and people will often accept such explanations because they suit our general model for explaining some other things. Why did I take the cap off the milk bottle? Because I willed to provide myself with a drink of milk.
Saying smoke rises because it wants to or because that is its nature is an easy explanation that takes a lot less energy than figuring out what actually is going on.

::Hmmm. Perhaps [debased] over [discredited], so hard to tell sometimes . . . *grins*. But presumably agreed, lots of things are *naturally* depressing, so to speak. Even more so, why does everything salient apparently die? And agreed, when system-information-consciousness-feedback gets involved, things can get quite . . . complicated when thorough. And then when opinion matrices get mixed into things, it seems that all bets are off, rationally speaking. Yeah, finitely, sometimes it's nice to take less energy, but sometimes one can't help a curiosity. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:Another problem with the idea of a Creator God is that we then would like to know where the Creator God came from, how the Creator God is possible. Somebody might explain his existence by saying that he was created by another Creator God, and so on ad infinitum.

::[ . . . where the Creator God came from, how the Creator God is possible] I could go into one potential QP for that one, but it goes kind of theoretical throughout, and oppositional to some and even myself at another time, for myself to define that one, when having a hard enough time with mathifying the quantum-entanglement-structural-instantaneous- . . . -self idea formally. Knowing the whys and wherefores of what's best from that perspective suffices the pragmatics of personal life, even if finite-incomplete. I've seen the other point of infinite regression creation modality for the universe and creator concepts. In human thought, finite regression is acceptable and pragmatically sufficient for virtually all things in life, but the problems of potential infinite regression are bothersome, for sure.
But only the ultimate macro-system / God knows the answer of whether it is infinite=modality-regressive=construction or finite=modality-infinite=construction (like steady state universe concepts). The existence of unexplained finites and infinite limitations in life and religions is disconcertingly open to possible agreement with the infinite=modality-regressive=construction issue, due to the lack of extant harmony. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:It is always possible to answer a question like, "Why is there life?" by saying, "Because there is a life giver," but that does not really answer the question. All it really says is that life is a condition that falls in the set of things that exist because there was a preceding set of things/conditions that led to their existences. So we assert that (without any real proof) and then derive a conclusion, that there must be a cause if an effect is found. But quantum theory is very good at teaching us to beware of accepting the truth of anything [(a)s]imply because it seems plausible to us. If we take as our premise that the Universe and its true causes and effects are likely to look more like quantum theory than classical physics, then we may start to wonder about the possibility that the "effect" that is life may be more like the "effect" that a photon having gone through a double slit apparatus shows up in a single clearly defined place. Maybe life's existence is a quantum "fluke," something that appears in this universe but not in many other universes, and not for any reason that we can sort out but simply because the probabilities for life are such-and-so, and this is a universe that hit the jackpot. Back to Greenstein and the ideas he discussed, maybe there are a huge number of universes, and there is a kind of "fringe" distribution pattern among them that means that some will have life and others will not.

::[ . . .
but that does not really answer the question] I'd add "fully". Though I get that they may often directly connect it to God's direct pervasive hand, instead of connecting it to God's "Big Bang", physically speaking, with a continuation of the field from that Creator source, along with continuing influences. But from the physics sub-view, the Greenstein criticality of design is definitely true (whether in an a-theistic-Anthropic-principle-continuum or a Creator-inherent-field-capacity-universe-field-design). But, quite clearly for Truth, existence exists, ala "Cogito Ergo Sum", and so universe-existence has no beginning as a field, given perfect conservation of all things, to be necessary for the support, with only relative-beginnings and relative-endings of "fields" in a continuum, like String Theory. Definitely, the ideas you position, for the multiple field types (from current String Theory and Multiverses), appear True. As black holes are an extant different field of existence, from conventional space, in the singularity ring / fractal-braided-collapsed-string-torus / other. And, of course, coherent-locus-life will only exist in the fields with the proper Greenstein criticality of monad bonding to support the chaotic edge attractors of meso-scale-systems, when natural, or a potential *designed* monad bonding, to support similarly complex meso-scale-systems. So in some ways I disagree that the reasons for where life can exist are well-unknown, as the coherent locus solus principle defines that, aka Anthropic principle, in all Multiverses possible in the "String" continuum. You might want to read my continuing posts in the discussion section of Many Worlds Interpretation, with M. Price, which continue to consider those QP-measurement-entanglement-self-consciousness concepts. Which brings to mind, no one ever made a Wiki article on Frederick K. C.
Price, of Ever Increasing Faith Ministries, which is popular, from California to Arizona, by observation of media here, and in Phoenix. I hope it isn't racially based article "exclusivity", as in exclusion. I also think I remember accessing a Wiki Microscan article while in Arizona, a while ago, but find now that the term Microscan doesn't even appear on LA Wiki, in cross-referencing, only Superresolution. Interesting nits of the Wiki system access. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:I think that for Zhuang Zi, life and awareness are both emergent qualities based on the underlying nature of all existence. Life probably is a characteristic of all that we conceptualize as "things," but emerges in a noticeable form in things that we call "alive." A virus would be a borderline case. Similarly, all things are aware in the sense that they mirror their environment, but some things do so in very hazy ways and other things (being organizationally and functionally more complex) do so in more precise ways. Bacteria are aware of their environments, but not to any high standard of accuracy and/or high definition. We are surprised by the difference between living things and dead things because we fail to observe that there is a smooth continuum between what we conceptualize as two discrete states.

::Agreed, that consciousnesses are of at least the emergent properties of meso-scale monad collections processing and reflecting the world and self, and that it lies on a scale of structural entropy measures. "Standard" is an interesting word to use, though, for a continuum with a lower limit and no certain upper limit, as buying into normatives over measures, if it is even important.
And, while I agree there's a continuum, between living and dead things, on this plane, there appears the disconcerting unexplained apparent loss of the solus locus that was supported on the monad meso-scale-structure, if there is something to save or travel, between the living and the nullified state scalar. I fall short, here, currently, to conceive the qualities of the loss positively, for sure. Not so much a surprise, as a disappointment, of the seeming state difference, and an incredible difficulty for me, field-mathematically speaking. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

ENDBack to Contents

[19] Jumping spiders and such.

Back to Contents CREATED AD 2008 09 02 P 08:40 (4) Jumping spiders and such.

:I'm actually fascinated by how aware jumping spiders appear to be of their world and even of the human beings that come within their range of vision. A spider that seems to play with me and to explore my hand, all the while watching my eyes, a spider that is only about 1/4 inch long, doesn't even have a proper brain. The complex of nerve cells that process the information it deals with must be about the size of the tip of a well-sharpened pencil. Yet they show clear signs of being not only aware of me but of being curious about me. (Ants seem far less reflective, far more governed by hard-wired responses -- bite this, eat that, flee anything big that shakes the ground around you, etc.) P0M (talk) 07:25, 31 August 2008 (UTC)

::I know EXACTLY what you're talking about. Of all spiders common around here, the only spider type I've ever willingly examined and handled is the jumping spider kind, as the other types are too . . . something . . . behaviorally speaking, for my primitive reptile brain's taste. Their forward looking eyes and higher level consciousness curiosity, as you note, really do set them apart from all other spiders I know about. Makes one wish they could talk!
To me, they don't merely "seem" to be exploring, but *are* exploring, as the behavior is quite non-survival, for a smart entity that knows most large moving objects are potentially dangerous predators. Definitely, the bonobo of the arachnid clan. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

::Ants are more the distributed entity, on the solus locus colony scale. Like opinion-confusing an ant or bee as a complex consciousness, as to a cell being a human locus, or jumping spider, and the equally disconcerting view that any one person is like a cell, on the planet scale of the species, or the movie Contact, that destroying the earth is no worse than destroying an anthill in Africa. *sigh* [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::As a child I spent a great deal of time studying the fauna of the front and back yard. One creature looked like a huge ant with a red abdomen. I knew from just watching how it moved that it was not something to be picked up. Actually it was a species of flightless solitary wasp. P0M (talk) 04:53, 1 September 2008 (UTC)

::::I did a lot of that too, but also a lot of inorganics. Like making stream beds in the back yard dirt, to my parents' chagrin at the flooding! Like watching time in a time machine, or a good computer simulation. Simulating clouds with milk in a salt water density layered aquarium. Tho' the milk does go bad after a few weeks *grins*. Observing puddle water in rain, as the waves, bubbles, and floating droplet particles danced, interacted, decayed, bonded, nested, and went nonlinear in downpours. And the prototypical disassembling of machines, and sometimes even reassembling them! Actually did a 1929 Underwood typewriter back in 1978ish. Man, that thing had *a lot* of parts, and systematic layout memorization. I even came out with a handful of extra parts, and the thing still worked!
They sure knew how to over-engineer back then and make things serviceable, unlike a lot of commercially available software today. 1980's software was a lot better in system and documentation and exemplar code in so many ways, that it's too bad they didn't scale them up as the years of speed have progressed. Guess human nature is too corrupt to permit global wisdom, as one of the many bad signs of the earth. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::You might be interested in: P0M (talk) 04:53, 1 September 2008 (UTC)

:::Phidippus johnsoni has received terrible press in California. Allegedly it is the most frequent of attackers of humans among California spiders. I couldn't believe it since I've been playing with other members of that genus for 50 years. I bought one from a tarantula dealer in Florida. It was completely unafraid of humans and completely unaggressive. There was some question in the dealer's mind as to whether it really was that species. I checked it out and decided on the basis of microphotographs of its genitalia that it was, but I also took the opportunity to buy a spiderling. Like the adult it started out being completely unaggressive, and not at all worried about my presence unless I shook its fishbowl by accident. Now it must be about fully mature, and it is still completely uninclined to bite. It got out once, unbeknownst to me, and I discovered it inside a curl of paper on my desk. My karate training took over and without intervention of discursive thought I reached down and picked it up between thumb and forefinger. If there is anything you can do to a jumping spider to get it to bite, it is to squeeze or pinch it. Nevertheless, the spider made no objection, I put it back in the glass globe, and she went on about her normal activities with no sign that she was in the least upset.
P0M (talk) 04:53, 1 September 2008 (UTC)

::::Hmm, I never noticed any jumping spiders bite me, no matter how I handled them, back when. *shrugs shoulders*. Interesting, I guess I don't pinch, ROFL. In any event, I have noted in the local urban area, here, that around 1990, the main visible spider species shifted from garden orb weavers to daddy-long-legs varieties. Not sure what the climatological cause is for this local LA demographic shift. Likewise, the 1970's had a large amount of plague warnings, and now I see little public plague warnings, though the warnings were received at my locus through school back then, and now the local newspapers and town don't show similar coverage. So I can assume, among other things, that the wildlife and flea population have declined, or been "ecologically purified cleansed", in the general town area, due to urbanization. *sigh* if there's a kernel of truth in the CA reports, then the environment must be historically-temporally hostile to P.j "Californicus", breeding the vigilant P.j.C.. As the Jeff Goldblum character said in "Jurassic Park", "nature always finds a way.", and if it is a top-level organized bacteria that can eat all macro-scale-life, a comet of perfect design, an arms race to mutually assured destruction, a talking ape race, a technological grey goo, Terminators / I Robot / Colossus, or whatever, that knows what's truly best . . . well, a cursory education should be enough, one hopes, as one doesn't need to be [Kai|Cae|Keec(Kees)|Ce]sar to understand [Kai|Cae|Kec(Kes)|Ce]sar. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::I made the video primarily because I had just purchased an electronic microscope and wanted to try out the video function. I herded her onto my thumb and she ran up my arm, watching the video camera that I was tracking her with using my right hand and arm.
Just as she hit a particularly complex clump of arm hair and paused to take a good look at the flying lighted thingy, the camera timed out. (You get a default 60 seconds unless you preset for a longer time.) P0M (talk) 04:53, 1 September 2008 (UTC)

::::If you want to get the best photos of a spider, have the spider sleeping, go macro for full frame close-up, use the highest f-stop possible (e.g. f/16, f/32, f/64+) under bright light, and capture the spider on many focal planes. Then sandwich the images "appropriately" in Photoshop, or similar focal plane stacking software, to accumulate all the in-focus detail planes in one process-combined photo. I've seen that there's a Wiki-article somewhere on this focal-plane stacking enhancement process. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::I have a large arboreal tarantula of the Avicularia genus, species uncertain. She has a large cage and the dealer told me he thought she was likely to be snappish, so I left her alone for several months. A web weaving spider got in somehow and the tarantula encountered the tangle web that the other spider wove. It was very upset. I opened an access hatch in the side of the cage that I generally use to change the water, etc. I had no idea that the spider would have even noticed it. As soon as I opened the round hatch the spider made directly for it, walked out onto my hand, and calmly let me put it in a sort of plastic shoe box while I took the transparent front off the cage and dealt with the web and its weaver. I thought about handling her the officially correct way by herding her into a cup, covering it, etc., but decided I would likely have trouble getting her out of the cup and through the hole, so I just urged her back onto my hand, walked her over to the original cage, opened the hatch, and she walked right in. P0M (talk) 04:53, 1 September 2008 (UTC)

::::*grins*.
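(The focal-plane stacking process described a little above can be sketched in code. This is a toy illustration only, not any particular tool's algorithm: frames are 2D lists of gray values, "sharpness" is a crude 3x3 local variance, and each output pixel is copied from whichever frame is locally sharpest. Real stackers such as Photoshop's also align frames and use better focus measures.)

```python
# Toy focal-plane ("focus") stacking sketch: per pixel, keep the value
# from the frame that is sharpest in a small neighborhood around it.
# Images are plain 2D lists of floats; no external libraries needed.

def local_variance(img, y, x):
    """3x3 neighborhood variance as a crude sharpness measure."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def focus_stack(frames):
    """For each pixel, pick the frame with the highest local sharpness."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = max(frames, key=lambda f: local_variance(f, y, x))
            out[y][x] = best[y][x]
    return out
```

For example, stacking one frame that is sharp only on its left half with one sharp only on its right half recovers detail across the whole combined image.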
Hopefully there will be plenty of well behaved and perfectly self-protecting spiders in heaven or immortal digital virtual world planes in the future. Like I remember some document from decades before me, describing the "curse" of drinking being, among other things, seeing spiders crawling over them that weren't there. Wouldn't it have been karmically better in design if good spiders were hurt, then the hurter saw spiders crawling over them, as a perfectly designed instant-karma lesson, and that drinking had no bad press. But then again, what I've seen, and how I'd do the world, are so different from "the way things are", and "not how one makes them". [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::Other people tell me that the tarantulas have the capacity to learn that humans are not going to hurt them, and that a tarantula that might initially be unsuitable to take to class to show your third graders may get tame after a period of gentle handling (which amounts to herding the spider onto your hand, letting it walk around a bit, and herding it back.) P0M (talk) 04:53, 1 September 2008 (UTC)

::::If my QP positions enhanced from what I've read of others' ideas are close to any true reality, it may even be inter-being quantum-entanglement, as well as the conventional emergent biochemical learning, for the Leibniz and reductionist, both, depending on how systemically sophisticated the spiders-you meta-supra-meso-system are. Like other primates, dolphins, porpoises, many general mammals, the special lower animals, who knows the unity, despite the appearances, without the proper translation matrices for communication. Too bad all life consumes all life, to survive, given the design we're all stuck in.
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::My first tarantula, a species that is known by some as a "living rock", lived in a fairly large space, but I thought she might want to roam farther. So I screened in the space between the room-side part of a window to the outside, and the glass of the window. I cut a circular hole in the side of her box and connected that cage to the window cage with some clothes dryer venting duct tubing. So she learned she could go out through the hole and get out to look out the window and explore that area. One day I was going to be out of the house for a while and I was afraid the meter reader may walk by the window and have a heart attack, so I took the circular plug that I had cut out of the side of the cage and put it back in. The cage was made of that kind of pressed wood shavings and resin stuff, so it was quite heavy, at least compared to the tarantula. When I came back I was surprised to discover that the plug was out of the hole. Later it happened a second time and I decided that the spider was somehow managing to get it out without having it fall over on top of her, and then going on up through the tube. P0M (talk) 04:53, 1 September 2008 (UTC)

::::*sigh* just taking care of one's self can be a full time job. You're pretty big, taking in so many spiders, so. If I were to be sardonic, I could say, they have *you* trained well! *tongue in cheek*, that joke is older than I AM, (rim-shot) LOL, *groans*, I think I hurt myself . . . I ate a bug. I do know the feeling of wanting to roam in some ways, but not others, simultaneously, tho', myself, and maybe everyone and everything relates, in some way, at some points, in time-space-matter. But there's also that Jewish story of wanderlust that just leads one right back to home, in full cycles / circles . . .
a nice thought, even if always taught as one way mystery trips of limited-free-will, with trials, promises, separations, and secrets. A gilded cage universe. The "living-rock" tarantula also reminds me of a theory I ran across in the 1990's, about isolating a general computer in amorphous materials like a rock, by properly interpreting the solus-locus of an inherent computer, in a hyper=complex-hyper=computational format, though quite incoherent in conventional coherent observations, except at the coherency translational interface. I wish I could remember that source, offhand, but alas, I'm not on the net for that right now . . . so to speak. I'm definitely familiar with the theory, tho'. The idea was even allusionally cited in a recent cartoon, "Camp Lazlo", where the campers "Chip" and "Skip" built a computer out of sticks and stones that was smarter than the operator they gave it to, another camper, "Edward" . . . so funny, even I have to admit that. And of course, that is similar to Hindu-Buddhist related universal distributed consciousness, or the QP-God turning in my head right now, though its particular and peculiar hierarchical separation from this plane of manifestation is disturbing on many levels. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

::::Well, enough stream of consciousness, for now. Very enjoyable. I'll back up this web image, in case I need to wipe out this whole LRD page completely, with a local copy in my hand, as I finally found and read some Wiki "law", and I may be going "against the rules" of Wiki, even if in discussion only. They really ought to have had article sections for "official status-quo article section", "controversial status section", and "open discussion forum", for each article, and a smart interactive interface, cross-referencer engine, to create a Wiki locus system that might even become conscious.
So, anyway, you may wish to save a copy on your PC, for yourself, remotely, in case I have to zap it from here, from general common viewing. I've archived "early and often", myself. Hehe, sounds like an old-time Chicago election voting motto. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 03:49, 5 September 2008 (UTC)

:::::Some contributors begin to think that they run things, and Wikipedia is governed in a fairly egalitarian way, so they are right -- at least in those cases where there is general community consent. The way Wikipedia works would be a good subject for a sociologist to take on.

:::::In general I agree that the discussion pages for articles should be restricted to matters that are pertinent to the article. And everything that goes into an article should be backed up with good citations. But sometimes articles have to be discussed at a meta level, and in those cases I think that it is worthwhile to discuss what evidence needs to be sought out.

:::::Sometimes, too, a person who is unfamiliar with the issues or the science surrounding some issue will make changes in an article or start a fight on the discussion page. In those cases I think it is worthwhile to use the discussion page to try to educate the contributor.

:::::Anyway, unless somebody writes something libelous in an article the old versions of everything are preserved and you can go back to the earliest version of any article. Sometimes people forget this fact and write things they wish would go away.

:::::Of course if somebody were to be really obstreperous and misuse the facilities, e.g., by copying in the entire text of ''War and Peace'' and the 1910 version of ''Compton's Pictured Encyclopedia'', that person would probably be banned. But you really have to give evidence of being ill intentioned, unwilling to discuss things responsibly, or edit warring to get banned.
But it is best to try to get along with people, not let ego-centric concerns get involved, etc. [[User:Patrick0Moran|P0M]] ([[User talk:Patrick0Moran|talk]]) 03:15, 4 September 2008 (UTC)

:::::Back to spiders for the moment, the real issue to me is, "What is consciousness?" I think it is a fair and important question. I think you and I are probably on the same wavelength even though I have trouble following your way of expressing yourself. There are questions that have relevance to quantum mechanics because quantum mechanics is the best we have going for us in explaining how the Universe works, and consciousness should come into it somehow even though the nature of consciousness means that it cannot be an inter-subjective object of inquiry, and that is one of the requirements to be fulfilled by anything that is the subject of empirical inquiry. There are also resonances with the Antinomies of Kant, questions of self reference that plague mathematics (I'm thinking of Russell and Whitehead here), etc. Sometimes (always ?) you need to ask the questions clearly before you can find the answers. [[User:Patrick0Moran|P0M]] ([[User talk:Patrick0Moran|talk]]) 03:29, 4 September 2008 (UTC)

ENDBack to Contents

[20] Multidimensional Taylor-Laurent series special various applications.

Back to Contents AD 2008 09 08 P 1130 (mat), from my earlier post

You mentioned the Taylor series, as an analog to analytic solutions to ID analytics problems, in increasing approximation of degreed terms. I wonder if you've ever heard of an analytic mathematical space, that I will describe. For background, last year I was thinking Greek in math spaces, and came across an elegant analytical vector space.
Imagine a space of 1 to N dimensions in size, such that, for example, the input variables (x,y,z) of a function relate to the space, at all points, by:

first_f(x,y,z) = X^x * Y^y * Z^z

So, for example, at (x,y,z) = (1,2,3), the relationship in this analytic space is X * Y^2 * Z^3.

After the space, e.g., is defined in its relationship to input variables, one now adds weighted dirac deltas or "samplers" to the space at select points of (x,y,z), like (2,0,0), (0,2,0), (0,0,2), and also one adds a second general function that can be placed around the space,

second_f(R^N) = f(volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)),

second_f(R^N) = (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 0.5,

which in this particular example yields:

second_f(R^N) = (X^2 + Y^2 + Z^2) ^ 0.5,

which, as you may well recognize, is the distance measure of the point. Now the elegance of the vector space is shown when you examine many geometric equations, within this framework, in parallel equivalent notation:

(0) distance of point, for N=3: weighted_dirac_deltas = {1@{(2,0,0), (0,2,0), (0,0,2)}} (deltas on a plane)

(1) volume of cube, for N=3: weighted_dirac_deltas = {1@(1,1,1)}

(2) perimeter of triangle, for N=3: weighted_dirac_deltas = {1@{(1,0,0), (0,1,0), (0,0,1)}} (deltas on a plane)

(3) area of triangle, for N=3: weighted_dirac_deltas = {v1@{(4,0,0), (0,4,0), (0,0,4)}, v2@{(3,1,0), (1,3,0), (0,3,1), (0,1,3), (1,0,3), (3,0,1)}, v3@{(2,2,0), (0,2,2), (2,0,2)}, v4@{(2,1,1), (1,2,1), (1,1,2)}} (deltas on a plane)
Area = ((1/16) volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 0.5,

(4) area of radian spherical triangle of radius R, for N=3: weighted_dirac_deltas = {-pi@(0,0,0), 1@{(1,0,0), (0,1,0), (0,0,1)}} (deltas on a plane)
Area = ((1/R^2) volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas(x,y,z) dx dy dz)) ^ 1.0,

(5) radius of inscribed circle, for N=3:
RadInsc = ((1/16) volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ 0.5 * ((1/2) volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas2(x,y,z) dx dy dz)) ^ -1.0,

(6) radius of circumscribed circle, for N=3: weighted_dirac_deltas2 = {1@(1,1,1)}

(7) sine(X) Taylor series, for N=1: weighted_dirac_deltas = {1@(1), -1/3!@(3), 1/5!@(5), -1/7!@(7) ...} (deltas on a line)
SineTaylor = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(8) cosine(X) Taylor series, for N=1: weighted_dirac_deltas = {1@(0), -1/2!@(2), 1/4!@(4), -1/6!@(6) ...} (deltas on a line)

(9) tangent(X) Taylor series, for N=1: weighted_dirac_deltas = {1@(1), 1/3@(3), 2/15@(5), ...} (deltas on a line)

(10) exponent(X) Taylor series, for N=1: weighted_dirac_deltas = {1@(0), 1/1!@(1), 1/2!@(2), 1/3!@(3), 1/4!@(4) ...} (deltas on a line)

(11) exp(-1/X^2) Laurent series, for N=1: weighted_dirac_deltas = {1@(0), -1/1!@(-2), 1/2!@(-4), -1/3!@(-6) ...} (deltas on a line)
Exp(-1/x^2)Laurent = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0,

(12) 1/(X^3(1-X)) Laurent series, for N=1: weighted_dirac_deltas = {1@{(-3), (-2), (-1), (0), (1), (2), ...}} (deltas on a line)
1/(X^3(1-X))Laurent = (volume_integral_over(first_f(x) * weighted_dirac_deltas(x) dx)) ^ 1.0.

(13) linear affine transform of (X,Y,Z) coordinates, for N=3:
weighted_dirac_deltas1 = {v1@(1,0,0), v2@(0,1,0), v3@(0,0,1)} (deltas on a plane)
weighted_dirac_deltas2 = {v4@(1,0,0), v5@(0,1,0), v6@(0,0,1)} (deltas on a plane)
weighted_dirac_deltas3 = {v7@(1,0,0), v8@(0,1,0), v9@(0,0,1)} (deltas on a plane)
Affine(X,Y,Z) = ((volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ 1.0, (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas2(x,y,z) dx dy dz)) ^ 1.0, (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas3(x,y,z) dx dy dz)) ^ 1.0),

(14) second order affine transform of (X,Y,Z) coordinates, for N=3:
weighted_dirac_deltas1 = {v1@(1,0,0), v2@(0,1,0), v3@(0,0,1), v4@(2,0,0), v5@(1,1,0), v6@(0,2,0), v7@(0,1,1), v8@(0,0,2), v9@(1,0,1)}
weighted_dirac_deltas2 = {v10@(1,0,0), v11@(0,1,0), v12@(0,0,1), v13@(2,0,0), v14@(1,1,0), v15@(0,2,0), v16@(0,1,1), v17@(0,0,2), v18@(1,0,1)}
weighted_dirac_deltas3 = {v19@(1,0,0), v20@(0,1,0), v21@(0,0,1), v22@(2,0,0), v23@(1,1,0), v24@(0,2,0), v25@(0,1,1), v26@(0,0,2), v27@(1,0,1)}
Affine(X',Y',Z') = (volume_integral_over(first_f(x,y,z) * weighted_dirac_deltas1(x,y,z) dx dy dz)) ^ 1.0, and similarly with weighted_dirac_deltas2 and weighted_dirac_deltas3 for Y' and Z',

(15) multiplication of two complex numbers, for N=4:
weighted_dirac_deltas1 = {1@(1,0,1,0), -1@(0,1,0,1)} (deltas on a plane)
weighted_dirac_deltas2 = {1@{(1,0,0,1), (0,1,1,0)}} (deltas on a plane)
ComplexMult(Re,Im) = ((volume_integral_over(first_f(w,x,y,z) * weighted_dirac_deltas1(w,x,y,z) dw dx dy dz)) ^ 1.0, (volume_integral_over(first_f(w,x,y,z) * weighted_dirac_deltas2(w,x,y,z) dw dx dy dz)) ^ 1.0)

By stepping outside of the system one level, and making a higher geometry formulation, arranged in sets and simpler operations, one can encapsulate in this analytic space formulation numerous geometry equations, Taylor series, by implication Maclaurin series, Laurent series, affine transforms, complex math, and likely numerous other multivariable polynomial power equations.
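The delta-sampling formulation above can be checked numerically. Because the weighted dirac deltas collapse the volume integral to a sum over the delta locations, evaluating second_f reduces to a weighted power sum followed by the outer power. A minimal sketch (function and variable names here are mine, not from the discussion):

```python
import math

# A weighted dirac delta set is a list of (weight, exponent_tuple) pairs;
# integrating first_f(x,y,z) = X^x * Y^y * Z^z against it collapses to a
# sum over the delta locations.

def eval_deltas(deltas, point, outer_power=1.0):
    """Sum weight * prod(coord**exp) over the deltas, then apply the
    outer power of second_f."""
    total = 0.0
    for weight, exponents in deltas:
        term = weight
        for coord, exp in zip(point, exponents):
            term *= coord ** exp
        total += term
    return total ** outer_power

# Example (0): distance of a point, deltas at (2,0,0), (0,2,0), (0,0,2),
# outer power 0.5.
distance_deltas = [(1.0, (2, 0, 0)), (1.0, (0, 2, 0)), (1.0, (0, 0, 2))]
print(eval_deltas(distance_deltas, (3.0, 4.0, 0.0), 0.5))  # 5.0

# Example (7): sine Taylor series, deltas on a line at the odd powers.
sine_deltas = [((-1.0) ** k / math.factorial(2 * k + 1), (2 * k + 1,))
               for k in range(10)]
print(eval_deltas(sine_deltas, (1.0,)))  # ~0.84147 = sin(1)
```

The same helper reproduces example (1) (volume of a cube) with the single delta 1@(1,1,1), which is one way to see that all fifteen cases share a single evaluation rule.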
Also, many of the equations show compact systematic natures, occurring, for many of these examples, on sets of weighted_dirac_delta planes and/or lines. These examples also remind me of the analytic versions of single layer neural networks.

With the addition of the following approximating system, one can take real-value (not-integer-only) derivatives of the simple unitary (1*mTL) multidimensional Taylor-Laurent series coordinates, in multiple dimensions, with some accuracy between powers of 0 and 10:

derivative(derivative_amount, coefficient*x^power) => coefficient'*x^(power - derivative_amount)

c''(p) = (c00 + p*((c01/p + c11)*log10(p) + c12*log10(p)^2 + c14*log10(p)^4 + c15*log10(p)^5))

with the appropriate selection of fixed c00, c01, c11, c12, c14, c15, very roughly 6.56, 0.00002, -0.42, -0.26, 0.041, -0.011. Wiki reports the Gamma function can be used to exactly take arbitrary real-valued derivatives of the same Taylor-Laurent series coordinates (for a single power term, d^a/dx^a of x^p is (Gamma(p+1) / Gamma(p+1-a)) * x^(p-a)).

Do you know what this power vector space is called, from ID analytic methods, other than a multi-dimensional Taylor-Laurent series? I have not been able to find the name of this system myself in research.

ENDBack to Contents

[21] Lunar Retroreflector Rainbow / Planetary Crystallographic Reflections

Back to Contents AD 2008 09 15 P 0800 (sci) from earlier talks

Lunar Retroreflector Rainbow / Planetary Crystallographic Reflections

~~~~ Has anyone read anywhere, any references to the generation of a lunar retroreflector rainbow image, or a detailed descriptive retroreflector map, for the lunar surface, from the beginning of astro-photography, through NASA, to current research, covering such topics as described here? The lunar surface contains a variable portion of spheroidal glass, from volcanic, meteoric, and asteroidal impacts.
Such glassy objects will generate, at the primary rainbow angle from the solar nadir, a retroreflection of net sunlight compared to the natural lunar surface albedo. If the spheres are well rounded, they will generate a rainbow from the sunlight, and if they are rough and ellipsoidal, there will be a statistical spread of retroreflected light. A sequence of images of the moon with (1) high pixel resolution, (2) high dynamic range, (3) high luminance resolution, (4) multi-spectral coverage, and (5) carefully calibrated characteristics to account for sensor and atmosphere, taken as the moon crosses into and out of the waxing and waning gibbous phases around both ~42 degree primary rainbow separations from the solar nadir, can be used to morphologically, algorithmically, and differentially calculate the additional reflectance of the whole moon's surface caused by the various distributions of glass spheroids across the lunar surface. The spectral characteristics of the net retroreflectance luminance could also be used to estimate the sphere distribution, the spheroid shape and size distributions, and the spheroid glass types, as dispersed across the lunar surface. ~~~~ I have seen topographic maps of the moon from NASA high resolution images from the 1960's, color maps of the moon showing normal reflectance from different rock types, the halo glory at the solar nadir, and I have heard of transient lunar phenomena; but for lunar glass spheroid retroreflectors, I have seen no images, maps, or characterized spheroid distributions.
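The primary bow angle follows from minimum deviation of one internal reflection inside a sphere. A hedged sketch below: the familiar ~42 degrees holds for a water-like refractive index n of about 1.33, while a typical glass index near 1.5 (my assumption for lunar spheroids) gives a markedly smaller angle, which would matter for where in the gibbous phases the retroreflection band sits:

```python
import math

def primary_rainbow_angle(n):
    """Angle (degrees) of the primary bow from the antisolar point for spheres
    of refractive index n. Deviation D(i) = 2i - 4r + pi with sin r = sin i / n
    is minimized where cos^2(i) = (n^2 - 1) / 3; the bow sits at 4r - 2i."""
    i = math.acos(math.sqrt((n * n - 1.0) / 3.0))  # incidence at minimum deviation
    r = math.asin(math.sin(i) / n)                  # refraction angle inside sphere
    return math.degrees(4.0 * r - 2.0 * i)

assert 41.0 < primary_rainbow_angle(1.333) < 43.0  # water droplets: ~42 degrees
assert 21.0 < primary_rainbow_angle(1.50) < 24.0   # assumed glass: ~23 degrees
```

So a survey of the kind proposed here would want to bracket a range of bow angles, not only the 42 degree water value.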
~~~~ Neither have I heard of any similar images taken from the probes sent to Venus, Jupiter, Saturn, Mars, or their moons or rings (where applicable), of primary rainbow spheroidal light characterizations (or hexagonal reflection zones for the ice crystals of Saturn's rings), where sensors may be capable of sensing the additional (net) retroreflection light with such differential light calculations in multiple images. ~~~~ END
Back to Contents

[22] Wikipedia Laws of Classical Conservation shortfall. CREATED AD 2008 09 16 P 1050 (sci)

== Conservation Laws ==

I've read the articles of conservation, regarding classical properties, and the previous discussion comment on mass motion conservation on this conservation law article. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon|talk]]) 18:21, 17 September 2008 (UTC)

In the classical domain, the Wiki list of classical macroscopic conservation laws appears incomplete. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 05:51, 17 September 2008 (UTC)

Condensed, your list contains 2 out of 3 classical systems interaction conservation laws that I can remember: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 05:51, 17 September 2008 (UTC)

(1) Conservation of classical system energy / momenta: potential, linear kinetic, angular kinetic, thermal,
(2) Conservation of classical system matter: charged, neutral, energy equivalent (low energies).

There's a third form of conservation in the classical domain that is missing from the list: [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 05:51, 17 September 2008 (UTC)

(3) Conservation of translation-macroscopic configuration. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 05:51, 17 September 2008 (UTC)

One can see it in a one dimensional case.
Take a sealed unit with a mass at one end, and two electromagnetic launchers / catchers at both ends. One end can launch the mass to the other end, that catches it. At this point the sealed unit is stationary in steady state, and translated. Then the other end can launch the mass back to the first end. At this point the sealed unit is stationary again, and returned back to the exact original starting position, and original macroscopic configuration equivalent (thermal agitation consuming energy influence is virtually negligible). [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 17:58, 17 September 2008 (UTC) A similar case can be seen in a sealed angular momentum case. Spin accelerating a mass causes a sealed unit frame of reference to spin in the opposite direction. Stopping the mass, and reverse spinning the mass to return its frame of reference to the original spatial phase, will also return the sealed unit back to its original frame of angular phase reference, before being brought to a calculated stop. So original angular translation and macroscopic configuration equivalent is restored (thermal agitation consuming energy influence is virtually negligible). [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 17:58, 17 September 2008 (UTC) Another case can be seen in complex classical material motion cases. Take a sealed unit with a fluid. One end launches the fluid to the other end into a catch. Once the fluid has stopped moving the unit is translated and stationary. The other end then launches the fluid back to the first end, into the catch it came from. Once the fluid has stopped moving, the unit is back to its original position, and same macroscopic configuration equivalent (thermal agitation consuming energy influence is virtually negligible). 
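A quick numerical check of the launcher thought experiment above (the masses and internal span are hypothetical values of mine): with zero external force the total momentum stays zero, so the center of mass never moves; the unit shifts while the mass is in flight, and the shift cancels exactly on the return throw.

```python
M, m, L = 9.0, 1.0, 10.0  # hypothetical unit mass, projectile mass, internal span

def throw(relative_displacement):
    """Ground-frame displacements (unit, projectile) for one throw-and-catch,
    from p_total = 0:  M*dx_unit + m*dx_proj = 0,
    and the geometry:   dx_proj - dx_unit = relative_displacement."""
    dx_unit = -m * relative_displacement / (M + m)
    dx_proj = M * relative_displacement / (M + m)
    return dx_unit, dx_proj

d1_unit, d1_proj = throw(+L)  # launch to the far end: unit recoils by -m*L/(M+m)
d2_unit, d2_proj = throw(-L)  # launch back: unit returns the same amount

assert abs(d1_unit + d2_unit) < 1e-12           # unit back at its start position
assert abs(M * d1_unit + m * d1_proj) < 1e-12   # center of mass never moved
```

The same zero-sum bookkeeping applies to the angular and fluid cases: each cycle integrates the configuration change back to zero.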
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 17:58, 17 September 2008 (UTC)

As I reckon, the path integrals of potential energy, to kinetic energy-momenta, into thermal energy, with a cyclic return to the original equivalent configuration, always integrate the linear translation, angular translation, and positional configuration back to 0, for macro-meso-micro scale statistically conservative force systems. [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon|talk]]) 18:22, 17 September 2008 (UTC)

END
Back to Contents

[23] Renewable nuclear energy. CREATED AD 2008 09 17 A 1130 (sci)

== Renewable nuclear energy. ==

Creating a renewable nuclear energy source may be possible between the earth and sun. One could create a magnetic catch ring (or a high light flux solar cell array) placed in orbit around the sun, together with an orbital transport of slugs of "recharged" nuclear material back to the earth, and an orbital transport of depleted material slugs from the earth to the sun. The magnetic catch may be able to deflect a sufficient amount of charged solar particle radiation (or, alternatively, the solar cells could drive an accelerator for altering depleted nuclear material nuclei) in order to create a stable radioactive isotope suitable for fission reactor use. Then a reactor in orbit around the earth could be run on the recharged nuclear material slugs, and then microwave the energy to earth. Solar energy and particle radiation is definitely more dense near the sun, by the inverse square law, and renewed radioactive slugs would be the most compact form of transporting the energy between the sun and the earth.
[[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 18:45, 17 September 2008 (UTC)

Has NASA, or a similar agency like the Department of Energy, ever done a full study, in today's technology base and dollars, to know whether the nuclear reactions of the radioactive elements offer enough reverse reactions for producing suitable radioactive nuclei in sufficient amounts, with an appropriate particle accelerator and/or the solar wind particle flux, to create this renewable nuclear energy? [[User:LoneRubberDragon|LoneRubberDragon]] ([[User talk:LoneRubberDragon#top|talk]]) 18:55, 17 September 2008 (UTC)

END
Back to Contents
Franck-Condon factors in studies of dynamics of chemical reactions. I. General theory, application to collinear atom-diatom reactions
George C. Schatz, John Ross
Research output: Contribution to journal (Article, peer-review); 104 Citations (Scopus)

We derive and show the utility of an approximate theory of chemical dynamics based on a generalized Franck-Condon factor. We begin by showing how the general expression for the transition matrix for an electronically adiabatic reaction may be rewritten in terms of a transition between two surfaces through the use of a quasiadiabatic representation. This exact transition matrix may be reduced to a Franck-Condon overlap integral in a variety of ways, and one possible sequence of approximations for accomplishing this reduction is outlined. We neglect terms due to virtual transitions to excited electronic states, make a Born-Oppenheimer approximation, neglect terms involving gradients of the nuclear wavefunction (low kinetic energy approximation), and finally make a Franck-Condon approximation. The overlap is then evaluated for the special case of collinear exoergic atom-diatom reactions for the purpose of studying product state vibrational distributions in these reactions. The evaluation is done approximately by using physical arguments to estimate the general appearance of the reagent and product quasiadiabatic surfaces, and assuming separable solutions to the Schrödinger equation on each surface. The overlap integral is then further approximated by expanding the integrand about the nuclear configuration of maximum overlap. This enables us to obtain a simple analytical result for the product state distribution, using either harmonic or Morse oscillator vibrational wavefunctions. We then use the resulting expressions to study the dynamics of the collinear F+H2(D2) and H(D)+Cl2 reactions.
In both applications we find that the Franck-Condon overlap is capable of a qualitatively correct description of the product state distributions, including dependence on reagent translational energy, mass ratios, and various features of the potential energy surface. Furthermore, a physical description of the origin of a dynamic threshold effect in the F+H2(D2) reaction is provided, as is a simple interpretation of the role of potential energy release behavior in the determination of product state distributions.

Original language: English
Pages (from-to): 1021-1036
Number of pages: 16
Journal: Journal of Chemical Physics
Issue number: 3
Publication status: Published - 1977
ASJC Scopus subject areas: Atomic and Molecular Physics, and Optics
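As a heavily simplified illustration of the Franck-Condon overlap at the heart of this approach, the sketch below numerically integrates the overlap of two displaced, equal-frequency harmonic-oscillator ground states in dimensionless units. This is only the textbook limiting case, not the paper's quasiadiabatic surfaces; the displacement value is arbitrary:

```python
import math

def ho_ground(x):
    """Dimensionless harmonic-oscillator ground state psi0(x)."""
    return math.pi ** -0.25 * math.exp(-0.5 * x * x)

def fc_overlap(d, lo=-12.0, hi=12.0, n=4000):
    """Franck-Condon overlap <0|0'> of ground states displaced by d, via
    trapezoidal quadrature; the analytic value is exp(-d**2 / 4)."""
    h = (hi - lo) / n
    s = 0.5 * (ho_ground(lo) * ho_ground(lo - d) + ho_ground(hi) * ho_ground(hi - d))
    for k in range(1, n):
        x = lo + k * h
        s += ho_ground(x) * ho_ground(x - d)
    return s * h

d = 1.7
assert abs(fc_overlap(d) - math.exp(-d * d / 4.0)) < 1e-6
```

The exponential suppression with displacement is the mechanism by which the geometry change between reagent and product surfaces shapes the product state distribution.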
Current-induced spin polarization in spin-orbit-coupled electron systems

Ming-Hao Liu, Department of Physics, National Taiwan University, Taipei 10617, Taiwan
Son-Hsien Chen, Department of Physics, National Taiwan University, Taipei 10617, Taiwan
Ching-Ray Chang, Department of Physics, National Taiwan University, Taipei 10617, Taiwan

July 15, 2019

Current-induced spin polarization (CISP) is rederived in ballistic spin-orbit-coupled electron systems, based on equilibrium statistical mechanics. A simple and useful picture is correspondingly proposed to help understand the CISP and predict the polarization direction. Nonequilibrium Landauer-Keldysh formalism is applied to demonstrate the validity of the statistical picture, taking the linear Rashba-Dresselhaus [001] two-dimensional system as a specific example. Spin densities induced by the CISP in semiconductor heterostructures and in metallic surface states are compared, showing that the CISP increases with the spin splitting strength and hence suggesting that the CISP should be more observable on metal and semimetal surfaces due to the discovered strong Rashba splitting. An application of the CISP designed to generate a spin-Hall pattern in the inplane, instead of the out-of-plane, component is also proposed.

PACS: 72.25.Pn, 71.70.Ej, 85.75.-d

(Present address: No. 2-1, Fushou Lane, Chengsiang Village, Gangshan Township, Kaohsiung County 82064, Taiwan)

I. Introduction

The aim of preparing and controlling spins in all-electrical nonmagnetic devices has been shown to be possible in semiconducting bulk and two-dimensional electron systems (2DESs).Awschalom et al. (2002); Kato et al.
(2004a) Besides the optical spin injection, a much more natural way of spin orientation is to make use of the spin-orbit (SO) coupling due to the lack of inversion symmetry of the underlying material.Winkler (2003) When passing an unpolarized electric current (electrons carrying random spins) through an SO-coupled material, spin-dependent consequences arise, among which two famous phenomena are the spin-Hall effect (SHE)D’yakonov and Perel’ (1971); Hirsch (1999); Murakami et al. (2003); Sinova et al. (2004); Kato et al. (2004b); Wunderlich et al. (2005) and the current-induced spin polarization (CISP). In the CISP phenomenon, an unpolarized electric current is expected to become spin-polarized when flowing in an SO-coupled sample. This effect was first theoretically proposed around 1990. EdelsteinEdelstein (1990) employed linear-response theory to calculate the spin polarization due to an electric current in the presence of SO coupling linear in momentum, taking into account low-concentration impurities. Aronov and Lyanda-GellerAronov and Lyanda-Geller (1989) solved the quantum Liouville's theorem for the spin density matrix to show the CISP, taking into account scattering as well. Recently, the CISP phenomenon has been experimentally proven.Kato et al. (2004c); Sih et al. (2005); Yang et al. (2006) Moreover, both the SHE and CISP have been observed at room temperature.Stern et al. (2006) In this paper we propose another viewpoint based on equilibrium statistical mechanics to explain the CISP in the absence of impurity scattering, for both bulk and two-dimensional systems. We show that the canonical ensemble average (CEA) of electrons moving with a wave vector k immediately prescribes a spin polarization antiparallel to the effective magnetic field B_eff(k) stemming from the underlying SO coupling, not necessarily linear in k, and hence explains the CISP. Correspondingly, a much simpler picture, compared to the early theoretical works of Refs.
Edelstein, 1990 and Aronov and Lyanda-Geller, 1989, helps provide a qualitative and straightforward explanation for the CISP: In an SO-coupled 2DES without external magnetic field, an ensemble of rest electrons is unpolarized, while it becomes spin-polarized antiparallel to the effective magnetic field B_eff(k) when moving along k (see Fig. 1). Figure 1: (Color online) Statistical picture of the current-induced spin polarization phenomenon. To demonstrate the validity of this elementary statistical argument, spin and charge transports in finite-size four-terminal conducting 2DESs with Rashba and linear Dresselhaus [001] SO couplings are numerically analyzed using the more sophisticated Landauer-Keldysh formalism (LKF),Datta (1995); Nikolic et al. (2005, 2006) allowing for nonequilibrium statistics. Good agreement between the analytical CEA and the numerical LKF will be seen, consolidating our statistical picture. In addition to the semiconducting heterostructures, we also extend the analysis of the CISP to metal and semimetal surfaces, and compare the polarization strengths. Finally, an application of the CISP, resembling an inplane SHE, will subsequently be proposed. Throughout this paper, all the band parameters used in the LKF are extracted from experiments by matching the band structures calculated by the tight-binding model (and hence the density of states calculated by the LKF) with the experimentally measured ones.Liu et al. (2007) This paper is organized as follows. In Sec. II, we discuss the general properties of the system with SO coupling and derive the CISP in the ballistic limit using statistical mechanics. In Sec. III the LKF is applied partly to examine the validity of the statistical picture of the CISP introduced in Sec. II, and partly for further investigation. A summary of the present work will be given in Sec. IV.
II. Analytical derivations

Consider an SO-coupled system, subject to the single-particle Hamiltonian H = (p²/2m*) 1 + Ω(k)·S, where m* is the effective mass, 1 is the identity matrix, S = (ħ/2)σ is the spin operator, σ being the Pauli matrix vector, and Ω(k) is the momentum-dependent Larmor frequency vector, with B_eff(k), collinear with Ω(k), being the effective magnetic field stemming from the SO coupling.Žutić et al. (2004)

II.1 Larmor frequency vectors

For III-V (zinc blende) bulk semiconductors,Dresselhaus (1955) the Larmor frequency in Eq. (1) is written as Ω(k) proportional to κ,D’yakonov and Perel (1971) where α_D is a dimensionless parameter specifying the spin-orbit coupling strength, E_g is the band gap, and κ is given by κ = (k_x(k_y² − k_z²), k_y(k_z² − k_x²), k_z(k_x² − k_y²)). Here the k_i's are the wave vector components along the crystal principal axes. When restricted to two dimensions, the component of the wave vector normal to the 2DES is averaged. For [001] quantum wells, one has ⟨k_z⟩ = 0 and ⟨k_z²⟩ ≈ (π/d)² to rewrite Eq. (3) as κ = (k_x(k_y² − ⟨k_z²⟩), k_y(⟨k_z²⟩ − k_x²), 0), so that the Larmor frequency (2) takes the form of Eq. (4), where β, defined by Eq. (5), is referred to as the Dresselhaus SO coupling constant. The parameter α_D (corresponding to that of Ref. Winkler, 2003) is material-dependent and is roughly for both GaAs and InAs.Knap et al. (1996); Winkler (2003) The first term in Eq. (4), Ω_D(k) = (2β/ħ)(−k_x, k_y, 0), is the linear Dresselhaus [001] term, which dominates in the small-k region. The corresponding SO term is known as the linear Dresselhaus [001] model Hamiltonian.Winkler (2003); Žutić et al. (2004) With larger k the second term in Eq. (4), the k³ term, becomes important. We will come back to this later. For other quantum wells such as [110] and [111], the κ vector given by Eq. (3) can be recast into a form that depends on the growth direction of the 2DES.D’yakonov and Kachorovskii (1986) (See also Ref. Žutić et al., 2004.) When writing the Larmor frequency vector as Ω_R(k) = (2α/ħ)(k_y, −k_x, 0), the linear Rashba model HamiltonianWinkler (2003); Žutić et al. (2004); Bychkov and Rashba (1984) is recovered. Here α is the Rashba SO coupling constant.
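The effective-field geometry of these two couplings can be tabulated directly. A hedged sketch (sign conventions assumed as above, with Rashba field along (k_y, −k_x) and linear Dresselhaus [001] field along (−k_x, k_y), so that the combined splitting is strongest along [-110]; the coupling values are hypothetical):

```python
import math

def effective_field(kx, ky, alpha, beta):
    """In-plane Larmor/effective-field direction for Rashba (alpha) plus
    linear Dresselhaus [001] (beta); the CISP is predicted antiparallel."""
    return (alpha * ky - beta * kx, -alpha * kx + beta * ky)

def splitting(kx, ky, alpha, beta):
    """Magnitude of the field, proportional to the spin splitting."""
    ox, oy = effective_field(kx, ky, alpha, beta)
    return math.hypot(ox, oy)

a, b = 1.0, 0.4                 # hypothetical Rashba/Dresselhaus strengths
s = 1.0 / math.sqrt(2.0)        # unit |k| along the diagonals
assert splitting(-s, s, a, b) > splitting(s, s, a, b)     # [-110] strong, [110] weak
assert math.isclose(splitting(-s, s, a, b), a + b)        # additive along [-110]
assert math.isclose(splitting(s, s, a, b), abs(a - b))    # destructive along [110]
```

The additive/destructive interference along the two diagonals is exactly what the four-terminal numerics in Sec. III display.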
II.2 Time-reversal symmetry

Before deriving the CISP, we provide the following two intrinsic properties of the Hamiltonian (1). First, we show that the SO terms in solids are odd in k due to time-reversal symmetry, which is also remarked in Ref. Winkler, 2003. For spin-1/2 systems subject to Hamiltonian (1), the energy dispersion can be written as E_±(k) = ε(k) ± Δ(k)/2, where ε(k) is the kinetic energy, ± is the spin state label, and Δ(k) is the spin splitting due to SO coupling. In the absence of an external magnetic field, time-reversal symmetry is preserved, resulting in E_±(k) = E_∓(−k), or Δ(−k) = −Δ(k), which implies that a nonvanishing spin splitting is odd in k. Note that Eq. (9) also implies Ω(−k) = −Ω(k), which agrees with our intuition. Apparently, Eq. (10) is obeyed by all the previously reviewed Larmor frequency vectors. Second, we show ⟨k,−|σ|k,−⟩ = −⟨k,+|σ|k,+⟩, where |k,±⟩ is the eigenstate of Hamiltonian (1). We begin with the Schrödinger equation, H|k,±⟩ = E_±(k)|k,±⟩. Comparing Eq. (11) with Eq. (8), we deduce ⟨k,±|Ω(k)·S|k,±⟩ = ±Δ(k)/2, where |k,±⟩ is assumed normalized. This implies Ω(k)·⟨k,+|S|k,+⟩ = −Ω(k)·⟨k,−|S|k,−⟩. Factoring out Ω(k) and canceling on both sides, we arrive at the property (14), ⟨k,−|σ|k,−⟩ = −⟨k,+|σ|k,+⟩. Equation (14) is a general property of Eq. (1) and is valid for systems with dispersions E_±(k) where the spin splitting is not necessarily linear in k. This property (14) will play a tricky role in the coming derivation of the CISP based on statistical mechanics in Sec. II.3. Note that Eq. (14) is also a consequence of time-reversal symmetry (9), as one can easily prove as follows. Using Eq. (12) we rewrite Eq. (9) as Eq. (15). Equation (12) also implies Eq. (16) when one regards −k as the wave vector. In addition, Eq. (9) implies Eq. (17) because of Eq. (18), where Eqs. (16) and (10) are used in (18a) and (18b), respectively. Substituting Eqs. (10) and (17) into Eq. (15), we obtain Eq. (13), and hence the property (14).

II.3 Canonical ensemble average

The canonical ensemble average of an observable A reads ⟨A⟩ = Σ_n ⟨n|A|n⟩ exp(−E_n/k_B T) / Σ_n exp(−E_n/k_B T), where k_B is the Boltzmann constant, T is temperature, n is a quantum number labeling the states, and E_n is the eigenenergy of state |n⟩ solved from Hamiltonian H. Now consider an unpolarized electron ensemble in a 2DES, subject to Hamiltonian (1).
Our main interest here is the CEA of the spin operator S for an ensemble of electrons, subject to an identical wave vector k. By this we mean that the summation in Eq. (19) runs over the spin index only. This gives ⟨S⟩_k = Σ_± ⟨k,±|S|k,±⟩ exp(−E_±(k)/k_B T) / Σ_± exp(−E_±(k)/k_B T). Choosing the basis {|k,+⟩, |k,−⟩} for the trace, one is led to an expression in terms of ⟨k,+|σ|k,+⟩. Using the property (14) and factoring out ⟨k,+|σ|k,+⟩ from the sum, we arrive at the general expression ⟨S⟩_k = −(ħ/2) tanh(Δ(k)/2k_B T) ⟨k,+|σ|k,+⟩ (20). To re-express Eq. (20) in terms of the effective magnetic field B_eff(k), defined by Eq. (21), we rewrite Eq. (12) with S = (ħ/2)σ as Eq. (22). Noting the unit vector of B_eff(k) and Δ(k) = ħ|Ω(k)|, Eq. (22) implies ⟨k,+|σ|k,+⟩ equals the unit vector of B_eff(k), i.e., the direction of the effective magnetic field. Therefore, Eq. (20) can be written as ⟨S⟩_k = −(ħ/2) tanh(Δ(k)/2k_B T) times the unit vector of B_eff(k) (24), which is exactly the analog of the CEA of electron spin in vacuum subject to an applied magnetic field.Sakurai (1994) Equation (24) now has a transparent meaning: In the presence of SO coupling, an ensemble of rest electrons (k = 0) is unpolarized since Δ(0) = 0, while it becomes spin-polarized antiparallel to B_eff(k) when moving along k. This picture is schematically shown in Fig. 1. Moreover, the hyperbolic tangent factor clearly predicts the decrease with temperature T and the increase with Δ(k) in the polarization magnitude, and therefore explains two signatures of the CISP qualitatively: (i) The CISP may persist up to room temperature. Taking m from Ref. Yang et al., 2006, one has . (ii) As a larger bias implies a larger drift wave vector (Ref. Sih et al., 2005), the magnitude of the CISP governed by Δ(k) is supposed to increase with the bias, as is experimentally proven.Kato et al. (2004c)

II.4 Explicit forms of current-induced spin polarization

From Eq. (24), it is now clear that the direction of the CISP is given by the effective magnetic field direction. Alternatively, one can use the direction of the Larmor frequency vector Ω(k) to describe the CISP direction, since Ω(k) and B_eff(k) are, by definition of Eq. (21), collinear. Therefore, the CISP direction in III-V bulk semiconductors is given by Eq. (3). Figure 2: (Color online) Effective magnetic field of a 100--thick [001] InGaAs quantum well with . For 2DES grown along [001] with Dresselhaus terms up to the k³ term, Eq.
(4) describes the effective magnetic field shown in Fig. 2, which simulates a 100--thick InGaAs quantum well with (Ref. Winkler, 2003). The CISP direction is opposite to the effective magnetic field. Note that in Fig. 2, the field distribution near the central region (small k) is dominated by the linear term (6) (cf. the right inset of Fig. 3). In the rest of this paper, we focus on the Rashba and linear Dresselhaus [001] terms. For effects with full SO terms in the Rashba-Dresselhaus systems, see Refs. Winkler, 2003 and Marques et al., 2005. The composite Larmor frequency vector can be obtained by adding Eq. (6) and Eq. (7) together: Ω(k) = (2/ħ)(αk_y − βk_x, −αk_x + βk_y, 0). The spin splitting linear in k takes the form Δ(k) = 2k sqrt(α² + β² − 2αβ sin 2φ), with φ the azimuthal angle of k. Thus the CISP in linear Rashba-Dresselhaus [001] 2DESs is explicitly given by Eq. (24) with this Δ(k) and the direction opposite to Ω(k).

II.5 Remark on effective mass

In general, the inplane effective mass of the electrons is not constant but depends strongly on k for realistic semiconductor systems. However, in the long-wavelength limit (k_F a much less than 1, with k_F and a the Fermi wave vector and lattice constant, respectively), the effective mass, defined by the inverse of the second derivative of the dispersion with respect to k, is a constant due to the parabolic nature of ε(k) solved from Hamiltonian (1). In this limit, even though the band structure can be anisotropic due to the interplay between different SO couplings (such as Rashba plus linear Dresselhaus [001]), the effective mass remains constant. In the present analysis, we work in this limit, within which the Hamiltonian (1) is valid. Interestingly, our CEA formulas such as Eq. (24) do not contain the effective mass dependence. Away from the small-k region, the energy dispersion is no longer parabolic, and the free-electron-like model Hamiltonian (1), and hence the follow-up derivations, fail. Analysis of the CISP phenomenon then requires other formalisms such as the LKF, to be employed in the coming section. Nevertheless, we will not look further into the influence of the k-dependent effective mass on the CISP.
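The two-level canonical average behind Eq. (24) can be checked in a few lines. A dimensionless sketch (ħ = k_B = 1; the tanh form follows the text, the numerical values are mine): the eigenstates of H = (1/2)σ·Ω have energies ±|Ω|/2 with spin projection ±1/2 along the field direction, and Boltzmann-weighting them reproduces the −(1/2)tanh factor.

```python
import math

def cea_spin(omega, T):
    """Canonical ensemble average of the spin component along the field:
    states aligned (E = +omega/2) and anti-aligned (E = -omega/2) with
    weights exp(-E/T); equals -(1/2) * tanh(omega / 2T)."""
    x = omega / (2.0 * T)
    w_aligned, w_anti = math.exp(-x), math.exp(+x)
    return 0.5 * (w_aligned - w_anti) / (w_aligned + w_anti)

for omega, T in [(1.0, 0.5), (2.0, 3.0), (0.1, 300.0)]:
    assert math.isclose(cea_spin(omega, T), -0.5 * math.tanh(omega / (2.0 * T)))

# Sanity checks of the two qualitative signatures: the polarization is
# antiparallel to the field, shrinks with T, and grows with the splitting.
assert cea_spin(1.0, 0.5) < 0.0
assert abs(cea_spin(1.0, 5.0)) < abs(cea_spin(1.0, 0.5))
assert abs(cea_spin(2.0, 1.0)) > abs(cea_spin(1.0, 1.0))
```

At k = 0 the splitting vanishes and the average is zero, which is the "rest ensemble is unpolarized" statement of the statistical picture.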
III. Numerical results: Landauer-Keldysh formalism

To inspect the validity of the previously proposed statistical picture and further examine the CISP, we now perform local spin-density calculations in finite-size 2DESs attached to four normal metal leads by using the LKF.Datta (1995); Nikolic et al. (2005, 2006) Figure 3: (Color online) Spin orientation in a channel with . Channels with the linear Rashba model are considered in (a) and (c) while those with the linear Dresselhaus [001] model are in (b) and (d). The direction of each sharp triangle represents the inplane spin vector of the local spin density. The size of the triangle depicts its magnitude. Effective magnetic fields due to, individually, the Rashba and the Dresselhaus [001] terms are shown in the insets.

III.1 Local spin densities in extreme Rashba and Dresselhaus [001] cases

As a preliminary demonstration, Fig. 3 shows the position-dependent in-plane spin vectors, with the local spin densities calculated by the LKF. Here we adopt the finite difference method and discretize the channel, made of an InGaAs/InAlAs heterostructureNitta et al. (1997) grown along [001], into a square lattice with lattice spacing a. Accordingly, this gives the kinetic and Rashba hopping strengths and m, respectively. For the Dresselhaus SO coupling, we again assume the quantum well thickness and , and use to give [see Eq. (5)] , resulting in the Dresselhaus hopping strength m. Let us first consider the extreme cases, pure Rashba and pure Dresselhaus [001] channels. As expected, the spin vectors are mostly oriented antiparallel to the effective magnetic field, which is, for the given current direction, pointing one way in the Rashba channel [Figs. 3(a)/(c) with low/high bias], and another in the Dresselhaus [001] channel [Figs. 3(b)/(d) with low/high bias]. Here (and hereafter) the low and high biases mean m and , respectively, and we label the applied potential energy of as “”, and as “” on each lead.
Note that the spin distribution, modulated by the charge distribution, forms standing waves in the low bias regime since the electrons behave quantum mechanically, while that in the high bias regime, i.e., the nonequilibrium transport regime, decays with distance.Liu et al. (2007) The polarization in the latter (high bias) is about two orders of magnitude stronger than in the former (low bias). Figure 4: (Color online) Local spin densities by LKF in a square Rashba-Dresselhaus [001] channel with (a) left to right, (b) left-bottom to right-top, (c) bottom to top, and (d) right-bottom to left-top bias configurations. Bias regime belongs to low: m. Inset: vs. in the - coordinate.

III.2 Consistency check: analytical canonical ensemble average vs numerical Landauer-Keldysh formalism

We now consider a four-lead square channel with coexisting Rashba and linear Dresselhaus [001] terms. The coupling constants are set identical to those introduced previously. Removing the four corner sites to avoid short circuits, the sample size is . To see if the CISP direction follows the opposite effective magnetic field for all directions, we change the current direction by applying different bias configurations. As shown in Figs. 4(a), (b), (c), and (d), the electrons flow from left to right, from left bottom to right top, from bottom to top, and from right bottom to left top, respectively. Other current directions are done in a similar way, but not explicitly shown here. Averaging the in-plane local spin densities over all the lattice points within the conducting sample, we compare the result in the inset of Fig. 4 with the effective magnetic field B_eff, where the Larmor frequency is given by Eq. (25). As expected from our statistical picture introduced in Sec. II.3, the arrows are all opposite to B_eff for all directions, despite some indistinguishably tiny differences. Note that the additive and destructive effects between the two SO terms are also observed at [1̄10] and [110], respectively.
Along [1̄10] ([110]), the strongest (weakest) spin splitting, and hence CISP magnitude [Eq. (24)], occurs. Note that here we apply low bias. With high bias the results also agree perfectly with the CEA picture (not shown).

III.3 Bias dependence of current-induced spin polarization

Having shown that the statistical argument indeed works well, we next examine the bias dependence of the CISP, which is expected to be a proportional relation, as has been experimentally observed.Kato et al. (2004c) We return to Rashba channels. Spin densities, i.e., the total spin divided by the total area of the conducting channel, obtained via a sum over lattice sites, with N_s being the number of total lattice sites in the conducting sample, are reported in Fig. 5 for sample widths . Sample length is set . Consistent with the experiment, the calculated spin densities increase with bias. In addition, linear response within the low-bias regime is clearly observed in all cases. Nonlinearity enters when the bias grows so that nonequilibrium statistics dominates. Note that the calculated local spin density distribution satisfies the usual SHE symmetry,Nikolic et al. (2006) so that we have and . Figure 5: (Color online) Bias dependence of spin densities induced by the CISP in Rashba 2DESs.

III.4 Comparison of current-induced spin polarization in semiconductor heterostructures and metal/semimetal surface states

Next we extend the calculation of the spin density due to the CISP to other materials. In addition to semiconductor heterostructures, 2DESs have been shown to exist also on metal surfaces supported by the surface states.Davison and Stȩślicka (1992) Due to the loss of inversion symmetry, the metallic surfaces may exhibit Rashba spin splitting as well.LaShell et al. (1996); Bihlmayer et al. (2006) Here we consider three samples: an InGaAs/InAlAs heterostructure, the Au(111) surface, and the Bi(111) surface. We arrange the lead configuration of all three samples as those in Fig. 3 and apply high bias.
The sizes we choose here are such as to maintain roughly the same lattice site number and keep the same length-width ratio. Note that realistic lattice structures are considered for the surface states [hexagonal for Au(111) and honeycomb for the Bi(111) bilayer], while the finite-difference method based on the long-wavelength limit is adopted for the heterostructure. For introductory reviews of those surfaces, see Ref. Reinert, 2003 for noble metal surfaces, including gold, and Ref. Hofmann, 2006 for bismuth surfaces.

Table 1: Summary of the effective mass ratio, Rashba constant, Fermi energy (relative to the band bottom), and the calculated spin density due to the CISP, for a set of materials. (References: Nitta et al., 1997; LaShell et al., 1996; Koroteev et al., 2004.)

Band parameters extracted from experiments and the spin densities calculated by the LKF are summarized in Table 1. Clearly, the CISP increases with the Rashba parameter. This suggests that the CISP (and actually also the SHE) should be more observable on these surfaces. The recently discovered Bi/Ag(111) surface alloy that exhibits a giant spin splittingAst et al. (2007) is even more promising, but we do not perform calculations for this interesting material here.

III.5 Application of current-induced spin polarization: generation of an in-plane spin-Hall pattern

Finally, we propose an experimental setup, as an application of the CISP, to generate an antisymmetric edge spin accumulation in the inplane component, i.e., an inplane spin-Hall pattern. For simplicity, let us consider a Rashba 2DES with the parameters for the LKF calculation taken the same as those in Fig. 5. Sample size is about . We apply high bias and arrange a special bias configuration. Figure 6: (Color online) Mapping of the (a) local charge current density and local spin densities (b), (c), and (d) in a four-terminal square channel with a special bias arrangement. Unit in (b)–(c) is . As shown in Fig.
6(a), unpolarized electron currents are injected from the left and right leads and are guided to the top and bottom ones. Under such a design, the spin accumulation in the out-of-plane component exhibits merely a vague pattern [see Fig. 6(b)]. Contrarily, the inplane pattern shows not only antisymmetric edge accumulation in the channel but also a magnitude much stronger than the out-of-plane component [see Fig. 6(c)]. This pattern is reasonably expected from the CISP due to the opposite charge flows along the top and bottom edges, and hence resembles an inplane SHE. Figure 6(d) does not show a rotated pattern, due to the nonequilibrium transport. In the nonequilibrium transport regime, a distance apart from the source leads is required to induce the CISP, and therefore no significant polarization is observed near the source (left and right) leads. This can be seen by comparing the local spin density distributions in the low-bias and high-bias regimes shown in Figs. 3(a) and 3(b), and Figs. 3(c) and 3(d), respectively.

IV. Summary

In conclusion, we have rederived the CISP due to SO coupling in the absence of impurity scattering, based on equilibrium statistical mechanics. Correspondingly, a simple picture (Fig. 1), valid for both bulk structures and 2DESs, is proposed to help qualitatively explain the CISP. Our explanation for the spin polarization of the moving electron ensemble in a solid due to the effective magnetic field is an exact analog of that for the rest electron ensemble in vacuum due to an external magnetic field.Sakurai (1994) The picture is further tested to work well even in the regime of nonequilibrium transport in finite-size samples, by employing the numerical LKF.
Extending the spin density calculation from the semiconductor heterostructure to metal and semimetal surface states, our calculation confirms that the polarization increases with the SO coupling strength, and hence suggests that the CISP should be more observable on metal and semimetal surfaces with stronger Rashba SO coupling.LaShell et al. (1996); Koroteev et al. (2004); Ast et al. (2007) As an application of the CISP, we also suggest an interesting bias configuration for the four-terminal setup to generate an in-plane SHE [Fig. 6(c)]. One of the authors (M.H.L.) thanks S. D. Ganichev and L. E. Golub for stimulating discussions, and L. Ding and G. Bihlmayer for useful information. Financial support from the Republic of China National Science Council, Grant No. 95-2112-M-002-044-MY3, is gratefully acknowledged. • Awschalom et al. (2002) D. D. Awschalom, D. Loss, and N. Samarth, eds., Semiconductor Spintronics and Quantum Computation (Springer, Berlin, 2002). • Kato et al. (2004a) Y. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Nature 427, 50 (2004a). • Winkler (2003) R. Winkler, Spin-Orbit Coupling Effects in Two-Dimensional Electron and Hole Systems (Springer, Berlin, 2003). • D’yakonov and Perel’ (1971) M. I. D’yakonov and V. Perel’, Sov. Phys. JETP 33, 467 (1971). • Hirsch (1999) J. E. Hirsch, Phys. Rev. Lett. 83, 1834 (1999). • Murakami et al. (2003) S. Murakami, N. Nagaosa, and S. C. Zhang, Science 301, 1348 (2003). • Sinova et al. (2004) J. Sinova, D. Culcer, Q. Niu, N. A. Sinitsyn, T. Jungwirth, and A. H. MacDonald, Phys. Rev. Lett. 92, 126603 (2004). • Kato et al. (2004b) Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Science 306, 1910 (2004b). • Wunderlich et al. (2005) J. Wunderlich, B. Kaestner, J. Sinova, and T. Jungwirth, Phys. Rev. Lett. 94, 047204 (2005). • Edelstein (1990) V. M. Edelstein, Solid State Commun. 73, 233 (1990). • Aronov and Lyanda-Geller (1989) A. G. Aronov and Y. B. Lyanda-Geller, JETP Lett. 50, 431 (1989).
• Kato et al. (2004c) Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, Phys. Rev. Lett. 93, 176601 (2004c). • Sih et al. (2005) V. Sih, R. C. Myers, Y. K. Kato, W. H. Lau, A. C. Gossard, and D. D. Awschalom, Nat. Phys. 1, 31 (2005). • Yang et al. (2006) C. L. Yang, H. T. He, L. Ding, L. J. Cui, Y. P. Zeng, J. N. Wang, and W. K. Ge, Phys. Rev. Lett. 96, 186605 (2006). • Stern et al. (2006) N. P. Stern, S. Ghosh, G. Xiang, M. Zhu, N. Samarth, and D. D. Awschalom, Phys. Rev. Lett. 97, 126603 (2006). • Datta (1995) S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, 1995). • Nikolic et al. (2005) B. K. Nikolic, S. Souma, L. P. Zarbo, and J. Sinova, Phys. Rev. Lett. 95, 046601 (2005). • Nikolic et al. (2006) B. K. Nikolic, L. P. Zarbo, and S. Souma, Phys. Rev. B 73, 075303 (2006). • Liu et al. (2007) M.-H. Liu, G. Bihlmayer, S. Blügel, and C.-R. Chang, Phys. Rev. B 76, 121301(R) (2007). • Žutić et al. (2004) I. Žutić, J. Fabian, and S. Das Sarma, Rev. Mod. Phys. 76, 323 (2004). • Dresselhaus (1955) G. Dresselhaus, Phys. Rev. 100, 580 (1955). • D’yakonov and Perel (1971) M. I. D’yakonov and V. I. Perel, Sov. Phys. JETP 33, 1053 (1971). • Knap et al. (1996) W. Knap, C. Skierbiszewski, A. Zduniak, E. Litwin-Staszewska, D. Bertho, F. Kobbi, J. L. Robert, G. E. Pikus, F. G. Pikus, S. V. Iordanskii, et al., Phys. Rev. B 53, 3912 (1996). • D’yakonov and Kachorovskii (1986) M. I. D’yakonov and V. Y. Kachorovskii, Sov. Phys. Semicond. 20, 110 (1986). • Bychkov and Rashba (1984) Y. A. Bychkov and E. I. Rashba, JETP Lett. 39, 78 (1984). • Sakurai (1994) J. J. Sakurai, Modern Quantum Mechanics (Addison-Wesley, New York, 1994), revised ed. • Marques et al. (2005) G. E. Marques, A. C. R. Bittencourt, C. F. Destefani, and S. E. Ulloa, Phys. Rev. B 72, 045313 (2005). • Nitta et al. (1997) J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Phys. Rev. Lett. 78, 1335 (1997). • Davison and Stȩślicka (1992) S. G. Davison and M.
Stȩślicka, Basic Theory of Surface States (Oxford University Press, Oxford, U.K., 1992). • LaShell et al. (1996) S. LaShell, B. A. McDougall, and E. Jensen, Phys. Rev. Lett. 77, 3419 (1996). • Bihlmayer et al. (2006) G. Bihlmayer, Y. M. Koroteev, P. M. Echenique, E. V. Chulkov, and S. Blügel, Surf. Sci. 600, 3888 (2006). • Reinert (2003) F. Reinert, J. Phys.: Condens. Matter 15, S693 (2003). • Hofmann (2006) P. Hofmann, Prog. Surf. Sci. 81, 191 (2006). • Koroteev et al. (2004) Y. M. Koroteev, G. Bihlmayer, J. E. Gayone, E. V. Chulkov, S. Blügel, P. M. Echenique, and P. Hofmann, Phys. Rev. Lett. 93, 046403 (2004). • Ast et al. (2007) C. R. Ast, J. Henk, A. Ernst, L. Moreschini, M. C. Falub, D. Pacile, P. Bruno, K. Kern, and M. Grioni, Phys. Rev. Lett. 98, 186807 (2007).
The Feynman Lectures on Physics, New Millennium Edition 21 The Schrödinger Equation in a Classical Context: A Seminar on Superconductivity (There was no summary for this lecture.) 21–1 Schrödinger’s equation in a magnetic field This lecture is only for entertainment. I would like to give the lecture in a somewhat different style—just to see how it works out. It’s not a part of the course—in the sense that it is not supposed to be a last minute effort to teach you something new. But, rather, I imagine that I’m giving a seminar or research report on the subject to a more advanced audience, to people who have already been educated in quantum mechanics. The main difference between a seminar and a regular lecture is that the seminar speaker does not carry out all the steps, or all the algebra. He says: “If you do such and such, this is what comes out,” instead of showing all of the details. So in this lecture I’ll describe the ideas all the way along but just give you the results of the computations. You should realize that you’re not supposed to understand everything immediately, but believe (more or less) that things would come out if you went through the steps. All that aside, this is a subject I want to talk about. It is recent and modern and would be a perfectly legitimate talk to give at a research seminar. My subject is the Schrödinger equation in a classical setting—the case of superconductivity. Ordinarily, the wave function which appears in the Schrödinger equation applies to only one or two particles. And the wave function itself is not something that has a classical meaning—unlike the electric field, or the vector potential, or things of that kind. The wave function for a single particle is a “field”—in the sense that it is a function of position—but it does not generally have a classical significance.
Nevertheless, there are some situations in which a quantum mechanical wave function does have classical significance, and they are the ones I would like to take up. The peculiar quantum mechanical behavior of matter on a small scale doesn’t usually make itself felt on a large scale except in the standard way that it produces Newton’s laws—the laws of the so-called classical mechanics. But there are certain situations in which the peculiarities of quantum mechanics can come out in a special way on a large scale. At low temperatures, when the energy of a system has been reduced very, very low, instead of a large number of states being involved, only a very, very small number of states near the ground state are involved. Under those circumstances the quantum mechanical character of that ground state can appear on a macroscopic scale. It is the purpose of this lecture to show a connection between quantum mechanics and large-scale effects—not the usual discussion of the way that quantum mechanics reproduces Newtonian mechanics on the average, but a special situation in which quantum mechanics will produce its own characteristic effects on a large or “macroscopic” scale. I will begin by reminding you of some of the properties of the Schrödinger equation.1 I want to describe the behavior of a particle in a magnetic field using the Schrödinger equation, because the superconductive phenomena are involved with magnetic fields. An external magnetic field is described by a vector potential, and the problem is: what are the laws of quantum mechanics in a vector potential? The principle that describes the behavior of quantum mechanics in a vector potential is very simple. 
The amplitude that a particle goes from one place to another along a certain route when there’s a field present is the same as the amplitude that it would go along the same route when there’s no field, multiplied by the exponential of the line integral of the vector potential, times the electric charge divided by Planck’s constant2 (see Fig. 21–1): \begin{equation} \label{Eq:III:21:1} \braket{b}{a}_{\text{in $\FLPA$}}=\braket{b}{a}_{A=0}\cdot \exp\biggl[\frac{iq}{\hbar}\int_a^b\FLPA\cdot d\FLPs\biggr]. \end{equation} It is a basic statement of quantum mechanics. Fig. 21–1.The amplitude to go from $a$ to $b$ along the path $\Gamma$ is proportional to $\exp\bigl[(iq/\hbar)\int_a^b\FigA\cdot d\Figs\bigr]$. Now without the vector potential the Schrödinger equation of a charged particle (nonrelativistic, no spin) is \begin{equation} \label{Eq:III:21:2} -\frac{\hbar}{i}\,\ddp{\psi}{t}=\Hcalop\psi= \frac{1}{2m}\biggl(\frac{\hbar}{i}\,\FLPnabla\biggr)\cdot \biggl(\frac{\hbar}{i}\,\FLPnabla\biggr)\psi+q\phi\psi, \end{equation} where $\phi$ is the electric potential so that $q\phi$ is the potential energy.3 Equation (21.1) is equivalent to the statement that in a magnetic field the gradients in the Hamiltonian are replaced in each case by the gradient minus $q\FLPA$, so that Eq.
(21.2) becomes \begin{equation} \label{Eq:III:21:3} -\frac{\hbar}{i}\,\ddp{\psi}{t}=\Hcalop\psi= \frac{1}{2m}\biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\cdot \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi+q\phi\psi. \end{equation} This is the Schrödinger equation for a particle with charge $q$ moving in an electromagnetic field $\FLPA,\phi$ (nonrelativistic, no spin). To show that this is true I’d like to illustrate by a simple example in which instead of having a continuous situation we have a line of atoms along the $x$-axis with the spacing $b$ and we have an amplitude $iK\!/\hbar$ per unit time for an electron to jump from one atom to another when there is no field.4 Now according to Eq. (21.1) if there’s a vector potential in the $x$-direction $A_x(x,t)$, the amplitude to jump will be altered from what it was before by a factor $\exp[(iq/\hbar)\,A_xb]$, the exponent being $iq/\hbar$ times the vector potential integrated from one atom to the next. For simplicity we will write $(q/\hbar)A_x\equiv f(x)$, since $A_x$ will, in general, depend on $x$. If the amplitude to find the electron at the atom “$n$” located at $x$ is called $C(x)\equiv C_n$, then the rate of change of that amplitude is given by the following equation: \begin{align} -\frac{\hbar}{i}\ddp{}{t}C(x)=E_0C(x)&\!-\!Ke^{-ibf(x+b/2)}C(x\!+\!b)\notag\\ \label{Eq:III:21:4} &-\!Ke^{+ibf(x-b/2)}C(x\!-\!b). \end{align} There are three pieces. First, there’s some energy $E_0$ if the electron is located at $x$. As usual, that gives the term $E_0C(x)$. Next, there is the term $-KC(x+b)$, which is the amplitude for the electron to have jumped backwards one step from atom “$n+1$,” located at $x+b$.
However, in doing so in a vector potential, the phase of the amplitude must be shifted according to the rule in Eq. (21.1). If $A_x$ is not changing appreciably in one atomic spacing, the integral can be written as just the value of $A_x$ at the midpoint, times the spacing $b$. So $(iq/\hbar)$ times the integral is just $ibf(x+b/2)$. Since the electron is jumping backwards, I showed this phase shift with a minus sign. That gives the second piece. In the same manner there’s a certain amplitude to have jumped from the other side, but this time we need the vector potential at a distance $(b/2)$ on the other side of $x$, times the distance $b$. That gives the third piece. The sum gives the equation for the amplitude to be at $x$ in a vector potential. Now we know that if the function $C(x)$ is smooth enough (we take the long wavelength limit), and if we let the atoms get closer together, Eq. (21.4) will approach the behavior of an electron in free space. So the next step is to expand the right-hand side of (21.4) in powers of $b$, assuming $b$ is very small. For example, if $b$ is zero the right-hand side is just $(E_0-2K)C(x)$, so in the zeroth approximation the energy is $E_0-2K$. Next come the terms in $b$. But because the two exponentials have opposite signs, only even powers of $b$ remain. So if you make a Taylor expansion of $C(x)$, of $f(x)$, and of the exponentials, and then collect the terms in $b^2$, you get \begin{align} -\frac{\hbar}{i}\,\ddp{C(x)}{t}&=E_0C(x)-2KC(x)\notag\\ \label{Eq:III:21:5} &\quad-Kb^2\{C''(x)-2if(x)C'(x)-if'(x)C(x)-f^2(x)C(x)\}. \end{align} (The “primes” mean differentiation with respect to $x$.) Now this horrible combination of things looks quite complicated.
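If you would rather not push the Taylor expansion through by hand, the bookkeeping can be checked symbolically. The following sketch (an editorial aside in Python with sympy, not part of the lecture) expands the right-hand side of Eq. (21.4) to order $b^2$ and confirms that it reproduces Eq. (21.5), including the cancellation of the odd powers of $b$:

```python
import sympy as sp

# Symbols and undetermined functions of Eq. (21.4):
# f(x) = (q/hbar) A_x, and C(x) is the amplitude at the atom located at x.
x, b = sp.symbols('x b', real=True)
E0, K = sp.symbols('E_0 K', real=True)
C = sp.Function('C')
f = sp.Function('f')

# Right-hand side of Eq. (21.4): on-site energy plus the two hopping
# terms, each carrying the phase factor prescribed by Eq. (21.1).
rhs = (E0*C(x)
       - K*sp.exp(-sp.I*b*f(x + b/2))*C(x + b)
       - K*sp.exp(+sp.I*b*f(x - b/2))*C(x - b))

# Expand in powers of b up to b**2.
expansion = sp.series(rhs, b, 0, 3).removeO().doit().expand()

# The claimed result, Eq. (21.5).
Cp, Cpp = sp.diff(C(x), x), sp.diff(C(x), x, 2)
expected = ((E0 - 2*K)*C(x)
            - K*b**2*(Cpp - 2*sp.I*f(x)*Cp
                      - sp.I*sp.diff(f(x), x)*C(x) - f(x)**2*C(x)))

residual = sp.simplify(expansion - expected.expand())
assert residual == 0                          # agrees term by term
assert sp.simplify(expansion.coeff(b, 1)) == 0  # odd powers cancel
```

The same kind of check, applied to the factored bracket, verifies the rearrangement into Eq. (21.6).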
But mathematically it’s exactly the same as \begin{equation} \label{Eq:III:21:6} -\frac{\hbar}{i}\,\ddp{C(x)}{t}=(E_0-2K)C(x)-Kb^2 \biggl[\ddp{}{x}-if(x)\biggr] \biggl[\ddp{}{x}-if(x)\biggr]C(x). \end{equation} The second bracket operating on $C(x)$ gives $C'(x)$ minus $if(x)C(x)$. The first bracket operating on these two terms gives the $C''$ term and terms in the first derivative of $f(x)$ and the first derivative of $C(x)$. Now remember that the solutions for zero magnetic field5 represent a particle with an effective mass $m_{\text{eff}}$ given by \begin{equation*} Kb^2=\frac{\hbar^2}{2m_{\text{eff}}}. \end{equation*} If you then set $E_0=2K$, and put back $f(x)=(q/\hbar)A_x$, you can easily check that Eq. (21.6) is the same as the first part of Eq. (21.3). (The origin of the potential energy term is well known, so I haven’t bothered to include it in this discussion.) The proposition of Eq. (21.1) that the vector potential changes all the amplitudes by the exponential factor is the same as the rule that the momentum operator, $(\hbar/i)\FLPnabla$, gets replaced by \begin{equation*} \frac{\hbar}{i}\,\FLPnabla-q\FLPA, \end{equation*} as you see in the Schrödinger equation of (21.3). 21–2 The equation of continuity for probabilities Now I turn to a second point. An important part of the Schrödinger equation for a single particle is the idea that the probability to find the particle at a position is given by the absolute square of the wave function. It is also characteristic of the quantum mechanics that probability is conserved in a local sense. When the probability of finding the electron somewhere decreases, while the probability of the electron being elsewhere increases (keeping the total probability unchanged), something must be going on in between.
In other words, the electron has a continuity in the sense that if the probability decreases at one place and builds up at another place, there must be some kind of flow between. If you put a wall, for example, in the way, it will have an influence and the probabilities will not be the same. So the conservation of probability alone is not the complete statement of the conservation law, just as the conservation of energy alone is not as deep and important as the local conservation of energy.6 If energy is disappearing, there must be a flow of energy to correspond. In the same way, we would like to find a “current” of probability such that if there is any change in the probability density (the probability of being found in a unit volume), it can be considered as coming from an inflow or an outflow due to some current. This current would be a vector which could be interpreted this way—the $x$-component would be the net probability per second and per unit area that a particle passes in the $x$-direction across a plane parallel to the $yz$-plane. Passage toward $+x$ is considered a positive flow, and passage in the opposite direction, a negative flow. Is there such a current? Well, you know that the probability density $P(\FLPr,t)$ is given in terms of the wave function by \begin{equation} \label{Eq:III:21:7} P(\FLPr,t)=\psi\cconj(\FLPr,t)\psi(\FLPr,t). \end{equation} I am asking: Is there a current $\FLPJ$ such that \begin{equation} \label{Eq:III:21:8} \ddp{P}{t}=-\FLPdiv{\FLPJ}? \end{equation} If I take the time derivative of Eq. (21.7), I get two terms: \begin{equation} \label{Eq:III:21:9} \ddp{P}{t}=\psi\cconj\,\ddp{\psi}{t}+\psi\,\ddp{\psi\cconj}{t}. \end{equation} Now use the Schrödinger equation—Eq. (21.3)—for $\ddpl{\psi}{t}$; and take the complex conjugate of it to get $\ddpl{\psi\cconj}{t}$—each $i$ gets its sign reversed. 
You get \begin{equation} \begin{aligned} \ddp{P}{t}&=-\frac{i}{\hbar}\biggl[\psi\cconj\,\frac{1}{2m} \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\cdot \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi+ q\phi\psi\cconj\psi\\[.5ex] &\hphantom{{}={}}-\psi\,\frac{1}{2m} \biggl(-\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\cdot \biggl(-\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi\cconj -q\phi\psi\psi\cconj\biggr]. \end{aligned} \label{Eq:III:21:10} \end{equation} The potential terms and a lot of other stuff cancel out. And it turns out that what is left can indeed be written as a perfect divergence. The whole equation is equivalent to \begin{equation} \label{Eq:III:21:11} \ddp{P}{t}=-\FLPdiv{\biggl\{ \frac{1}{2m}\,\psi\cconj \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi+ \frac{1}{2m}\,\psi \biggl(-\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi\cconj \biggr\}}. \end{equation} It is really not as complicated as it seems. It is a symmetrical combination of $\psi\cconj$ times a certain operation on $\psi$, plus $\psi$ times the complex conjugate operation on $\psi\cconj$. It is some quantity plus its own complex conjugate, so the whole thing is real—as it ought to be.
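The cancellation that collapses Eq. (21.10) into the divergence of Eq. (21.11) is easy to state but tedious to verify by hand. Here is a one-dimensional symbolic check (an editorial aside in Python with sympy; $\psi$ and $\psi\cconj$ are treated as independent functions, and $A$ and $\phi$ as real functions of $x$) that $\ddpl{P}{t}+\ddpl{J}{x}=0$ follows from the Schrödinger equation (21.3):

```python
import sympy as sp

x = sp.Symbol('x', real=True)
hbar, m, q = sp.symbols('hbar m q', positive=True)
A = sp.Function('A', real=True)(x)      # vector potential (x-component)
phi = sp.Function('phi', real=True)(x)  # electric potential
psi = sp.Function('psi')(x)             # wave function at one instant
psic = sp.Function('psic')(x)           # its conjugate, kept independent

def H(sign, w):
    """Apply (sign*i*hbar*d/dx - q*A) twice, then add q*phi*w.

    sign=-1 gives the Hamiltonian of Eq. (21.3); sign=+1 its conjugate."""
    D = lambda u: sign*sp.I*hbar*sp.diff(u, x) - q*A*u
    return D(D(w))/(2*m) + q*phi*w

# The Schrodinger equation and its conjugate fix the time derivatives:
#   i*hbar dpsi/dt = H psi,   -i*hbar dpsic/dt = H(conj) psic
dpsi_dt = H(-1, psi)/(sp.I*hbar)
dpsic_dt = H(+1, psic)/(-sp.I*hbar)

# P = psic*psi, Eq. (21.7); its time derivative, Eq. (21.9):
dP_dt = psic*dpsi_dt + psi*dpsic_dt

# The current inside the braces of Eq. (21.11), in one dimension:
J = (psic*(-sp.I*hbar*sp.diff(psi, x) - q*A*psi)
     + psi*(sp.I*hbar*sp.diff(psic, x) - q*A*psic))/(2*m)

# Continuity, Eq. (21.8): dP/dt + dJ/dx vanishes identically.
assert sp.simplify(dP_dt + sp.diff(J, x)) == 0
```

Everything except the perfect divergence cancels algebraically, with no assumption beyond $A$ and $\phi$ being real.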
The operation can be remembered this way: it is just the momentum operator $\Pcalvecop$ minus $q\FLPA$. I could write the current in Eq. (21.8) as \begin{equation} \label{Eq:III:21:12} \FLPJ=\frac{1}{2}\biggl\{ \psi\cconj\biggl[\frac{\Pcalvecop-q\FLPA}{m}\biggr]\psi+ \psi\biggl[\frac{\Pcalvecop-q\FLPA}{m}\biggr]\cconj\psi\cconj \biggr\}. \end{equation} There is then a current $\FLPJ$ which completes Eq. (21.8). Equation (21.11) shows that the probability is conserved locally. If a particle disappears from one region it cannot appear in another without something going on in between. Imagine that the first region is surrounded by a closed surface far enough out that there is zero probability to find the electron at the surface. The total probability to find the electron somewhere inside the surface is the volume integral of $P$. But according to Gauss’s theorem the volume integral of the divergence of $\FLPJ$ is equal to the surface integral of its normal component. If $\psi$ is zero at the surface, Eq. (21.12) says that $\FLPJ$ is zero, so the total probability to find the particle inside can’t change. Only if some of the probability approaches the boundary can some of it leak out. We can say that it only gets out by moving through the surface—and that is local conservation. 21–3 Two kinds of momentum The equation for the current is rather interesting, and sometimes causes a certain amount of worry. You would think the current would be something like the density of particles times the velocity. The density should be something like $\psi\psi\cconj$, which is o.k. And each term in Eq. (21.12) looks like the typical form for the average value of the operator \begin{equation} \label{Eq:III:21:13} \frac{\Pcalvecop-q\FLPA}{m}, \end{equation} so maybe we should think of it as the velocity of flow.
It looks as though we have two suggestions for relations of velocity to momentum, because we would also think that momentum divided by mass, $\Pcalvecop/m$, should be a velocity. The two possibilities differ by the vector potential. It happens that these two possibilities were also discovered in classical physics, when it was found that momentum could be defined in two ways.7 One of them is called “kinematic momentum,” but for absolute clarity I will in this lecture call it the “$mv$-momentum.” This is the momentum obtained by multiplying mass by velocity. The other is a more mathematical, more abstract momentum, sometimes called the “dynamical momentum,” which I’ll call “$p$-momentum.” The two possibilities are \begin{equation} \label{Eq:III:21:14} \text{$mv$-momentum}=m\FLPv, \end{equation} \begin{equation} \label{Eq:III:21:15} \text{$p$-momentum}=m\FLPv + q\FLPA. \end{equation} It turns out that in quantum mechanics with magnetic fields it is the $p$-momentum which is connected to the gradient operator $\Pcalvecop$, so it follows that (21.13) is the operator of a velocity. I’d like to make a brief digression to show you what this is all about—why there must be something like Eq. (21.15) in the quantum mechanics. The wave function changes with time according to the Schrödinger equation in Eq. (21.3). If I would suddenly change the vector potential, the wave function wouldn’t change at the first instant; only its rate of change changes. Now think of what would happen in the following circumstance. Suppose I have a long solenoid, in which I can produce a flux of magnetic field ($\FLPB$-field), as shown in Fig. 21–2. And there is a charged particle sitting nearby. Suppose this flux nearly instantaneously builds up from zero to something. I start with zero vector potential and then I turn on a vector potential. That means that I produce suddenly a circumferential vector potential $\FLPA$. 
You’ll remember that the line integral of $\FLPA$ around a loop is the same as the flux of $\FLPB$ through the loop.8 Now what happens if I suddenly turn on a vector potential? According to the quantum mechanical equation the sudden change of $\FLPA$ does not make a sudden change of $\psi$; the wave function is still the same. So the gradient is also unchanged. Fig. 21–2.The electric field outside a solenoid with an increasing current. But remember what happens electrically when I suddenly turn on a flux. During the short time that the flux is rising, there’s an electric field generated whose line integral is the rate of change of the flux with time: \begin{equation} \label{Eq:III:21:16} \FLPE=-\ddp{\FLPA}{t}. \end{equation} That electric field is enormous if the flux is changing rapidly, and it gives a force on the particle. The force is the charge times the electric field, and so during the build up of the flux the particle obtains a total impulse (that is, a change in $m\FLPv$) equal to $-q\FLPA$. In other words, if you suddenly turn on a vector potential at a charge, this charge immediately picks up an $mv$-momentum equal to $-q\FLPA$. But there is something that isn’t changed immediately and that’s the difference between $m\FLPv$ and $-q\FLPA$. And so the sum $\FLPp=m\FLPv+q\FLPA$ is something which is not changed when you make a sudden change in the vector potential. This quantity $\FLPp$ is what we have called the $p$-momentum and is of importance in classical mechanics in the theory of dynamics, but it also has a direct significance in quantum mechanics. It depends on the character of the wave function, and it is the one to be identified with the operator \begin{equation*} \Pcalvecop=\frac{\hbar}{i}\,\FLPnabla. \end{equation*} 21–4 The meaning of the wave function When Schrödinger first discovered his equation he discovered the conservation law of Eq. (21.8) as a consequence of his equation.
But he imagined incorrectly that $P$ was the electric charge density of the electron and that $\FLPJ$ was the electric current density, so he thought that the electrons interacted with the electromagnetic field through these charges and currents. When he solved his equations for the hydrogen atom and calculated $\psi$, he wasn’t calculating the probability of anything—there were no amplitudes at that time—the interpretation was completely different. The atomic nucleus was stationary but there were currents moving around; the charges $P$ and currents $\FLPJ$ would generate electromagnetic fields and the thing would radiate light. He soon found on doing a number of problems that it didn’t work out quite right. It was at this point that Born made an essential contribution to our ideas regarding quantum mechanics. It was Born who correctly (as far as we know) interpreted the $\psi$ of the Schrödinger equation in terms of a probability amplitude—that very difficult idea that the square of the amplitude is not the charge density but is only the probability per unit volume of finding an electron there, and that when you do find the electron some place the entire charge is there. That whole idea is due to Born. The wave function $\psi(\FLPr)$ for an electron in an atom does not, then, describe a smeared-out electron with a smooth charge density. The electron is either here, or there, or somewhere else, but wherever it is, it is a point charge. On the other hand, think of a situation in which there are an enormous number of particles in exactly the same state, a very large number of them with exactly the same wave function. Then what? One of them is here and one of them is there, and the probability of finding any one of them at a given place is proportional to $\psi\psi\cconj$. But since there are so many particles, if I look in any volume $dx\,dy\,dz$ I will generally find a number close to $\psi\psi\cconj\,dx\,dy\,dz$. 
So in a situation in which $\psi$ is the wave function for each of an enormous number of particles which are all in the same state, $\psi\psi\cconj$ can be interpreted as the density of particles. If, under these circumstances, each particle carries the same charge $q$, we can, in fact, go further and interpret $\psi\cconj\psi$ as the density of electricity. Normally, $\psi\psi\cconj$ is given the dimensions of a probability density; $\psi\psi\cconj$ should then be multiplied by $q$ to give the dimensions of a charge density. For our present purposes we can put this constant factor into $\psi$, and take $\psi\psi\cconj$ itself as the electric charge density. With this understanding, $\FLPJ$ (the current of probability I have calculated) becomes directly the electric current density. So in the situation in which we can have very many particles in exactly the same state, there is possible a new physical interpretation of the wave functions. The charge density and the electric current can be calculated directly from the wave functions and the wave functions take on a physical meaning which extends into classical, macroscopic situations. Something similar can happen with neutral particles. When we have the wave function of a single photon, it is the amplitude to find a photon somewhere. Although we haven’t ever written it down there is an equation for the photon wave function analogous to the Schrödinger equation for the electron. The photon equation is just the same as Maxwell’s equations for the electromagnetic field, and the wave function is the same as the vector potential $\FLPA$. The wave function turns out to be just the vector potential. The quantum physics is the same thing as the classical physics because photons are noninteracting Bose particles and many of them can be in the same state—as you know, they like to be in the same state.
The moment that you have billions in the same state (that is, in the same electromagnetic wave), you can measure the wave function, which is the vector potential, directly. Of course, it worked historically the other way. The first observations were on situations with many photons in the same state, and so we were able to discover the correct equation for a single photon by observing directly with our hands on a macroscopic level the nature of the wave function. Now the trouble with the electron is that you cannot put more than one in the same state. Therefore, it was long believed that the wave function of the Schrödinger equation would never have a macroscopic representation analogous to the macroscopic representation of the amplitude for photons. On the other hand, it is now realized that the phenomenon of superconductivity presents us with just this situation. As you know, very many metals become superconducting below a certain temperature9—the temperature is different for different metals. When you reduce the temperature sufficiently the metals conduct electricity without any resistance. This phenomenon has been observed for a very large number of metals but not for all, and the theory of this phenomenon has caused a great deal of difficulty. It took a very long time to understand what was going on inside of superconductors, and I will only describe enough of it for our present purposes. It turns out that due to the interactions of the electrons with the vibrations of the atoms in the lattice, there is a small net effective attraction between the electrons. The result is that the electrons form together, if I may speak very qualitatively and crudely, bound pairs. Now you know that a single electron is a Fermi particle. But a bound pair would act as a Bose particle, because if I exchange both electrons in a pair I change the sign of the wave function twice, and that means that I don’t change anything. A pair is a Bose particle. 
The energy of pairing—that is, the net attraction—is very, very weak. Only a tiny temperature is needed to throw the electrons apart by thermal agitation, and convert them back to “normal” electrons. But when you make the temperature sufficiently low, so that they have to do their very best to get into the absolutely lowest state, then they do collect in pairs. I don’t wish you to imagine that the pairs are really held together very closely like a point particle. As a matter of fact, one of the great difficulties of understanding this phenomenon originally was that that is not the way things are. The two electrons which form the pair are really spread over a considerable distance; and the mean distance between pairs is relatively smaller than the size of a single pair. Several pairs are occupying the same space at the same time. Both the reason why electrons in a metal form pairs and an estimate of the energy given up in forming a pair have been a triumph of recent times. This fundamental point in the theory of superconductivity was first explained in the theory of Bardeen, Cooper, and Schrieffer,10 but that is not the subject of this seminar. We will accept, however, the idea that the electrons do, in some manner or other, work in pairs, that we can think of these pairs as behaving more or less like particles, and that we can therefore talk about the wave function for a “pair.” Now the Schrödinger equation for the pair will be more or less like Eq. (21.3). There will be one difference in that the charge $q$ will be twice the charge of an electron. Also, we don’t know the inertia—or effective mass—for the pair in the crystal lattice, so we don’t know what number to put in for $m$. Nor should we think that if we go to very high frequencies (or short wavelengths), this is exactly the right form, because the kinetic energy that corresponds to very rapidly varying wave functions may be so great as to break up the pairs.
At finite temperatures there are always a few pairs which are broken up according to the usual Boltzmann theory. The probability that a pair is broken is proportional to $\exp(-E_{\text{pair}}/\kappa T)$. The electrons that are not bound in pairs are called “normal” electrons and will move around in the crystal in the ordinary way. I will, however, consider only the situation at essentially zero temperature—or, in any case, I will disregard the complications produced by those electrons which are not in pairs. Since electron pairs are bosons, when there are a lot of them in a given state there is an especially large amplitude for other pairs to go to the same state. So nearly all of the pairs will be locked down at the lowest energy in exactly the same state—it won’t be easy to get one of them into another state. There’s more amplitude to go into the same state than into an unoccupied state by the famous factor $\sqrt{n}$, where $n-1$ is the occupancy of the lowest state. So we would expect all the pairs to be moving in the same state. What then will our theory look like? I’ll call $\psi$ the wave function of a pair in the lowest energy state. However, since $\psi\psi\cconj$ is going to be proportional to the charge density $\rho$, I can just as well write $\psi$ as the square root of the charge density times some phase factor: \begin{equation} \label{Eq:III:21:17} \psi(\FLPr)=\rho^{1/2}(\FLPr)e^{i\theta(\FLPr)}, \end{equation} where $\rho$ and $\theta$ are real functions of $\FLPr$. (Any complex function can, of course, be written this way.) It’s clear what we mean when we talk about the charge density, but what is the physical meaning of the phase $\theta$ of the wave function? Well, let’s see what happens if we substitute $\psi(\FLPr)$ into Eq. (21.12), and express the current density in terms of these new variables $\rho$ and $\theta$. 
It’s just a change of variables and I won’t go through all the algebra, but it comes out \begin{equation} \label{Eq:III:21:18} \FLPJ=\frac{\hbar}{m}\biggl( \FLPgrad{\theta}-\frac{q}{\hbar}\,\FLPA\biggr)\rho. \end{equation} Since both the current density and the charge density have a direct physical meaning for the superconducting electron gas, both $\rho$ and $\theta$ are real things. The phase is just as observable as $\rho$; it is a piece of the current density $\FLPJ$. The absolute phase is not observable, but if the gradient of the phase is known everywhere, the phase is known except for a constant. You can define the phase at one point, and then the phase everywhere is determined. Incidentally, the equation for the current can be put in a nicer form if you notice that the current density $\FLPJ$ is in fact the charge density times the velocity of motion of the fluid of electrons, or $\rho\FLPv$. Equation (21.18) is then equivalent to \begin{equation} \label{Eq:III:21:19} m\FLPv=\hbar\,\FLPgrad{\theta}-q\FLPA. \end{equation} Notice that there are two pieces in the $mv$-momentum; one is a contribution from the vector potential, and the other, a contribution from the behavior of the wave function. In other words, the quantity $\hbar\,\FLPgrad{\theta}$ is just what we have called the $p$-momentum. 21–6 The Meissner effect Now we can describe some of the phenomena of superconductivity. First, there is no electrical resistance. There’s no resistance because all the electrons are collectively in the same state. In the ordinary flow of current you knock one electron or the other out of the regular flow, gradually deteriorating the general momentum. But here to get one electron away from what all the others are doing is very hard because of the tendency of all Bose particles to go in the same state. A current once started, just keeps on going forever.
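The change of variables behind Eq. (21.18) is easy to spot-check numerically. The sketch below (units with $\hbar=m=1$; the particular $\rho$, $\theta$, $A$, and the value of $q$ are arbitrary test choices, not from the text) compares the standard quantum-mechanical current density, $J=(\hbar/2mi)(\psi\cconj\psi'-\psi\psi\cconj{}')-(q/m)A\,\psi\psi\cconj$, against Eq. (21.18) at one point in one dimension:

```python
import cmath
import math

hbar = m = 1.0
q = 2.0                                          # pair charge (any value works here)
rho   = lambda x: 1.0 + 0.3 * math.sin(x)        # an arbitrary smooth density
theta = lambda x: 0.5 * x + 0.2 * math.cos(x)    # an arbitrary smooth phase
A     = lambda x: 0.1 * x                        # a one-dimensional vector potential
psi   = lambda x: math.sqrt(rho(x)) * cmath.exp(1j * theta(x))

def d(f, x, h=1e-6):
    # central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
# Direct quantum-mechanical current: (hbar/2mi)(psi* psi' - psi psi*') - (q/m) A |psi|^2
J_direct = (hbar / (2j * m) * (psi(x).conjugate() * d(psi, x)
                               - psi(x) * d(psi, x).conjugate())).real \
           - (q / m) * A(x) * abs(psi(x)) ** 2
# Eq. (21.18): J = (hbar/m)(theta' - (q/hbar) A) rho
J_eq18 = (hbar / m) * (d(theta, x) - (q / hbar) * A(x)) * rho(x)
print(J_direct, J_eq18)
```

The two numbers agree to the accuracy of the finite difference, which is the content of Eq. (21.18).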
It’s also easy to understand that if you have a piece of metal in the superconducting state and turn on a magnetic field which isn’t too strong (we won’t go into the details of how strong), the magnetic field can’t penetrate the metal. If, as you build up the magnetic field, any of it were to build up inside the metal, there would be a rate of change of flux which would produce an electric field, and an electric field would immediately generate a current which, by Lenz’s law, would oppose the flux. Since all the electrons will move together, an infinitesimal electric field will generate enough current to oppose completely any applied magnetic field. So if you turn the field on after you’ve cooled a metal to the superconducting state, it will be excluded. Even more interesting is a related phenomenon discovered experimentally by Meissner.11 If you have a piece of the metal at a high temperature (so that it is a normal conductor) and establish a magnetic field through it, and then you lower the temperature below the critical temperature (where the metal becomes a superconductor), the field is expelled. In other words, it starts up its own current—and in just the right amount to push the field out. We can see the reason for that in the equations, and I’d like to explain how. Suppose that we take a piece of superconducting material which is in one lump. Then in a steady situation of any kind the divergence of the current must be zero because there’s no place for it to go. It is convenient to choose to make the divergence of $\FLPA$ equal to zero. (I should explain why choosing this convention doesn’t mean any loss of generality, but I don’t want to take the time.) Taking the divergence of Eq. (21.18), then gives that the Laplacian of $\theta$ is equal to zero. One moment. What about the variation of $\rho$? I forgot to mention an important point. There is a background of positive charge in this metal due to the atomic ions of the lattice. 
If the charge density $\rho$ is uniform there is no net charge and no electric field. If there would be any accumulation of electrons in one region the charge wouldn’t be neutralized and there would be a terrific repulsion pushing the electrons apart.12 So in ordinary circumstances the charge density of the electrons in the superconductor is almost perfectly uniform—I can take $\rho$ as a constant. Now the only way that $\nabla^2\theta$ can be zero everywhere inside the lump of metal is for $\theta$ to be a constant. And that means that there is no contribution to $\FLPJ$ from $p$-momentum. Equation (21.18) then says that the current is proportional to $\rho$ times $\FLPA$. So everywhere in a lump of superconducting material the current is necessarily proportional to the vector potential: \begin{equation} \label{Eq:III:21:20} \FLPJ=-\rho\,\frac{q}{m}\,\FLPA. \end{equation} Since $\rho$ and $q$ have the same (negative) sign, and since $\rho$ is a constant, I can set $-\rho q/m=-(\text{some positive constant})$; then \begin{equation} \label{Eq:III:21:21} \FLPJ=-(\text{some positive constant})\FLPA. \end{equation} This equation was originally proposed by London and London13 to explain the experimental observations of superconductivity—long before the quantum mechanical origin of the effect was understood. Now we can use Eq. (21.20) in the equations of electromagnetism to solve for the fields. The vector potential is related to the current density by \begin{equation} \label{Eq:III:21:22} \nabla^2\FLPA=-\frac{1}{\epsO c^2}\,\FLPJ. \end{equation} If I use Eq. (21.21) for $\FLPJ$, I have \begin{equation} \label{Eq:III:21:23} \nabla^2\FLPA=\lambda^2\FLPA, \end{equation} where $\lambda^2$ is just a new constant; \begin{equation} \label{Eq:III:21:24} \lambda^2=\rho\,\frac{q}{\epsO mc^2}. \end{equation} We can now try to solve this equation for $\FLPA$ and see what happens in detail. For example, in one dimension Eq. 
(21.23) has exponential solutions of the form $e^{-\lambda x}$ and $e^{+\lambda x}$. These solutions mean that the vector potential must decrease exponentially as you go from the surface into the material. (It can’t increase because there would be a blow up.) If the piece of metal is very large compared to $1/\lambda$, the field only penetrates to a thin layer at the surface—a layer about $1/\lambda$ in thickness. The entire remainder of the interior is free of field, as sketched in Fig. 21–3. This is the explanation of the Meissner effect. Fig. 21–3.(a) A superconducting cylinder in a magnetic field; (b) the magnetic field $B$ as a function of $r$. How big is the distance $1/\lambda$? Well, remember that $r_0$, the “electromagnetic radius” of the electron ($2.8\times10^{-13}$ cm), is given by \begin{equation*} mc^2=\frac{q_e^2}{4\pi\epsO r_0}. \end{equation*} Also, remember that $q$ in Eq. (21.24) is twice the charge of an electron, so \begin{equation*} \frac{q}{\epsO mc^2}=\frac{8\pi r_0}{q_e}. \end{equation*} Writing $\rho$ as $q_eN$, where $N$ is the number of electrons per cubic centimeter, we have \begin{equation} \label{Eq:III:21:25} \lambda^2=8\pi Nr_0. \end{equation} For a metal such as lead there are about $3\times10^{22}$ atoms per cm$^3$, so if each one contributed only one conduction electron, $1/\lambda$ would be about $2\times10^{-6}$ cm. That gives you the order of magnitude. 21–7 Flux quantization Fig. 21–4.A ring in a magnetic field: (a) in the normal state; (b) in the superconducting state; (c) after the external field is removed. The London equation (21.21) was proposed to account for the observed facts of superconductivity including the Meissner effect. In recent times, however, there have been some even more dramatic predictions. One prediction made by London was so peculiar that nobody paid much attention to it until recently. I will now discuss it.
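Before going on, the order-of-magnitude estimate of the penetration depth in Eq. (21.25) is a one-liner to verify. A sketch, using the numbers for lead quoted above:

```python
import math

r0 = 2.8e-13                      # "electromagnetic radius" of the electron, cm
N  = 3e22                         # one conduction electron per lead atom, cm^-3
lam_sq = 8 * math.pi * N * r0     # Eq. (21.25): lambda^2, in cm^-2
depth = 1 / math.sqrt(lam_sq)     # penetration depth 1/lambda, cm
print(depth)                      # about 2e-6 cm, as stated in the text
```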
This time instead of taking a single lump, suppose we take a ring whose thickness is large compared to $1/\lambda$, and try to see what would happen if we started with a magnetic field through the ring, then cooled it to the superconducting state, and afterward removed the original source of $\FLPB$. The sequence of events is sketched in Fig. 21–4. In the normal state there will be a field in the body of the ring as sketched in part (a) of the figure. When the ring is made superconducting, the field is forced outside of the material (as we have just seen). There will then be some flux through the hole of the ring as sketched in part (b). If the external field is now removed, the lines of field going through the hole are “trapped” as shown in part (c). The flux $\Phi$ through the center can’t decrease because $\ddpl{\Phi}{t}$ must be equal to the line integral of $\FLPE$ around the ring, which is zero in a superconductor. As the external field is removed a super current starts flowing around the ring to keep the flux through the ring a constant. (It’s the old eddy-current idea, only with zero resistance.) These currents will, however, all flow near the surface (down to a depth $1/\lambda$), as can be shown by the same kind of analysis that I made for the solid block. These currents can keep the magnetic field out of the body of the ring, and produce the permanently trapped magnetic field as well. Now, however, there is an essential difference, and our equations predict a surprising effect. The argument I made above that $\theta$ must be a constant in a solid block does not apply for a ring, as you can see from the following arguments. Fig. 21–5.The curve $\Gamma$ inside a superconducting ring. Well inside the body of the ring the current density $\FLPJ$ is zero; so Eq. (21.18) gives \begin{equation} \label{Eq:III:21:26} \hbar\,\FLPgrad{\theta}=q\FLPA. 
\end{equation} Now consider what we get if we take the line integral of $\FLPA$ around a curve $\Gamma$, which goes around the ring near the center of its cross-section so that it never gets near the surface, as drawn in Fig. 21–5. From Eq. (21.26), \begin{equation} \label{Eq:III:21:27} \hbar\oint\FLPgrad{\theta}\cdot d\FLPs=q\oint\FLPA\cdot d\FLPs. \end{equation} Now you know that the line integral of $\FLPA$ around any loop is equal to the flux of $\FLPB$ through the loop \begin{equation*} \oint\FLPA\cdot d\FLPs=\Phi. \end{equation*} Equation (21.27) then becomes \begin{equation} \label{Eq:III:21:28} \oint\FLPgrad{\theta}\cdot d\FLPs=\frac{q}{\hbar}\,\Phi. \end{equation} The line integral of a gradient from one point to another (say from point $1$ to point $2$) is the difference of the values of the function at the two points. Namely, \begin{equation*} \int_1^2\FLPgrad{\theta}\cdot d\FLPs=\theta_2-\theta_1. \end{equation*} If we let the two end points $1$ and $2$ come together to make a closed loop you might at first think that $\theta_2$ would equal $\theta_1$, so that the integral in Eq. (21.28) would be zero. That would be true for a closed loop in a simply-connected piece of superconductor, but it is not necessarily true for a ring-shaped piece. The only physical requirement we can make is that there can be only one value of the wave function for each point. Whatever $\theta$ does as you go around the ring, when you get back to the starting point the $\theta$ you get must give the same value for the wave function \begin{equation*} \psi=\sqrt{\rho}e^{i\theta}. \end{equation*} This will happen if $\theta$ changes by $2\pi n$, where $n$ is any integer. So if we make one complete turn around the ring the left-hand side of Eq. (21.27) must be $\hbar\cdot2\pi n$. Using Eq. (21.28), I get that \begin{equation} \label{Eq:III:21:29} 2\pi n\hbar=q\Phi. \end{equation} The trapped flux must always be an integer times $2\pi\hbar/q$! 
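It is worth putting numbers into Eq. (21.29): the basic flux unit is $2\pi\hbar/q$. The sketch below evaluates it in Gaussian (CGS) units, where the unit carries an extra factor of $c$ (a units convention on my part, not a term in the text's formula), both for a charge $q_e$ and for a charge $2q_e$:

```python
import math

hbar = 1.0546e-27          # erg s
c    = 2.9979e10           # cm/s
q_e  = 4.8032e-10          # electron charge, esu

# Basic flux unit 2*pi*hbar/q of Eq. (21.29), in gauss cm^2 (Gaussian units):
unit_for = lambda q: 2 * math.pi * hbar * c / q
print(unit_for(q_e))       # ~4.1e-7 gauss cm^2  (for a charge q_e)
print(unit_for(2 * q_e))   # ~2.1e-7 gauss cm^2  (for a charge 2 q_e)
```

As the text goes on to discuss, which of these two numbers nature actually picks depends on what charge $q$ belongs in Eq. (21.29).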
If you would think of the ring as a classical object with an ideally perfect (that is, infinite) conductivity, you would think that whatever flux was initially found through it would just stay there—any amount of flux at all could be trapped. But the quantum-mechanical theory of superconductivity says that the flux can be zero, or $2\pi\hbar/q$, or $4\pi\hbar/q$, or $6\pi\hbar/q$, and so on, but no value in between. It must be a multiple of a basic quantum mechanical unit. London14 predicted that the flux trapped by a superconducting ring would be quantized and said that the possible values of the flux would be given by Eq. (21.29) with $q$ equal to the electronic charge. According to London the basic unit of flux should be $2\pi\hbar/q_e$, which is about $4\times10^{-7}$ $\text{gauss}\cdot\text{cm}^2$. To visualize such a flux, think of a tiny cylinder a tenth of a millimeter in diameter; the magnetic field inside it when it contains this amount of flux is about one percent of the earth’s magnetic field. It should be possible to observe such a flux by a sensitive magnetic measurement. In 1961 such a quantized flux was looked for and found by Deaver and Fairbank15 at Stanford University and at about the same time by Doll and Näbauer16 in Germany. In the experiment of Deaver and Fairbank, a tiny cylinder of superconductor was made by electroplating a thin layer of tin on a one-centimeter length of No. 56 ($1.3\times10^{-3}$ cm diameter) copper wire. The tin becomes superconducting below $3.8^\circ$K while the copper remains a normal metal. The wire was put in a small controlled magnetic field, and the temperature reduced until the tin became superconducting. Then the external source of field was removed. You would expect this to generate a current by Lenz’s law so that the flux inside would not change. The little cylinder should now have magnetic moment proportional to the flux inside. 
The magnetic moment was measured by jiggling the wire up and down (like the needle on a sewing machine, but at the rate of $100$ cycles per second) inside a pair of little coils at the ends of the tin cylinder. The induced voltage in the coils was then a measure of the magnetic moment. When the experiment was done by Deaver and Fairbank, they found that the flux was quantized, but that the basic unit was only one-half as large as London had predicted. Doll and Näbauer got the same result. At first this was quite mysterious,17 but we now understand why it should be so. According to the Bardeen, Cooper, and Schrieffer theory of superconductivity, the $q$ which appears in Eq. (21.29) is the charge of a pair of electrons and so is equal to $2q_e$. The basic flux unit is \begin{equation} \label{Eq:III:21:30} \Phi_0=\frac{\pi\hbar}{q_e}\approx2\times10^{-7}\text{ gauss}\cdot\text{cm}^2 \end{equation} or one-half the amount predicted by London. Everything now fits together, and the measurements show the existence of the predicted purely quantum-mechanical effect on a large scale. 21–8 The dynamics of superconductivity The Meissner effect and the flux quantization are two confirmations of our general ideas. Just for the sake of completeness I would like to show you what the complete equations of a superconducting fluid would be from this point of view—it is rather interesting. Up to this point I have only put the expression for $\psi$ into equations for charge density and current. If I put it into the complete Schrödinger equation I get equations for $\rho$ and $\theta$. It should be interesting to see what develops, because here we have a “fluid” of electron pairs with a charge density $\rho$ and a mysterious $\theta$—we can try to see what kind of equations we get for such a “fluid”! So we substitute the wave function of Eq. (21.17) into the Schrödinger equation (21.3) and remember that $\rho$ and $\theta$ are real functions of $x$, $y$, $z$, and $t$.
If we separate real and imaginary parts we obtain then two equations. To write them in a shorter form I will—following Eq. (21.19)—write \begin{equation} \label{Eq:III:21:31} \frac{\hbar}{m}\,\FLPgrad{\theta}-\frac{q}{m}\,\FLPA=\FLPv. \end{equation} One of the equations I get is then \begin{equation} \label{Eq:III:21:32} \ddp{\rho}{t}=-\FLPdiv{\rho\FLPv}. \end{equation} Since $\rho\FLPv$ is just $\FLPJ$, this is just the continuity equation once more. The other equation I obtain tells how $\theta$ varies; it is \begin{equation} \label{Eq:III:21:33} \hbar\,\ddp{\theta}{t}=-\frac{m}{2}\,v^2-q\phi+ \frac{\hbar^2}{2m}\biggl\{ \frac{1}{\sqrt{\rho}}\,\nabla^2(\sqrt{\rho})\biggr\}. \end{equation} Those who are thoroughly familiar with hydrodynamics (of which I’m sure few of you are) will recognize this as the equation of motion for an electrically charged fluid if we identify $\hbar\theta$ as the “velocity potential”—except that the last term, which should be the energy of compression of the fluid, has a rather strange dependence on the density $\rho$. In any case, the equation says that the rate of change of the quantity $\hbar\theta$ is given by a kinetic energy term, $-\tfrac{1}{2}mv^2$, plus a potential energy term, $-q\phi$, with an additional term, containing the factor $\hbar^2$, which we could call a “quantum mechanical energy.” We have seen that inside a superconductor $\rho$ is kept very uniform by the electrostatic forces, so this term can almost certainly be neglected in every practical application provided we have only one superconducting region. If we have a boundary between two superconductors (or other circumstances in which the value of $\rho$ may change rapidly) this term can become important. For those who are not so familiar with the equations of hydrodynamics, I can rewrite Eq. (21.33) in a form that makes the physics more apparent by using Eq. (21.31) to express $\theta$ in terms of $\FLPv$. Taking the gradient of the whole of Eq.
(21.33) and expressing $\FLPgrad{\theta}$ in terms of $\FLPA$ and $\FLPv$ by using (21.31), I get \begin{align} \ddp{\FLPv}{t}&=\frac{q}{m}\biggl(-\FLPgrad{\phi}-\ddp{\FLPA}{t}\biggr) -\FLPv\times(\FLPcurl{\FLPv})\notag\\[1ex] \label{Eq:III:21:34} &-(\FLPv\cdot\FLPnabla)\FLPv + \FLPgrad{\frac{\hbar^2}{2m^2} \biggl(\frac{1}{\sqrt{\rho}}\,\nabla^2\sqrt{\rho}\biggr)}. \end{align} What does this equation mean? First, remember that \begin{equation} \label{Eq:III:21:35} -\FLPgrad{\phi}-\ddp{\FLPA}{t}=\FLPE. \end{equation} Next, notice that if I take the curl of Eq. (21.31), I get \begin{equation} \label{Eq:III:21:36} \FLPcurl{\FLPv}=-\frac{q}{m}\,\FLPcurl{\FLPA}, \end{equation} since the curl of a gradient is always zero. But $\FLPcurl{\FLPA}$ is the magnetic field $\FLPB$, so the first two terms can be written as \begin{equation*} \frac{q}{m}(\FLPE+\FLPv\times\FLPB). \end{equation*} Finally, you should understand that $\ddpl{\FLPv}{t}$ stands for the rate of change of the velocity of the fluid at a point. If you concentrate on a particular particle, its acceleration is the total derivative of $\FLPv$ (or, as it is sometimes called in fluid dynamics, the “comoving acceleration”), which is related to $\ddpl{\FLPv}{t}$ by18 \begin{equation} \label{Eq:III:21:37} \left.\ddt{\FLPv}{t}\right|_{\text{comoving}}\kern{-2ex}= \ddp{\FLPv}{t}+(\FLPv\cdot\FLPnabla)\FLPv. \end{equation} This extra term also appears as the third term on the right side of Eq. (21.34). Taking it to the left side, I can write Eq.
(21.34) in the following way: \begin{equation} \label{Eq:III:21:38} \left.m\ddt{\FLPv}{t}\right|_{\text{comoving}}\kern{-3.5ex}= q(\FLPE\!+\!\FLPv\!\times\!\FLPB)\!+\!\FLPgrad{\frac{\hbar^2}{2m} \!\biggl(\!\frac{1}{\sqrt{\rho}}\nabla^2\!\!\sqrt{\rho}\!\biggr)}. \end{equation} We also have from Eq. (21.36) that \begin{equation} \label{Eq:III:21:39} \FLPcurl{\FLPv}=-\frac{q}{m}\,\FLPB. \end{equation} These two equations are the equations of motion of the superconducting electron fluid. The first equation is just Newton’s law for a charged fluid in an electromagnetic field. It says that the acceleration of each particle of the fluid whose charge is $q$ comes from the ordinary Lorentz force $q(\FLPE+\FLPv\times\FLPB)$ plus an additional force, which is the gradient of some mystical quantum mechanical potential—a force which is not very big except at the junction between two superconductors. The second equation says that the fluid is “ideal”—the curl of $\FLPv$ has zero divergence (the divergence of $\FLPB$ is always zero). That means that the velocity can be expressed in terms of velocity potential. Ordinarily one writes that $\FLPcurl{\FLPv}=\FLPzero$ for an ideal fluid, but for an ideal charged fluid in a magnetic field, this gets modified to Eq. (21.39). So, Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields. (The charges and currents you use to get the fields must, of course, include the ones from the superconductor as well as from the external sources.) Incidentally, I believe that Eq. (21.38) is not quite correct, but ought to have an additional term involving the density. 
This new term does not depend on quantum mechanics, but comes from the ordinary energy associated with variations of density. Just as in an ordinary fluid there should be a potential energy density proportional to the square of the deviation of $\rho$ from $\rho_0$, the undisturbed density (which is, here, also equal to the charge density of the crystal lattice). Since there will be forces proportional to the gradient of this energy, there should be another term in Eq. (21.38) of the form: $(\text{const})\,\FLPgrad{(\rho-\rho_0)^2}$. This term did not appear from the analysis because it comes from the interactions between particles, which I neglected in using an independent-particle approximation. It is, however, just the force I referred to when I made the qualitative statement that electrostatic forces would tend to keep $\rho$ nearly constant inside a superconductor. 21–9 The Josephson junction Fig. 21–6.Two superconductors separated by a thin insulator. I would like to discuss next a very interesting situation that was noticed by Josephson19 while analyzing what might happen at a junction between two superconductors. Suppose we have two superconductors which are connected by a thin layer of insulating material as in Fig. 21–6. Such an arrangement is now called a “Josephson junction.” If the insulating layer is thick, the electrons can’t get through; but if the layer is thin enough, there can be an appreciable quantum mechanical amplitude for electrons to jump across. This is just another example of the quantum-mechanical penetration of a barrier. Josephson analyzed this situation and discovered that a number of strange phenomena should occur. In order to analyze such a junction I’ll call the amplitude to find an electron on one side, $\psi_1$, and the amplitude to find it on the other, $\psi_2$.
In the superconducting state the wave function $\psi_1$ is the common wave function of all the electrons on one side, and $\psi_2$ is the corresponding function on the other side. I could do this problem for different kinds of superconductors, but let us take a very simple situation in which the material is the same on both sides so that the junction is symmetrical and simple. Also, for a moment let there be no magnetic field. Then the two amplitudes should be related in the following way: \begin{align*} i\hbar\,\ddp{\psi_1}{t}&=U_1\psi_1+K\psi_2,\\[1ex] i\hbar\,\ddp{\psi_2}{t}&=U_2\psi_2+K\psi_1. \end{align*} The constant $K$ is a characteristic of the junction. If $K$ were zero, these two equations would just describe the lowest energy state—with energy $U$—of each superconductor. But there is coupling between the two sides by the amplitude $K$ that there may be leakage from one side to the other. (It is just the “flip-flop” amplitude of a two-state system.) If the two sides are identical, $U_1$ would equal $U_2$ and I could just subtract them off. But now suppose that we connect the two superconducting regions to the two terminals of a battery so that there is a potential difference $V$ across the junction. Then $U_1-U_2=qV$. I can, for convenience, define the zero of energy to be halfway between, then the two equations are \begin{equation} \begin{aligned} i\hbar\,\ddp{\psi_1}{t}&=+\frac{qV}{2}\,\psi_1+K\psi_2,\\[1ex] i\hbar\,\ddp{\psi_2}{t}&=-\frac{qV}{2}\,\psi_2+K\psi_1. \end{aligned} \label{Eq:III:21:40} \end{equation} These are the standard equations for two quantum mechanical states coupled together. This time, let’s analyze these equations in another way. 
Let’s make the substitutions \begin{equation} \begin{aligned} \psi_1&=\sqrt{\rho_1}e^{i\theta_1},\\[1ex] \psi_2&=\sqrt{\rho_2}e^{i\theta_2}, \end{aligned} \label{Eq:III:21:41} \end{equation} where $\theta_1$ and $\theta_2$ are the phases on the two sides of the junction and $\rho_1$ and $\rho_2$ are the density of electrons at those two points. Remember that in actual practice $\rho_1$ and $\rho_2$ are almost exactly the same and are equal to $\rho_0$, the normal density of electrons in the superconducting material. Now if you substitute these equations for $\psi_1$ and $\psi_2$ into (21.40), you get four equations by equating the real and imaginary parts in each case. Letting $(\theta_2-\theta_1)=\delta$, for short, the result is \begin{align} \label{Eq:III:21:42} &\begin{aligned} \dot{\rho}_1&=+\frac{2}{\hbar}\,K\sqrt{\rho_2\rho_1}\sin\delta,\\[1.5ex] \dot{\rho}_2&=-\frac{2}{\hbar}\,K\sqrt{\rho_2\rho_1}\sin\delta, \end{aligned}\\[3ex] &\begin{aligned} \dot{\theta}_1&=-\frac{K}{\hbar}\sqrt{\frac{\rho_2}{\rho_1}}\cos\delta- \frac{qV}{2\hbar},\\[1.5ex] \dot{\theta}_2&=-\frac{K}{\hbar}\sqrt{\frac{\rho_1}{\rho_2}}\cos\delta+ \frac{qV}{2\hbar}. \end{aligned} \label{Eq:III:21:43} \end{align} The first two equations say that $\dot{\rho}_1=-\dot{\rho}_2$. “But,” you say, “they must both be zero if $\rho_1$ and $\rho_2$ are both constant and equal to $\rho_0$.” Not quite. These equations are not the whole story. They say what $\dot{\rho}_1$ and $\dot{\rho}_2$ would be if there were no extra electric forces due to an unbalance between the electron fluid and the background of positive ions. They tell how the densities would start to change, and therefore describe the kind of current that would begin to flow. This current from side $1$ to side $2$ would be just $\dot{\rho}_1$ (or $-\dot{\rho}_2$), or \begin{equation} \label{Eq:III:21:44} J=\frac{2K}{\hbar}\sqrt{\rho_1\rho_2}\sin\delta.
\end{equation} Such a current would soon charge up side $2$, except that we have forgotten that the two sides are connected by wires to the battery. The current that flows will not charge up region $2$ (or discharge region $1$) because currents will flow to keep the potential constant. These currents from the battery have not been included in our equations. When they are included, $\rho_1$ and $\rho_2$ do not in fact change, but the current across the junction is still given by Eq. (21.44). Since $\rho_1$ and $\rho_2$ do remain constant and equal to $\rho_0$, let’s set $2K\rho_0/\hbar=J_0$, and write \begin{equation} \label{Eq:III:21:45} J=J_0\sin\delta. \end{equation} $J_0$, like $K$, is then a number which is a characteristic of the particular junction. The other pair of equations (21.43) tells us about $\theta_1$ and $\theta_2$. We are interested in the difference $\delta=\theta_2-\theta_1$ to use Eq. (21.45); what we get is \begin{equation} \label{Eq:III:21:46} \dot{\delta}=\dot{\theta}_2-\dot{\theta}_1=\frac{qV}{\hbar}. \end{equation} That means that we can write \begin{equation} \label{Eq:III:21:47} \delta(t)=\delta_0+\frac{q}{\hbar}\int V(t)\,dt, \end{equation} where $\delta_0$ is the value of $\delta$ at $t=0$. Remember also that $q$ is the charge of a pair, namely, $q=2q_e$. In Eqs. (21.45) and (21.47) we have an important result, the general theory of the Josephson junction. Now what are the consequences? First, put on a dc voltage. If you put on a dc voltage, $V_0$, the argument of the sine becomes $(\delta_0+(q/\hbar)V_0t)$. Since $\hbar$ is a small number (compared to ordinary voltage and times), the sine oscillates rather rapidly and the net current is nothing. (In practice, since the temperature is not zero, you would get a small current due to the conduction by “normal” electrons.) On the other hand if you have zero voltage across the junction, you can get a current! 
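To see just how rapidly the sine oscillates for a dc voltage, put numbers into Eq. (21.47). A sketch in SI units (the one-microvolt figure is my own choice of an "ordinary" small voltage, not a number from the text); the phase advances at the angular rate $qV_0/\hbar$ with $q=2q_e$, so the current oscillates at the frequency $2q_eV_0/h$:

```python
q_e = 1.602e-19        # charge of the electron, C
h   = 6.626e-34        # Planck's constant, J s
V0  = 1e-6             # a mere microvolt across the junction
f = 2 * q_e * V0 / h   # oscillation frequency of sin(delta), Hz  (q = 2 q_e)
print(f)               # roughly 5e8 Hz -- the current averages to zero
```

Even a microvolt makes the current alternate hundreds of millions of times a second, which is why no net dc current is seen.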
With no voltage the current can be any amount between $+J_0$ and $-J_0$ (depending on the value of $\delta_0$). But try to put a voltage across it and the current goes to zero. This strange behavior has recently been observed experimentally.20 There is another way of getting a current—by applying a voltage at a very high frequency in addition to a dc voltage. Let \begin{equation*} V=V_0+v\cos\omega t, \end{equation*} where $v\ll V_0$. Then $\delta(t)$ is \begin{equation*} \delta_0+\frac{q}{\hbar}\,V_0t+\frac{q}{\hbar}\,\frac{v}{\omega}\sin\omega t. \end{equation*} Now for $\Delta x$ small, \begin{equation*} \sin\,(x+\Delta x)\approx\sin x+\Delta x\cos x. \end{equation*} Using this approximation for $\sin\delta$, I get \begin{equation*} J=\!J_0\Bigl[\sin\Bigl(\!\delta_0\!+\!\frac{q}{\hbar}V_0t\!\Bigr)\!+\! \frac{q}{\hbar}\frac{v}{\omega}\sin\omega t \cos\Bigl(\!\delta_0\!+\!\frac{q}{\hbar}V_0t\!\Bigr)\Bigr]. \end{equation*} The first term is zero on the average, but the second term is not if \begin{equation*} \omega=\frac{q}{\hbar}\,V_0. \end{equation*} There should be a current if the ac voltage has just this frequency. Shapiro21 claims to have observed such a resonance effect. If you look up papers on the subject you will find that they often write the formula for the current as \begin{equation} \label{Eq:III:21:48} J=J_0\sin\biggl(\delta_0+\frac{2q_e}{\hbar}\int\FLPA\cdot d\FLPs\biggr), \end{equation} where the integral is to be taken across the junction. The reason for this is that when there’s a vector potential across the junction the flip-flop amplitude is modified in phase in the way that we explained earlier. If you chase that extra phase through, it comes out as given above. Fig. 21–7.Two Josephson junctions in parallel. Finally, I would like to describe a very dramatic and interesting experiment which has recently been made on the interference of the currents from each of two junctions. 
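The resonance can also be seen by brute force: average $J=J_0\sin\delta(t)$, with $\delta(t)$ from Eq. (21.47), over a long time, with and without the frequency condition. A sketch in units $\hbar=q=J_0=1$ (the particular values of $V_0$, $v$, $\delta_0$, and the averaging time are arbitrary choices of mine):

```python
import math

def avg_current(w, V0=5.0, v=0.2, delta0=1.0, T=2000.0, n=400000):
    # time-average of sin(delta(t)) for V = V0 + v cos(w t), Eq. (21.47)
    dt = T / n
    total = 0.0
    for i in range(n):
        t = i * dt
        total += math.sin(delta0 + V0 * t + (v / w) * math.sin(w * t)) * dt
    return total / T

on_resonance  = avg_current(w=5.0)   # w = q V0 / hbar in these units
off_resonance = avg_current(w=3.0)   # off resonance
print(on_resonance, off_resonance)   # the dc current survives only on resonance
```

Off resonance the average is essentially zero; at $\omega=qV_0/\hbar$ a dc component of order $J_0(v/\omega)\sin\delta_0/2$ remains, just as the linearized formula above says.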
In quantum mechanics we’re used to the interference between amplitudes from two different slits. Now we’re going to do the interference between two junctions caused by the difference in the phase of the arrival of the currents through two different paths. In Fig. 21–7, I show two different junctions, “a” and “b”, connected in parallel. The ends, $P$ and $Q$, are connected to our electrical instruments which measure any current flow. The external current, $J_{\text{total}}$, will be the sum of the currents through the two junctions. Let $J_{\text{a}}$ and $J_{\text{b}}$ be the currents through the two junctions, and let their phases be $\delta_{\text{a}}$ and $\delta_{\text{b}}$. Now the phase difference of the wave functions between $P$ and $Q$ must be the same whether you go on one route or the other. Along the route through junction “a”, the phase difference between $P$ and $Q$ is $\delta_{\text{a}}$ plus the line integral of the vector potential along the upper route: \begin{equation} \label{Eq:III:21:49} \Delta\text{Phase}_{P\to Q}=\delta_{\text{a}}+ \frac{2q_e}{\hbar}\int_{\text{upper}}\kern{-3ex}\FLPA\cdot d\FLPs. \end{equation} Why? Because the phase $\theta$ is related to $\FLPA$ by Eq. (21.26). If you integrate that equation along some path, the left-hand side gives the phase change, which is then just proportional to the line integral of $\FLPA$, as we have written here. The phase change along the lower route can be written similarly \begin{equation} \label{Eq:III:21:50} \Delta\text{Phase}_{P\to Q}=\delta_{\text{b}}+ \frac{2q_e}{\hbar}\int_{\text{lower}}\kern{-3ex}\FLPA\cdot d\FLPs. \end{equation} These two must be equal; and if I subtract them I get that the difference of the deltas must be the line integral of $\FLPA$ around the circuit: \begin{equation*} \delta_{\text{b}}-\delta_{\text{a}}= \frac{2q_e}{\hbar}\oint_\Gamma\FLPA\cdot d\FLPs. \end{equation*} Here the integral is around the closed loop $\Gamma$ of Fig. 
21–7 which circles through both junctions. The integral over $\FLPA$ is the magnetic flux $\Phi$ through the loop. So the two $\delta$’s are going to differ by $2q_e/\hbar$ times the magnetic flux $\Phi$ which passes between the two branches of the circuit: \begin{equation} \label{Eq:III:21:51} \delta_{\text{b}}-\delta_{\text{a}}=\frac{2q_e}{\hbar}\,\Phi. \end{equation} I can control this phase difference by changing the magnetic field on the circuit, so I can adjust the differences in phases and see whether or not the total current that flows through the two junctions shows any interference of the two parts. The total current will be the sum of $J_{\text{a}}$ and $J_{\text{b}}$. For convenience, I will write \begin{equation*} \delta_{\text{a}}=\delta_0-\frac{q_e}{\hbar}\,\Phi,\quad \delta_{\text{b}}=\delta_0+\frac{q_e}{\hbar}\,\Phi. \end{equation*} Then, \begin{align} J_{\text{total}} &=J_0\biggl\{\!\sin\biggl(\! \delta_0\!-\!\frac{q_e}{\hbar}\Phi\!\biggr)\!+\sin\biggl(\! \delta_0\!+\!\frac{q_e}{\hbar}\,\Phi\!\biggr)\!\biggr\}\notag\\[1.5ex] \label{Eq:III:21:52} &=2J_0\sin\delta_0\cos\frac{q_e\Phi}{\hbar}. \end{align} Now we don’t know anything about $\delta_0$, and nature can adjust that anyway she wants depending on the circumstances. In particular, it will depend on the external voltage we apply to the junction. No matter what we do, however, $\sin\delta_0$ can never get bigger than $1$. So the maximum current for any given $\Phi$ is given by \begin{equation*} J_{\text{max}}=2J_0\left\lvert \cos\frac{q_e}{\hbar}\,\Phi\right\rvert. \end{equation*} This maximum current will vary with $\Phi$ and will itself have maxima whenever \begin{equation*} \Phi=n\,\frac{\pi\hbar}{q_e}, \end{equation*} with $n$ some integer. That is to say that the current takes on its maximum values where the flux linkage has just those quantized values we found in Eq. (21.30)! 
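The interference formula can be checked numerically: the maximum current $J_{\text{max}}=2J_0\lvert\cos(q_e\Phi/\hbar)\rvert$ should peak at $\Phi=n\pi\hbar/q_e$ and vanish halfway between. A minimal sketch, with SI values for $\hbar$ and $q_e$ and $J_0$ an arbitrary junction constant:

```python
import math

hbar = 1.0546e-34   # J*s
qe = 1.602e-19      # electron charge (C)
J0 = 1.0            # junction constant, arbitrary units

def j_max(flux):
    """Maximum loop current from Eq. (21.52): 2*J0*|cos(qe*flux/hbar)|."""
    return 2 * J0 * abs(math.cos(qe * flux / hbar))

# flux spacing between maxima: pi*hbar/qe = h/(2*qe), about 2.07e-15 weber
phi0 = math.pi * hbar / qe
peaks = [j_max(n * phi0) for n in range(4)]   # each equal to 2*J0
null = j_max(0.5 * phi0)                      # midway between maxima: zero
```

The spacing `phi0` is exactly the flux quantum for pairs found in Eq. (21.30).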
The Josephson current through a double junction was recently measured22 as a function of the magnetic field in the area between the junctions. The results are shown in Fig. 21–8. There is a general background of current from various effects we have neglected, but the rapid oscillations of the current with changes in the magnetic field are due to the interference term $\cos q_e\Phi/\hbar$ of Eq. (21.52). Fig. 21–8.A recording of the current through a pair of Josephson junctions as a function of the magnetic field in the region between the two junctions (see Fig. 21–7). [This recording was provided by R. C. Jaklevic, J. Lambe, A. H. Silver, and J. E. Mercereau of the Scientific Laboratory, Ford Motor Company.] One of the intriguing questions about quantum mechanics is the question of whether the vector potential exists in a place where there’s no field.23 This experiment I have just described has also been done with a tiny solenoid between the two junctions so that the only significant magnetic $\FLPB$ field is inside the solenoid and a negligible amount is on the superconducting wires themselves. Yet it is reported that the amount of current depends oscillatorily on the flux of magnetic field inside that solenoid even though that field never touches the wires—another demonstration of the “physical reality” of the vector potential.24 I don’t know what will come next. But look what can be done. First, notice that the interference between two junctions can be used to make a sensitive magnetometer. If a pair of junctions is made with an enclosed area of, say, $1$ mm$^2$, the maxima in the curve of Fig. 21–8 would be separated by $2\times10^{-6}$ gauss. It is certainly possible to tell when you are $1/10$ of the way between two peaks; so it should be possible to use such a junction to measure magnetic fields as small as $2\times10^{-7}$ gauss—or to measure larger fields to such a precision. One should be able to go even further. 
Suppose for example we put a set of $10$ or $20$ junctions close together and equally spaced. Then we can have the interference between $10$ or $20$ slits and as we change the magnetic field we will get very sharp maxima and minima. Instead of a $2$-slit interference we can have a $20$- or perhaps even a $100$-slit interferometer for measuring the magnetic field. Perhaps we can predict that the measurement of magnetic fields will—by using the effects of quantum-mechanical interference—eventually become almost as precise as the measurement of wavelength of light. These then are some illustrations of things that are happening in modern times—the transistor, the laser, and now these junctions, whose ultimate practical applications are still not known. The quantum mechanics which was discovered in 1926 has had nearly 40 years of development, and rather suddenly it has begun to be exploited in many practical and real ways. We are really getting control of nature on a very delicate and beautiful level. I am sorry to say, gentlemen, that to participate in this adventure it is absolutely imperative that you learn quantum mechanics as soon as possible. It was our hope that in this course we would find a way to make comprehensible to you at the earliest possible moment the mysteries of this part of physics. 1. I’m not really reminding you, because I haven’t shown you some of these equations before; but remember the spirit of this seminar. 2. Volume II, Section 15–5. 3. Not to be confused with our earlier use of $\phi$ for a state label! 4. $K$ is the same quantity that was called $A$ in the problem of a linear lattice with no magnetic field. See Chapter 13. 5. Section 13–3. 6. Volume II, Section 27–1. 7. See, for example, J. D. Jackson, Classical Electrodynamics, John Wiley and Sons, Inc., New York(1962), p. 408. 8. Volume II, Chapter 14, Section 14–1. 9. First discovered by Kamerlingh-Onnes in 1911; H. Kamerlingh-Onnes, Comm. Phys. Lab., Univ. Leyden, Nos. 
119, 120, 122 (1911). You will find a nice up-to-date discussion of the subject in E. A. Lynton, Superconductivity, John Wiley and Sons, Inc., New York, 1962. 10. J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957). 11. W. Meissner and R. Ochsenfeld, Naturwiss. 21, 787 (1933). 12. Actually if the electric field were too strong, pairs would be broken up and the “normal” electrons created would move in to help neutralize any excess of positive charge. Still, it takes energy to make these normal electrons, so the main point is that a nearly uniform density $\rho$ is highly favored energetically. 13. F. London and H. London, Proc. Roy. Soc. (London) A149, 71 (1935); Physica 2, 341 (1935). 14. F. London, Superfluids; John Wiley and Sons, Inc., New York, 1950, Vol. I, p. 152. 15. B. S. Deaver, Jr., and W. M. Fairbank, Phys. Rev. Letters 7, 43 (1961). 16. R. Doll and M. Näbauer, Phys. Rev. Letters 7, 51 (1961). 17. It has once been suggested by Onsager that this might happen (see Deaver and Fairbank, Ref. 15), although no one else ever understood why. 18. See Volume II, Section 40–2. 19. B. D. Josephson, Physics Letters 1, 251 (1962). 20. P. W. Anderson and J. M. Rowell, Phys. Rev. Letters 10, 230 (1963). 21. S. Shapiro, Phys. Rev. Letters 11, 80 (1963). 22. Jaklevic, Lambe, Silver, and Mercereau, Phys. Rev. Letters 12, 159 (1964). 23. Jaklevic, Lambe, Silver, and Mercereau, Phys. Rev. Letters 12, 274 (1964). 24. See Volume II, Chapter 15, Section 15–5.
Unit I: The Atom (OCW Scholar)

Unit I focuses on the building block of matter – the atom. Throughout this unit we reflect on the fundamental question of how we can understand and describe something that is too small to see. The unit starts with the amazing discovery of the electron and the nucleus and the subsequent realization that the classical laws of motion (classical mechanics) do not adequately describe the behavior of something as small as an electron, requiring the development of new laws of motion (quantum mechanics). Viewers can watch as hydrogen creates a spectrum of light waves, prompting the consideration of why only certain colors are generated and not others; i.e., why only certain energy transitions are possible. The Schrödinger equation is offered up as the key to explaining these colors as well as many other experimental findings. By the end of this unit, viewers should be able to apply the concept that there are discrete atomic energy levels and should be able to calculate binding and ionization energies for electrons. They should be able to describe wavefunctions (orbitals) and write electron configurations.

Lecture 1: The Importance of Chemical Principles
Lecture 2: Atomic Structure
Lecture 3: Wave-Particle Duality of Light
Lecture 4: Wave-Particle Duality of Matter; Schrödinger Equation
Lecture 5: Hydrogen Atom Energy Levels
Lecture 6: Hydrogen Atom Wavefunctions (Orbitals)
Lecture 7: Multi-electron Atoms
Hydrogen atom, ¹H

Names: hydrogen atom, H-1, protium
Nuclide data:
• Natural abundance: 99.985%
• Isotope mass: 1.007825 u
• Excess energy: 7288.969 ± 0.001 keV
• Binding energy: 0.000 ± 0.0000 keV

Atomic spectroscopy shows that there is a discrete infinite set of states in which a hydrogen (or any) atom can exist, contrary to the predictions of classical physics. Attempts to develop a theoretical understanding of the states of the hydrogen atom have been important to the history of quantum mechanics, since all other atoms can be roughly understood by knowing in detail about this simplest atomic structure.

The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons and is simply a proton and an electron. Protium is stable and makes up 99.985% of naturally occurring hydrogen atoms.[2] Deuterium contains one neutron and one proton. Deuterium is stable, makes up 0.0156% of naturally occurring hydrogen,[2] and is used in industrial processes like nuclear reactors and nuclear magnetic resonance. Tritium contains two neutrons and one proton and is not stable, decaying with a half-life of 12.32 years. Because of its short half-life, tritium does not exist in nature except in trace amounts. Heavier isotopes of hydrogen are only created artificially in particle accelerators and have half-lives on the order of 10⁻²² seconds. They are unbound resonances located beyond the neutron drip line; this results in prompt emission of a neutron.

Hydrogen ion

Lone neutral hydrogen atoms are rare under normal conditions. However, neutral hydrogen is common when it is covalently bound to another atom, and hydrogen atoms can also exist in cationic and anionic forms. If a neutral hydrogen atom loses its electron, it becomes a cation. The resulting ion, which consists solely of a proton for the usual isotope, is written as "H+" and sometimes called hydron. 
Free protons are common in the interstellar medium and the solar wind. In the context of aqueous solutions of classical Brønsted–Lowry acids, such as hydrochloric acid, it is actually hydronium, H3O+, that is meant. Instead of a literal ionized single hydrogen atom being formed, the acid transfers the hydrogen to H2O, forming H3O+. If instead a hydrogen atom gains a second electron, it becomes an anion. The hydrogen anion is written as "H−" and called hydride.

Theoretical analysis

Failed classical description

Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a tenuous negative charge cloud around it. This immediately raised questions about how such a system could be stable. Classical electromagnetism had shown that any accelerating charge radiates energy, as shown by the Larmor formula. If the electron is assumed to orbit in a perfect circle and radiates energy continuously, the electron would rapidly spiral into the nucleus with a fall time of[3]

t ≈ a0³ / (4 r0² c) ≈ 1.6 × 10⁻¹¹ s,

where a0 is the Bohr radius and r0 is the classical electron radius. If this were true, all atoms would instantly collapse; however, atoms are observed to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller. Instead, atoms were observed to only emit discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics.

Bohr–Sommerfeld Model

1. Electrons can only be in certain, discrete circular orbits or stationary states, thereby having a discrete set of possible radii and energies.
2. Electrons do not emit radiation while in one of these stationary states.
3. An electron can gain or lose energy by jumping from one discrete orbital to another.

The allowed orbits are those for which the electron's orbital angular momentum is quantized, L = nℏ, where n is a positive integer and ℏ = h/2π is the Planck constant over 2π. He also supposed that the centripetal force which keeps the electron in its orbit is provided by the Coulomb force, and that energy is conserved. 
Bohr derived the energy of each orbit of the hydrogen atom to be[4]

E_n = −m_e e⁴ / (8 ε0² h² n²),

where m_e is the electron mass, e is the electron charge, ε0 is the vacuum permittivity, and n is the quantum number (now known as the principal quantum number). Bohr's predictions matched experiments measuring the hydrogen spectral series to the first order, giving more confidence to a theory that used quantized values. For n = 1, the magnitude of this energy, about 13.6 eV, is called the Rydberg unit of energy. It is related to the Rydberg constant R∞ of atomic physics by 1 Ry = h c R∞. The exact value of the Rydberg constant assumes that the nucleus is infinitely massive with respect to the electron. For hydrogen-1, hydrogen-2 (deuterium), and hydrogen-3 (tritium), which have finite mass, the constant must be slightly modified to use the reduced mass of the system, rather than simply the mass of the electron. This includes the kinetic energy of the nucleus in the problem, because the total (electron plus nuclear) kinetic energy is equivalent to the kinetic energy of the reduced mass moving with a velocity equal to the electron velocity relative to the nucleus. However, since the nucleus is much heavier than the electron, the electron mass and reduced mass are nearly the same. The Rydberg constant R_M for a hydrogen atom (one electron) with nuclear mass M is given by

R_M = R∞ / (1 + m_e/M).

There were still problems with Bohr's model:

1. it failed to predict other spectral details such as fine structure and hyperfine structure;
2. it could only predict energy levels with any accuracy for single-electron atoms (hydrogen-like atoms);
3. the predicted values were only correct to lowest order in α², where α is the fine-structure constant.

Most of these shortcomings were resolved by Arnold Sommerfeld's modification of the Bohr model. Sommerfeld introduced two additional degrees of freedom, allowing an electron to move on an elliptical orbit characterized by its eccentricity and declination with respect to a chosen axis. 
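The Bohr energy formula and its reduced-mass correction can be evaluated directly from CODATA-rounded constants; this is a rough numerical check, not a precision calculation:

```python
me = 9.109e-31     # electron mass (kg)
e = 1.602e-19      # elementary charge (C)
eps0 = 8.854e-12   # vacuum permittivity (F/m)
h = 6.626e-34      # Planck constant (J*s)
mp = 1.6726e-27    # proton mass (kg)

def bohr_energy_eV(n, M=None):
    """E_n = -mu * e^4 / (8 * eps0^2 * h^2 * n^2), in eV.
    With nuclear mass M given, uses the reduced mass mu = me*M/(me+M)."""
    mu = me if M is None else me * M / (me + M)
    return -mu * e**4 / (8 * eps0**2 * h**2 * n**2) / e

E1 = bohr_energy_eV(1)        # infinitely heavy nucleus: about -13.6 eV
E1_H = bohr_energy_eV(1, mp)  # finite proton mass: very slightly shallower
```

The finite-mass value differs from the infinite-mass one by less than 0.1%, which is why the electron mass and reduced mass are described above as nearly the same.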
This introduced two additional quantum numbers, which correspond to the orbital angular momentum and its projection on the chosen axis. Thus the correct multiplicity of states (except for the factor 2 accounting for the yet unknown electron spin) was found. Further, by applying special relativity to the elliptic orbits, Sommerfeld succeeded in deriving the correct expression for the fine structure of hydrogen spectra (which happens to be exactly the same as in the most elaborate Dirac theory). However, some observed phenomena, such as the anomalous Zeeman effect, remained unexplained. These issues were resolved with the full development of quantum mechanics and the Dirac equation. It is often alleged that the Schrödinger equation is superior to the Bohr–Sommerfeld theory in describing hydrogen atom. This is not the case, as most of the results of both approaches coincide or are very close (a remarkable exception is the problem of hydrogen atom in crossed electric and magnetic fields, which cannot be self-consistently solved in the framework of the Bohr–Sommerfeld theory), and in both theories the main shortcomings result from the absence of the electron spin. It was the complete failure of the Bohr–Sommerfeld theory to explain many-electron systems (such as helium atom or hydrogen molecule) which demonstrated its inadequacy in describing quantum phenomena. Schrödinger equation The Schrödinger equation allows one to calculate the stationary states and also the time evolution of quantum systems. Exact analytical answers are available for the nonrelativistic hydrogen atom. Before we go to present a formal account, here we give an elementary overview. Given that the hydrogen atom contains a nucleus and an electron, quantum mechanics allows one to predict the probability of finding the electron at any given radial distance . It is given by the square of a mathematical function known as the "wavefunction," which is a solution of the Schrödinger equation. 
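The multiplicity of states recovered by Sommerfeld's extra quantum numbers (before the factor 2 for spin) can be verified by direct counting: for each principal quantum number n there are n² combinations of the orbital quantum numbers. A minimal sketch:

```python
def degeneracy(n):
    """Number of (l, m) states at principal quantum number n:
    l runs over 0..n-1, and each l contributes 2l+1 values of m."""
    return sum(2 * l + 1 for l in range(n))

counts = [degeneracy(n) for n in range(1, 5)]  # 1, 4, 9, 16
```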
The lowest energy equilibrium state of the hydrogen atom is known as the ground state. The ground state wave function is known as the 1s wavefunction. It is written as:

ψ(r) = e^(−r/a0) / sqrt(π a0³),

where a0 is the Bohr radius. The probability of finding the electron at a distance r in any radial direction is the squared value of the wavefunction:

|ψ(r)|² = e^(−2r/a0) / (π a0³).

The wavefunction is spherically symmetric, and the surface area of a shell at distance r is 4πr², so the total probability of the electron being in a shell at a distance r and thickness dr is

P(r) dr = 4πr² |ψ(r)|² dr.

It turns out that this is a maximum at r = a0. That is, the Bohr picture of an electron orbiting the nucleus at radius a0 is recovered as a statistically valid result. However, although the electron is most likely to be on a Bohr orbit, there is a finite probability that the electron may be at any other place r, with the probability indicated by the square of the wavefunction. Since the probability of finding the electron somewhere in the whole volume is unity, the integral of P(r) dr is unity. Then we say that the wavefunction is properly normalized. As discussed below, the ground state is also indicated by the quantum numbers (n, ℓ, m) = (1, 0, 0). The second lowest energy states, just above the ground state, are given by the quantum numbers (2, 0, 0); (2, 1, 0); and (2, 1, ±1). These states all have the same energy and are known as the 2s and 2p states. There is one 2s state, and there are three 2p states. An electron in a 2s or 2p state is most likely to be found in the second Bohr orbit, with energy given by the Bohr formula. The Hamiltonian of the hydrogen atom is the radial kinetic energy operator plus the Coulomb attraction between the positive proton and negative electron. Using the time-independent Schrödinger equation, ignoring all spin-coupling interactions and using the reduced mass μ, the equation is written as:

[−(ℏ²/2μ) ∇² − e²/(4πε0 r)] ψ(r, θ, φ) = E ψ(r, θ, φ).

Expanding the Laplacian in spherical coordinates turns this into a separable partial differential equation which can be solved in terms of special functions. 
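The two statistical claims about the ground state — that the shell probability 4πr²|ψ|² peaks at the Bohr radius, and that the wavefunction is normalized — can be confirmed numerically. A sketch assuming the standard 1s wavefunction ψ(r) = e^(−r/a₀)/√(πa₀³):

```python
import math

a0 = 5.29177e-11  # Bohr radius (m)

def radial_prob(r):
    """P(r) = 4*pi*r^2 * |psi_1s|^2, with |psi_1s|^2 = exp(-2r/a0)/(pi*a0^3)."""
    psi2 = math.exp(-2 * r / a0) / (math.pi * a0**3)
    return 4 * math.pi * r**2 * psi2

# scan 0 < r <= 5*a0 for the peak of the shell probability
rs = [i * a0 / 1000 for i in range(1, 5001)]
r_peak = max(rs, key=radial_prob)

# normalization check: integral of P(r) dr over 0..30*a0 should be ~1
dr = a0 / 1000
total = sum(radial_prob(i * dr) for i in range(1, 30001)) * dr
```

The peak lands at r ≈ a₀, recovering the Bohr orbit as the most probable radius.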
The normalized position wavefunctions, given in spherical coordinates, are:

ψ_{nℓm}(r, θ, φ) = sqrt( (2/(n a))³ · (n − ℓ − 1)! / (2n (n + ℓ)!) ) · e^(−r/(n a)) · (2r/(n a))^ℓ · L_{n−ℓ−1}^{2ℓ+1}(2r/(n a)) · Y_ℓ^m(θ, φ),

where a is the reduced Bohr radius, L_{n−ℓ−1}^{2ℓ+1} is a generalized Laguerre polynomial of degree n − ℓ − 1, and Y_ℓ^m(θ, φ) is a spherical harmonic function of degree ℓ and order m. Note that the generalized Laguerre polynomials are defined differently by different authors. The usage here is consistent with the definitions used by Messiah[6] and Mathematica.[7] In other places, the Laguerre polynomial includes a factor of (n + ℓ)!,[8] or the generalized Laguerre polynomial appearing in the hydrogen wave function is L_{n+ℓ}^{2ℓ+1} instead.[9]

The quantum numbers can take the following values:

n = 1, 2, 3, …
ℓ = 0, 1, 2, …, n − 1
m = −ℓ, …, 0, …, ℓ

The wavefunctions are orthonormal:

⟨n′ℓ′m′ | nℓm⟩ = δ_{nn′} δ_{ℓℓ′} δ_{mm′},

where |nℓm⟩ is the state represented by the wavefunction ψ_{nℓm} in Dirac notation, and δ is the Kronecker delta function.[10] The wavefunctions in momentum space, obtained by Fourier transform, can for the bound states be expressed in terms of Gegenbauer polynomials, with the momentum measured in units of ℏ/a.[11]

The solutions to the Schrödinger equation for hydrogen are analytical, giving a simple expression for the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines; they fully reproduced the Bohr model and went beyond it. It also yields two other quantum numbers and the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.

Results of Schrödinger equation

Energy levels, including relativistic (fine-structure) corrections:

E_{jn} = −(13.6 eV / n²) [1 + (α²/n²)(n/(j + 1/2) − 3/4)],

where α is the fine-structure constant and j is the total angular momentum quantum number, which is equal to |ℓ ± 1/2| depending on the orientation of the electron spin relative to the orbital angular momentum.[13] This formula represents a small correction to the energy obtained by Bohr and Schrödinger as given above. The factor in square brackets in the last expression is nearly one; the extra term arises from relativistic effects (for details, see "Features going beyond the Schrödinger solution"). It is worth noting that this expression was first obtained by A. 
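Since the energy levels fix the frequencies of the spectral lines, the familiar hydrogen wavelengths follow from E_n = −Ry/n² and the Planck relation. A quick sketch (constants rounded; Ry taken as 13.606 eV):

```python
h = 6.626e-34   # Planck constant (J*s)
c = 2.998e8     # speed of light (m/s)
eV = 1.602e-19  # J per eV
Ry = 13.606     # hydrogen ground-state binding energy (eV)

def line_nm(n_lo, n_hi):
    """Photon wavelength (nm) for the n_hi -> n_lo transition,
    from dE = Ry*(1/n_lo^2 - 1/n_hi^2) and lambda = h*c/dE."""
    dE = Ry * (1 / n_lo**2 - 1 / n_hi**2) * eV   # J
    return h * c / dE * 1e9                      # nm

h_alpha = line_nm(2, 3)   # Balmer H-alpha, ~656 nm (red)
lyman_a = line_nm(1, 2)   # Lyman-alpha, ~122 nm (ultraviolet)
```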
Sommerfeld in 1916 based on the relativistic version of the old Bohr theory. Sommerfeld, however, used different notation for the quantum numbers.

Coherent states

Coherent states for the hydrogen atom have also been proposed.[14]

Visualizing the hydrogen electron orbitals

An orbital with quantum numbers n, ℓ, and m has:
• n − 1 total nodes,
• ℓ of which are angular nodes:
  • m angular nodes go around the z axis (in the xy plane). (The figure above does not show these nodes since it plots cross-sections through the xz-plane.)
  • ℓ − m (the remaining angular nodes) occur on the (vertical) z axis.
• n − ℓ − 1 (the remaining non-angular nodes) are radial nodes.

Features going beyond the Schrödinger solution

Alternatives to the Schrödinger theory

In the language of Heisenberg's matrix mechanics, the hydrogen atom was first solved by Wolfgang Pauli[16] using a rotational symmetry in four dimensions [O(4)-symmetry] generated by the angular momentum and the Laplace–Runge–Lenz vector. By extending the symmetry group O(4) to the dynamical group O(4,2), the entire spectrum and all transitions were embedded in a single irreducible group representation.[17] In 1979 the (non-relativistic) hydrogen atom was solved for the first time within Feynman's path integral formulation of quantum mechanics.[18][19] This work greatly extended the range of applicability of Feynman's method.

1. Palmer, D. (13 September 1997). "Hydrogen in the Universe". NASA. Archived from the original on 29 October 2014. Retrieved 23 February 2017.
2. Housecroft, Catherine E.; Sharpe, Alan G. (2005). Inorganic Chemistry (2nd ed.). Pearson Prentice-Hall. p. 237. ISBN 0-13-039913-2.
3. Olsen, James; McDonald, Kirk (7 March 2005). "Classical Lifetime of a Bohr Atom" (PDF). Joseph Henry Laboratories, Princeton University.
4. "Derivation of Bohr's Equations for the One-electron Atom" (PDF). University of Massachusetts Boston.
5. Eite Tiesinga, Peter J. Mohr, David B. Newell, and Barry N. 
Taylor (2019), "The 2018 CODATA Recommended Values of the Fundamental Physical Constants" (Web Version 8.0). Database developed by J. Baker, M. Douma, and S. Kotochigova. Available at http://physics.nist.gov/constants, National Institute of Standards and Technology, Gaithersburg, MD 20899.
7. LaguerreL. Wolfram Mathematica page.
8. Griffiths, p. 152.
10. Griffiths, Ch. 4, p. 89.
11. Bransden, B. H.; Joachain, C. J. (1983). Physics of Atoms and Molecules. Longman. Appendix 5. ISBN 0-582-44401-2.
12. Sommerfeld, Arnold (1919). Atombau und Spektrallinien [Atomic Structure and Spectral Lines]. Braunschweig: Friedrich Vieweg und Sohn. ISBN 3-87144-484-7.
13. Atkins, Peter; de Paula, Julio (2006). Physical Chemistry (8th ed.). W. H. Freeman. p. 349. ISBN 0-7167-8759-8.
14. Klauder, John R (21 June 1996). "Coherent states for the hydrogen atom". Journal of Physics A: Mathematical and General. 29 (12): L293–L298. arXiv:quant-ph/9511033. doi:10.1088/0305-4470/29/12/002.
15. Summary of atomic quantum numbers. Lecture notes. 28 July 2006.
16. Pauli, W (1926). "Über das Wasserstoffspektrum vom Standpunkt der neuen Quantenmechanik". Zeitschrift für Physik. 36 (5): 336–363. Bibcode:1926ZPhy...36..336P. doi:10.1007/BF01450175.
18. Duru, I. H.; Kleinert, H. (1979). "Solution of the path integral for the H-atom" (PDF). Physics Letters B. 84 (2): 185–188. Bibcode:1979PhLB...84..185D. doi:10.1016/0370-2693(79)90280-6.
19. Duru, I. H.; Kleinert, H. (1982). "Quantum Mechanics of H-Atom from Path Integrals" (PDF). Fortschr. Phys. 30 (2): 401–435. Bibcode:1982ForPh..30..401D. doi:10.1002/prop.19820300802.
The Fabric of the Cosmos

2011, Science – 207 Comments
Ratings: 8.73/10 from 164 users.

Interweaving provocative theories, experiments, and stories with crystal-clear explanations and imaginative metaphors like those that defined the groundbreaking and highly acclaimed series The Elegant Universe, The Fabric of the Cosmos aims to be the most compelling, visual, and comprehensive picture of modern physics ever seen on television.

207 Comments / User Reviews

1. So it's not Crimplene or Lycra then. "Amazin'!" — Brian Cox
2. Lucky to be here.
3. Compressing the infinitesimal... Yeah... That's great, is this made possible by the same equation that brought us the period in history where time didn't exist? OK. That's how the world is flat these days.
4. This is the dumbed down version of the cosmos. Some of this stuff is interesting but for me it's far too American. I understand that the point is to appeal to as many people as possible but this may as well be a children's program on theoretical physics at times.
1. I've read the book and have now watched the video documentaries. The two physics Brians Cox and Greene have motivated enthusiasm in curious youngsters and reawakened science thrills in old scientists like me. What has become obvious to me is that Physics teachers who taught me were unaware of Einstein, Dirac, Bohr, Feynman etc. The fabric of the cosmos should be preached from church pulpits instead of the usual boring bulls*it!
5. "Life/Reality we know it may just be a projection of all the information/data residing on the event horizon...." mind-boggling!
6. I agree with Bruce. About 20 mins of interesting content padded out into an hour and dramatized for an impatient dumb audience.
7. Remember it is just for TV entertainment for the masses. We expect extreme interesting everything, So they try to provide it. Complete factual science may seem slow and boring to many. 
If you want nothing but facts science you may not have as much fun with it yourself. I love all the ideas, even if I think some are crap. "The only dumb question...." 8. Pretty far fetched theories. Some that seem almost equal to supernatural in stature. So this may be the right place to ask: Does the universe exist with no observer? Does the observer give it dimension? 9. This video quickly got off the topic concerning the actual nature of space. Instead of talking about the factual fabric of the cosmos, it talked about Alice-in-Wonderland fantasies that have nothing to do with the real nature of space. However, the video is correct in one aspect, modern physicists and cosmologists have deceived us all. Brian Greene and his associates are fable salesmen; trying to convince people that the three-dimensional world we live in is really a two-dimensional holographic projection on the surface of a black hole. All the irrational scientists in this video also show up in many other videos spieling curved space-time conceptualizations and quantum mechanical notions as facts. They are like blind men looking at a photo of an elephant, then declaring themselves as experts on the subject. Society should be very skeptical of their elusive postulations. Frankly, their imaginations have gotten way out of hand. So, what is the real nature of space... space is physical volume. We see physical volume everywhere we look out, and everywhere we look in. Physical volume is the most abundant aspect composing the nature of the universe. We can factually prove that physical volume exists because we can see it, touch it, move through it, and measure it. Physical volume is an undisputable fact. The nature of physical volume has three obvious spatial dimensions, height, width, and length; thus, volume is a spatial measurement. In all physical applications on Earth, physical volume is a geometric measurement deriving the quantity of a solid, liquid, or gas. 
In other words, volume is a product of substance, a measurement of something tangible, defined by shape and amount. To put the nature of physical volume in a pure perspective, if we remove all matter and energy from the cosmos, then there is only pure physical volume remaining. The question becomes, what is the physical causation of this remaining volume. In order for physical volume to exist, it must have four fundamental components, three of spatial dimensions, and one of substantive causation. Obviously, physical volume has to exist first, for matter and energy to exist within and move through. Therefore, we can look at this remaining space as a proto-state, representing the fabric composing the cosmos, and call this state of ambient physical volume "Protospace." So, what do we know for fact; we know that volume is a product of physical substance. We know that physical volume has four components that compose its existence. We know that physical volume is a homogeneous continuum, continuously existing between particles, planets, stars, and beyond galaxies. And, we know that physical volume had to exist before matter or energy could come into existence. Functionally, physical volume is the most elemental aspect of nature. The fact that physical volume exists, gives us enough information to decipher the totality of its nature. Where it comes from, how old it is, what it is made of, and how it formulates into the material universe we see today. The clue here is that substance causes volume, volume does not cause substance. Physical volume is the Rosetta stone of both the origin and formation composing the universe. Unfortunately, orthodox physics and cosmology are not dealing with factual based science; they postulate irrational notions such as the big bang, curved space-time, elusive quantum constituencies, multiverse, and holographic conceptualizations. These concepts have no actual facts to support them. 
They are an abyss of fantasies, only supported by elaborate equations. Theorists forget that mathematics is only a symbolic representation of values in an event, and can apply to any concept whether it is real or not. Two angels plus two angels equals four angels; the math works, but does not, and cannot, prove or predict that angels exist. Science based on subjective equations is a fool's journey. To understand what the fabric of space is really made of, you only need look at what you can factually see and measure. The answers are all there... they are hiding in plain sight waiting to be recognized.

1. This may be the first time I am saying this, but what are your qualifications, and from what sources are you deriving this info? The only physical volume that I know of is what is derived from hard disc space or other such devices. To be physical, as in your physical volume, is to have mass/matter, and mass means to be made up of atoms, but atoms have only .0999999% matter. I do not really know what you are talking about, or did you just make all this up?

2. I use the term "physical volume" to mean that the phenomenon of space has a substantive causation. Anything that physically exists, regardless of its form, must be made of something tangible to exist; if it were not, that would be magic. Since space obviously exists, then it must be made of something physical in order to exist. Physical volume is a case of substantive cause and effect. If you think about it, substance and volume are inseparable. If there were no substance, there would be no amount of volume. Likewise, if there were no volume, there would be no amount of substance. Each is a function of the other; where one exists so must the other. Mass is anything substantive that can be quantified. Particles (finite units) are one type of mass; space (a homogeneous continuum) is a different type of mass. Both are made of the same substance, just in different states. Mass represents the amount of substance something is made of.
A particle has finite mass, while space has infinite mass. Particles exist within a greater context of space. If space did not exist first, there would be no place for particles to exist within or move around. Obviously, space has to exist before particles. The mass of space generates the mass composing particles. In other words, particles are a byproduct of space; there is nowhere else for the mass of particles to come from.

3. What if C A T really spelled DOG?

4. Our universe did not exist until one Planck second, 10^-43 sec, after the BB - an expansion, not an explosion - and all space, plasma, matter, and time came into being from that singularity, not just space first, as you say it all derived from.

5. The big bang is a fairy tale. There are absolutely no facts on any level to support the notion. Hearsay as evidence is not a fact, nor is consensus. A singularity is also a fairy tale. To say that the material universe is derived from a singularity, a point so small that it cannot be quantified, is ridiculous. It too has no facts, and relies on hearsay and consensus as its only support. Physical volume (space) is a fact.

6. I will not be your tutor; crack open some science books.

7. Many books and videos (including this one) assume that parallel universes, relativity, the big bang, holographic existence, or quantum mechanics are based on facts; they are not, these are all unsubstantiated theories. There is not one shred of tangible fact to prove any of these notions. Theoretical physicists and cosmologists use elaborate subjective equations as a simulation of facts. They have no real facts to base their theories on. To test this, you only need ask them for a single proving fact for any one of their conjectures; you will find that they cannot provide even one. Theoretical scientists have engaged in a conspiracy of fantastic fantasies and elusive equations to explain the composition of the material universe.
They use simulated facts because they cannot recognize the real facts. The physical volume that composes space is a tangible fact to work with. In deciphering "the fabric of the cosmos," the physical volume composing space is an elemental and logical place to start. The theoretical singularity of the big bang assumes that space does not exist until it is created by inflation. The theory ignores that there has to be something there to expand into, before it can expand into it. It is not possible to expand into something that is not there; that would be magic.

8. Ya, pretty extreme theories, hard and maybe forever impossible to prove. Science can only guess. This may sound philosophical (as most science was at one time); may it still be? Does the universe exist with no observer?

9. The universe is not dependent on human existence. An ant does not cause the Earth to exist, nor does a human cause the universe to exist. The universe is an absolute physical construct in every way, shape, and form. There is no part of the universe that is supernatural, magical, or a just-because thing. Writing it in books, or producing it in entertainment movies and cartoons, or documentary videos, does not make it true; after all, fairy tales are also distributed through all these mediums. What differentiates reality from fantasy are facts. When reviewing conceptualizations, one only needs to look for the facts. If facts do not exist, then it is fantasy. Critical thinking is your strongest ally in deciphering what is real and what is not.

10. I have been a scientific observer for over 50 years. I agree the scientific method is the only practical way to prove. And proof is all we have to call truth. In order to prove you must first question. Many discoveries thought of as proof are proven wrong, and many theories thought to be quackery are eventually proven fact. The question is more than half the answer. You say to look at the facts to see what is "real", but how did these facts come to be?
Many facts were once thought of as lunacy... One may see things as fact, reality and proof, but is it a fact, a strong belief, or an opinion? Sometimes it is hard to see through the false sciences (and arrogance). How do we know? We form opinions based on what we observe. We are easily fooled (all of us). Most BIG science is eventually proven wrong; I'm waiting for Einstein's version of space and time to be put to the test. Most modern quantum dimensional theories seem extreme, but so did much of what we believe now. If there were never ever any observer (humans and ants or Gods included) from the start of "time", there would be no time and no dimension. Without measurement... no science. You answered this yourself. Right or not, science is mostly all cool. I love science, it is very stimulating entertainment.

11. I agree with much of what you have stated. In regards to your first post: a question never asked is worth nothing; an answer never given is worth even less. We do have profound facts to work with; physical volume is a point-blank fact; from it we can derive even more questions and answers than we can from science fiction. Science is supposed to deal with facts only, not science fiction; "philosophy" is the proper place for science fiction. Mixing science fiction (philosophy) with real science facts confuses the issue. Real, factually based science is the only thing that will provide humanity with the tools it needs to manage the future. Teaching young minds science fiction as science fact is not properly preparing civilization for tomorrow. Stimulating young minds is one thing; misdirecting young minds is quite another. When facts are mixed with fiction, young minds become confused. How much harder is it to learn the alphabet when mixed with letters that do not really exist? Science fiction does the same thing to science facts.
If these videos were clearly marked as "for entertainment and philosophical purposes only," then maybe they would not be so damaging to young minds. Science fiction notions such as curved space-time or the big bang theories are so pervasive that they are taught to college-level students as science fact. In physics and cosmology, we should be teaching our children critical thinking skills, not bandwagon science fiction as a simulation of facts. Stimulating imagination is a great tool; corrupting imagination is harmful to us all...

12. I rarely see any of this stuff stated as fact. You rant about them stating things such as "two dimensions" etc. as stated fact? I never see it that way. To me it seems *at most* stated as a theory. Besides, proof is rarely a fact first; they usually start out as theory (like this stuff). You seem to me to have a hate on for it. I promise one thing: closing your mind will blind you. You seem to believe all geniuses, past and present, are all wrong about everything? Do you believe in the speed of light?

13. Quote: "Science is supposed to deal with facts only." Proof NEVER starts out as fact. It must go through a scientific process. "Corruption"...? Hard-core accusation. I think most modern science is stimulating. Even if a theory is dead wrong it still stimulates productive thought... if you let it...

14. Facts? I thought of all the extreme stuff as conjecture and brainstorming. Math is the main tool they have to try to gather evidence of any of the stuff you mentioned. I never thought of it as claimed facts, just TV for the general audience, to capture interest. Interest is the first step. Sagan said knowledge is the only way to keep ourselves safe from ourselves... Interest is good :)

15. Present us some "real" facts.

16. So on acid the Physical Volumes that come talk to me are real in your mind, but not the idea of a singularity?

17.
Could you please tell me what is right and not concocted sensationalist nonsense aimed at attracting viewers?

10. I'll tell you what space is. It's just that - space! A.k.a. "room" :-) I think I'll have whatever hallucinogen it is those holographic nutters are having..

1. There is a lot of room in your head!

2. U bet there is - you can fit an entire Universe in here ;-)

11. I've searched high and low for a reasonable explanation of the holographic principle but the literature is too stupid to explain it to me. This is all wrong. According to the comments below the only true source of modern science is the Quran. Therefore I'm gonna go look for a Quran. F**k you science.

1. Have you watched "What is reality?" on here? - If that's just fluff to you now after all your searching for holographic universe info, I've posted a link to a video on quantum loop gravity which is very, very head-scratching (in the comments).

2. Oops. I posted on the other doc before I saw your comment here. Yeah, "What is reality?" is very very good, probably one of the better ones I've watched in years. I'm still lost on this holographic principle though. I wish I had enough math to grasp it but I don't. I don't think my imagination is fit for it either; there are just too many bizarre notions it throws up. If we are living in/on a projection which mirrors the event horizon of a black hole then we are more than likely in a black hole, and whatever is outside our black-hole world has no understanding of how the laws in their Universe break down in our world... no? So what's going on inside black holes in "our" Universe? But Susskind's string theory contradicts this with branes... what am I missing here? Leonard Susskind needs to be burned at the stake.

3. Uhm... well, something's not quite right there, because that would suggest infinite dimensions as you burrow deeper into each universe's black holes (if indeed one leads somehow to the other, type deal). Strings only gives us 11... or so...
to work with. I'll think about it, but right now I'm a zombie staring at a screen. I'll also try and find that quantum loop gravity video - it has to be seen to be believed! (and is right in with all this).

4. loop quantum cosmology

5. "LQG predicts that not just matter, but also space itself has an atomic structure." That a lot of this new theory can only be fully understood through mathematics is troubling to me. The further down the rabbit hole I go, the fewer commonplace analogies there are to hang an understanding on, and the more things seem based on mathematical abstractions. It hurts my tiny little mind :-/

6. The dimensions would exist separate of this infinite loop, as they do in the current model where parallel universes are allowed. Wouldn't all those parallel universes most likely have the same laws of physics and the same number of dimensions? This is my own take on the whole thing so I'm sure it's seriously flawed.

7. I haven't looked at Achems' links yet (many thanks), but my guess is that they would not necessarily share the same laws at all. Planck's constant could be different in different universes, for example, changing much of the outcomes. This is hardly scientific of me, just philosophical conjecture, but if you look at the conditions needed for life on Earth, each step of the way seems to be 'rare' (I use the term loosely), as in the distance from the sun, or our solar system's place in the galaxy, to name but a couple... It's sometimes referred to as the Goldilocks scenario (everything being just right). So if things were to follow that trend then it would be suggestive that our entire universe has laws and physics that are just right... rare even, loosely conjecturing of course. ;-)

12. This show is a load of hypothetical nonsense. It is not experimental and can never be verified. It is a way of fooling people into giving these cosmologists job security. They have never given any real benefit to humanity except wishful thinking.
These programs should be banned!!

1. Could you tell me where in the Qur'an it explains all this?

2. Even if it did get some things right, you'd expect them to firstly; and secondly, why refer to old, outdated material when we have learned so much more since...

3. One thing I will never ever understand is your perspective that says the Quran is outdated; I bet you never paid close and serious attention to reciting it. The so-called "Big Bang," for example, is just one fact that got itself mentioned, from the complex worlds of various kinds of knowledge within the layers of this book still unmentioned - rather observed than mentioned by those who look, wonder, question, reason and understand. I invite you to have a study of how come mentioned facts like these actually existed 1400+ years ago; that would amaze me for good - personally speaking.

4. G'day Arrayat. Welcome to TDF mate. To your first question about why I think the Quran (and Bible etc.) are outdated: it is because of how long ago it/they were written. As you say, 1400+ years in this case. I also consider science textbooks from 20 years ago outdated, as we've learned more since then. (We know things now they never taught when I went to school.) I'm not 100% sure I completely understand your 2nd paragraph, I think because of our language differences, but I think you mean that the Big Bang is an example of knowledge that's contained within the Quran, that we eventually 're-discovered', and the Quran also has many more 'layers' of knowledge that science is yet to discover. You'll have to correct me if I've misunderstood you. I agree with your last part about thinking it would be interesting if that knowledge was known back then; I've quite an interest in some history and what was known by earlier peoples also. I've been lucky enough to see much of Egypt and have always been interested in our ancient past.
I think some civilisations, like Egypt, clearly had quite an advanced knowledge of our sky, probably more than they're given credit for now. I'd be very interested to read the passages that you say talk about the Big Bang or any other science relevant to the topic that is contained in the Quran. Could you please tell me which ones to look at, or quote them here? To be honest with you, I must say I have trouble taking the word of a book about science matters when it also talks of a winged horse taking the prophet to cut the moon in half with a sword. When did that happen? Is there any evidence that the moon has ever been cut/cleaved in half in the time frame indicated? You'll have to point out the passages that make that make sense too, please.

5. docoman, I am posting simply in the interest of knowledge of early peoples. Look into info on the ancient Ionians; it is amazing what these people understood at the time, around the 6th and 5th century BCE. They were aware that matter is made up of atoms, and it is from them the word originates. They believed the earth was once entirely covered with water and from that life arose and evolved. They knew the earth was round, as well as that the earth orbits the sun. They were the first true scientists. Another great source of information concerning the Ionians can be found in the 7th and 8th episodes of Carl Sagan's "Cosmos"; it also explains how and why this knowledge was lost and, shall we say, re-discovered as late as the 16th and 17th century. I can hardly imagine what life would be like today had those early scientists been allowed to flourish. And then consider Europe's dark ages before the renaissance; we must thank the east and the Muslims for saving the knowledge of the ancients, including science and medicine.

6. G'day kicknbak60, thanks for your interesting and informative post mate, and welcome to TDF too. Thanks for pointing out the Ionians, I don't know much about them at all and will follow up your suggestions.
The little I just read I find very interesting. I agree with you, the world does owe the Muslim world acknowledgement and thanks for keeping science alive. I also agree that we (Homo sapiens) have discovered knowledge and lost it, to rediscover later. (It'd be interesting to be able to know how advanced we've been, what we have forgotten and how much we still forget.) I'm convinced the Great Pyramid builders knew much more about maths than they're given credit for now, and the Sphinx is older than the Egyptologists say. And other things in Africa, like the Dogon people and what it seems they remembered from earlier times, I also find very interesting and suggestive of knowledge we had that we've forgotten. But I'll also stand by my feelings that even though some scriptures may contain some interesting things that may be accurate (historic events or even hints of science knowledge we've not given them credit for knowing), other wrongs in their story make it impossible for me to swallow their dogma. At best they're an interesting read to put into context with their era, not some 'word of God'.

7. You have a healthy disposition balancing curiosity with scepticism, allowing potentially great discoveries. I for one will be all ears.

8. Weep for the Museum of Alexandria.

9. Museum, or Library? I've heard of the burning of the Library of Alexandria; was it also called a Museum? One of the greatest crimes/pities for our species, I think, losing that. Some of those scrolls would have been so interesting, and as kicknbak said, how far could we have been now if we'd not been 'interrupted'. I also wonder how much the ice ages that we have gone through have shaped our 'memory' over the longer term as well. We've been through at least one, most likely multiple. Climate change like that, as we're seeing in more recent discussions, must have a big impact on what we're doing.

10.
The Library was run by the "Popular People's Front of Judea", the Museum was run by the "People's Popular Front of Judea" (and had a marvellous little gift shop annex). My bad. I knew as I was writing it I'd made a nagging blunder, but thought of this Python post as a recompense and left it as is, lol. Search Youtube for "Monty Python - Life of Brian - PFJ Splitters" (some swearing involved).

11. lols!! Gotta love Python. Cheers mate. :)

12. At Alexandria there was known to be a library, due to its collection of written scrolls (perhaps 1,000,000+). There was also a museum, although not defined properly by the modern definition of a museum. This was a place to meet and muse over the concepts presented by the books (scrolls) and current thinking - a forum!!!

13. The ancient Mahabharata talks about atomic explosions. And when they went and excavated the city where this happened, the skeletons were on a par with victims of Hiroshima and Nagasaki, with radiation levels.

14. G'day Bruce, very interesting mate. Do you know where those excavations were? Or a good documentary or website talking about it? I'd like to read/watch more.

15. Mohenjo-daro and Harappa. Substantiating the Pakistan/India texts that apparently describe atomic attacks is an amazing find in the prehistoric Indian cities of Mohenjo-daro and Harappa [Pakistan's Indus Valley]. On the street level were discovered skeletons, appearing to be fleeing, but death came too quick. They were found to be highly radioactive, on a level comparable to Hiroshima and Nagasaki. Yet there are absolutely no indications of volcanic activity, and it appears that both cities were destroyed at virtually the same time. Further, "At Rajasthan in India, radioactive ash covers three square miles not far from Jodhpur. This is an area of high rates of cancer and birth defects, and it was cordoned off by the Indian government when radiation readings soared astonishingly high.
An ancient city was unearthed which, the evidence indicates, was destroyed by an atomic explosion some 8,000 to 12,000 years ago. It has been estimated that half a million people could have died in the blast, and it was at least the size of those that devastated Japan in 1945. Archeologist Francis Taylor stated that etchings in some nearby temples he translated suggested that they prayed to be spared from the great light that was coming to lay ruin to the city."[1]

16. Um, I'm thinking via something like vague statements and your confirmation bias.

17. Glad to answer that. It is in 21:30, where it talks about the so recently called "Big Bang".

18. Sorry mate, but your interpretation of 21:30 is way biased... What century are you teleporting your thoughts from? Just so I know which one to avoid when I travel back in time in my DeLorean.

14. great

15. YO THIS WEBSITE IS F**KING GREAT. If you like this documentary you should check out WHAT THE BLEEP DO WE KNOW, and read the book THE POWER OF NOW by ECKHART TOLLE. Before watching this I knew the fundamentals of what space is, but still a really good film! Reality is not real, and no one sees it the same. We have five sensory organs which can't even perceive most of the stuff that is going on metaphysically. This movie just reaffirms my belief in a creator, because time is all relative and our creator is outside of time. Some of these principles apply to god. But everyone's definition of god is different. Still a great documentary.

16. In my opinion, if this world is how we perceive reality, then it is our reality. There's nothing too bothering to me.

17. This is so mind-boggling. I can't quite wrap my mind around this, being able to be in the past, present and future.

18. The hologram thing bothers me. So if we aren't the real deal here then why even bother? Sounds like something a child would wonder, like "is this all just a dream"? Otherwise interesting material.

1. I love the smell of nihilism in the morning.

2.
The problem: the "scientists" are always dealing with matter, stuff one can hold in one's hands, so to speak. They take it to a sub-atomic level, and at that point they're all running into no walls; like rubbing out the chemical trail of ants...

19. Idk if any of you have seen Leonard Susskind's youtube cosmology lectures, but every time I hear the man talk now, I expect him to stop for a couple seconds and take a bite of cookie.

20. I would love to be able to watch this but simply cannot due to Brian Greene's annoying, condescending voice. Is it just me?? Maybe I should try his books instead.

1. It's just you :)

21. I would love to be able to watch this but simply cannot due to the overly loud, constant and totally unnecessary background music.. Why is it there????

1. It's being called background music; you should listen to the documentary itself instead of listening to what's behind it....

22. It would be closer; the nearest galaxy is Andromeda and it is on a collision course with the Milky Way. Although the Universe is expanding overall, the force of gravity still pulls nearby galaxies together.

23. Can anybody tell me if the starlight we see today is from millions of years ago? Where are they now? Like the nearest galaxy - the light we see from it is 2 million light years away - is the galaxy still there, or has it moved?

1. Well, you would just have to wait another 2 million years to be sure of what's going on with it today... But predicting it, and since the Universe is expanding and all, it will most probably be a lot farther away by now, and it has most likely changed as well.

24. Enjoyed every minute... for sure we'll have a Q computer, if we don't destroy our environment and ourselves (all living things) before then... then 'maybe' we get the computing brain we need to help with some of the big questions - how do we feed 10 billion people, make peace, manage a fair political system/industrial complex, etc.
As well as better predict the weather & help us build teleport machines (okay, maybe humans will be hesitant at first) - just imagine teleports as transportation for cargo, however; now we're talkin' about something useful. P.S. I am beginning to understand probability & entanglement (I am likely delusional); however, no matter how many ways, my brain will not deal with the multiverse - as science fiction, sure; as science fact - I need my mommy.... Who needs spook films when we've got theoretical physics :)

25. lol.... Brian really likes bread-loaf analogies; he explained M-theory's branes as slices of bread as well in the "Elegant Universe" series.

1. He likes dough too... $$

26. This moment never ends! There is only one moment! This one.

27. Perhaps someone could explain the following to me. Newton's idea of the "attraction" of objects to one another has been proven wrong and replaced by Einstein's idea of warped space. In other words, the moon and earth are not attracted to one another; rather, the moon travels in curved space around the earth. Why, then, as in this documentary, do astronomers keep talking about the attraction of gravity? In this case they talk about having to invent dark energy to explain the faster and faster expansion of the universe, as common sense tells us gravity's attraction should slow it down. Seems to me there might be no need for dark energy, because there is no gravitational force attracting objects to one another, just the curved space created around them. Apologies if this is a stupid question; however, I am very much a layman but have been wondering about this for quite some time.
There are more ways than one, to quiet the ignorance of some. docoman 2012 :) I like the way your mind works :) The cryptic One. :) Edit - scratch that. The Poetic One. ;)

3. "Tell me more my little droogies, tell me more" - little Alex from "A Clockwork Orange"

4. That nadsat's time has passed along with the droog it was about, Mr. Burgess.

5. Neva, Newton's theory of gravity was not wrong - it is still in use today - it was just further extrapolated on by Einstein. And then if it could be married with quantum gravity, which seems impossible right now, it could be further extrapolated on. You do not have to be rotating around a body to feel the effects of gravity; we can feel the earth's gravity because we are trying to go through the earth by the force of the earth's gravity, by the earth bending spacetime, and we would go right through the earth just like a knife through butter if it were not for the electromagnetic force, which is way, way, way stronger.

6. Neva... I see your point... but the expansion of the Universe is not only not slowing down, it's in fact speeding up... so there should be a force causing the acceleration... all galaxies are moving apart from one another faster and faster... Gravity, even if not like Newton described it, still warps space... like a sheet being deformed by a heavy object... smaller things tend to fall inwards... so as matter interacts... gravity should slow down the expansion put in motion by the big bang. Picture it like this... if you throw a ball along a plane surface, it will roll nice and easy for a long time... but imagine an irregular, bumpy surface... it will slow down as it bumps and falls in the little holes and irregularities... In the cosmos, matter creates the "bumps" in the fabric of space... thus it should slow itself down... but in fact it isn't... dark energy is a mere speculation... maybe it's only just a consequence of the geometry of the Universe and not a force at all... no one knows yet

7. Well..
this universe is not run by "common sense". Quantum reality is way beyond our common sense; it is nearly impossible to fully understand or imagine it. We are simply not evolved to see or imagine forces or events smaller or bigger than our average space-time scale. If we would like to understand our universe we have to chuck common sense and rely on facts, probabilities and imagination.

8. This is 4 months after you asked it, but perhaps this will help. Newton's law of 'attraction' was coined because he didn't know what caused gravity; he could only mathematically describe its appearance and strength. Einstein defined gravity as the warping of space due to massive objects. As explained in the documentary, Einstein created a 'cosmological constant' to prevent gravity from collapsing the universe. Today, dark energy and dark matter are its equivalent. With that said, if it is true that the movement of the universe is accelerating, it is far more likely that the universe is already collapsing in on itself under the force of gravity. Draw it out on a piece of paper, and everything would still appear to be moving apart, even if every galaxy is collapsing, or shrinking under its own gravity. Since most physicists have 'chosen' to accept the idea of expansion, then dark energy and dark matter become a necessity, even if the idea is wrong (which I believe it is). I guess only time will tell! (excuse the pun) Live long and prosper, Neva.

9. You didn't answer my question.

10. In a nutshell, the term 'attraction' is still used by physicists because of its mathematical properties (Newton's law of attraction). As I stated in my previous blog, Einstein defined what Newton could not. For the layman, it is easier to understand what is visually obvious, as opposed to understanding the abstract concept of general relativity. Any questions? Take care, Neva!

28. Hi there. I may be missing some point, as I'm not a physicist or anything like that.
Hendrick Kasimir's experiment with the 2 metal plates suggests to me more than what they let on. To have an effect on surfaces that small, and to actually be measurable! What happens when you scale that up to fit the empty space in between matter in space? Is that not a logical source for dark energy? He has the plates close enough to exclude some of the effect of empty space. So what he is showing is empty space's property of expansion. So I guess I'm just wondering why I've never heard of anyone making that connection?

1. That's a good point. I suspect that no one's linked the math together. I've played with the derivation of the Casimir (yeah, it's spelled with a 'C', made the same mistake myself once) effect, solving the Schrödinger equation with the assumption of an empty box. From there I saw nothing to indicate the behavior they attribute to dark energy; that is, I found the Casimir effect to be attractive, not repulsive... though I do see what you mean. Maybe different boundary conditions (another shape other than a box) are in order.

2. Well, what I got out of "Casimir's : )" experiment was that the effect was akin to the way atmospheric pressure will compress a sealed container with a vacuum inside. The expansive force was concentrated on the outside of the plates, forcing them together because some of the force was excluded between them due to the small gap. And really, any expansive force that we can measure at our small scale should have huge implications when applied to the vast emptiness of interstellar and intergalactic space.

29. Can anyone explain to me what they mean by disorder? I can't see how all things go from order to disorder, and the examples in the documentary with books and wine glasses are very naive. The pages are in order from OUR perspective, since numbers have meaning to us, and they become scattered. Particles don't think that way.
Think about the formation of crystals, the rise of synchronicity in nature, in ecosystems, self-assembling materials, life, architecture. These are all examples of nature making orderly structures. Atoms can't know what they look like at our scale; they just react with their environment. Tell me how entropy works on the atomic scale, on the subatomic scale, and then I will listen. Any thoughts? 1. Entropy is a thermodynamic measure... although it can be associated with the term "disorder" that we usually use in the common sense, you've got to be careful, because it's not the same thing... According to the second law of thermodynamics, kinetic energy can be completely converted into thermal energy, but thermal energy can't be completely converted into kinetic energy... with entropy you try to measure the part of the energy in a system that can no longer be converted into kinetic energy in thermodynamic transformations. This only applies to closed systems... without influence from the outside... so in fact the only real closed system is the Universe itself. If you take the Earth, for instance, it's not a closed system... you have the sun (with low entropy) giving energy all the time, so in fact the sun lowers Earth's entropy constantly. So thermodynamic transformations never run out, and equilibrium is never found (eventually, though, it will be). Think of a car... with a full tank... you burn the fuel to get movement, but when you run out of fuel you can't get the movement to turn back into fuel... so entropy is maximum in that system... and the only way to get that car running again is to put more fuel in the tank... so you are influencing the system from the outside... if that car were the entire Universe it would stop forever... so eventually the Universe will run out of fuel, entropy will be maximum and nothing will happen... you'll get total equilibrium in thermodynamic transformations. Why is this related to "disorder"?
According to statistical physics, the disorder of a system can be associated (not directly, but through a logarithm function) with the number of accessible microstates the system can take once the restrictions imposed on it are fulfilled. Practical constraints common to thermodynamic systems usually relate to the value of the internal energy U and the volume V available to the system. So increasing the disorder of a system means increasing the number of microstates (configurations) available to the particles of the system. So think of an ice cube... it has a solid configuration and the particles are arranged in that structure... so to maintain that structure you have to maintain the temperature of the environment or it will fall apart (maintain the internal restrictions)... if you let it be... temperature will fluctuate and dissipate from hot areas to cold areas, so you're breaking those restrictions... trying to find equilibrium... if temperature increases, so does the movement of particles, thus increasing the number of ways those particles can be arranged... with a total meltdown of that ice cube you have maximum entropy for that system... So the state of an isolated system is always the state of maximum entropy given the internal restrictions... if you break those restrictions it will only increase entropy, or eventually maintain it... never lower it... So entropy as discussed only applies to the entire Universe... because every other system is part of it... influencing each other... so entropy varies from area to area... but the overall entropy of the Universe will always increase. Hope that counted for something :) 30. Thanks Kliment, one more. If our bodies could somehow withstand traveling at the speed of light, would we be able to age slower than the population? 1. That's not how it works... you would age the same... but if you went on a voyage at (close to) the speed of light for a few days... years would have passed on the Earth...
so even though you would look the same, everyone else would have aged several decades... but for you, only a few days would have passed nonetheless... time is just slowed down from the perspective of someone not travelling at (close to) the speed of light... 31. Dumb question. If the Earth moved faster, would we live longer? 1. No! For us, time would pass the same. But to somebody moving slower than us, from his perspective, we would live longer, but slower. 32. Good series... fun to watch. I have a question... sorry, it's off topic, but since this is one of the most recent docs I might get an answer from one of the brains :) ... (Imagine this... just bear with me...) Imagine you have a really large road, with a really large vehicle... and you accelerate it to a certain amount... then inside that vehicle you have another one and you accelerate it by the same amount as the previous one... so in fact the second vehicle has double the speed for an observer outside the system... right?! Now imagine you repeat this process over and over... giving exactly the same amount of energy to accelerate the new vehicle that is always inside the previous one... wouldn't that take us to the speed of light without gaining infinite mass? Since it's always a different vehicle with the same amount of energy relative to the previous vehicle... the sum of all velocities at some point should add up to the speed of light for someone sitting outside the system... theoretically of course... I'm probably saying something stupid... but it puzzled me... if anyone could give me an answer... that would be awesome. 1. @Ricardo Rodrigues: No matter how many times you try to jump-start the speed, like using another planet's gravity to gain speed etc., you have to take time dilation into account from the observer's relative perspective, and nothing with rest mass can attain the speed of light; that would require infinite energy.
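The nested-vehicle question above can actually be checked numerically with special relativity's velocity-addition rule, w = (u + v)/(1 + uv/c²): each inner vehicle does add speed, but the composed total only creeps toward c and never reaches it. A quick sketch of my own (not from the documentary), which also covers the aging question in comment 30 via the Lorentz factor:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def add_velocity(u, v):
    """Relativistic composition of two collinear velocities (m/s)."""
    return (u + v) / (1 + u * v / C**2)

def gamma(v):
    """Lorentz factor: how many times slower a clock moving at v ticks."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Stack 20 "vehicles", each moving at 0.5c relative to the one it sits in.
total = 0.0
for _ in range(20):
    total = add_velocity(total, 0.5 * C)

print(total / C)  # creeps extremely close to 1, but...
print(total < C)  # no finite number of nested boosts ever crosses c

# Time dilation for the traveller question: at 0.99c clocks run ~7x slow.
print(gamma(0.99 * C))
```

Note that half of c plus half of c comes out as 0.8c, not c, so the speeds of the nested vehicles never simply "sum up" for the outside observer in the first place.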
Light will always travel at 186,000 miles a second, no matter the source and no matter what sum of velocities it originates from. 2. Thanks Razor... but let me just point this out: I'm talking of different objects being accelerated, not adding speed to the same object, like the gravity pull of another planet as you said... it's a vehicle inside a vehicle inside a vehicle inside a vehicle... get my point? Think of it as a system of wheels... you accelerate a wheel by a certain amount... and inside there is another wheel... rotating along with the first one, but since it's inside the previous one it's stationary from the perspective of an observer inside the first wheel (like a person standing on the Earth: it seems still, but it is rotating with the Earth)... then you move the second wheel with the same amount of energy... then inside you have another wheel and you do the same... and so on... so every time you move a wheel it is stationary from the perspective of that environment... sorry, I can't explain myself very well... it's hard to make a serious point in English :) ... hope I got the message right this time. 3. It is a good question. I had fun trying to figure it out :). This is what I think, although, of course, I might be wrong. As Einstein's equations prove mathematically, you cannot reach the speed of light by mechanical acceleration because that would require infinite energy. You cannot get around this theoretical obstacle by having multiple vehicles inside one another, because every accelerating vehicle has to expend energy to accelerate, and that energy is then imparted onto the vehicle that it is in/on (think of an accelerating car, for example: the wheels transfer energy to the ground or whatever the car is sitting on). Therefore you could say that the energy of the last vehicle in such a system would equal the sum of the energies of all the vehicles.
There is no way to reach an infinite amount of energy, no matter how many times and in what way you add up any real amounts of it. Some member of the system would have to have infinite energy for the sum of all energies to be infinite. You cannot solve this problem by any mechanical means whatsoever. 4. Thank you, Dangis. So if I'm walking on a train wagon... the energy I spend walking is imparted to the train...? And that influences the entire system...? Just want to make something clear here... I'm not saying that the first vehicle should jump-start the second one and so on... I'm saying that the first one is accelerated and stays at that speed constantly, with all the others inside... that's why I talked of wheels... so they all accelerate independently but are still attached to the previous one... I don't know if I'm explaining myself properly... but your answer made sense anyway :) One other view... what would happen if you had a planet or star rotating at 299,792,000 metres per second, and I accelerated a car to 500 metres per second on its surface...? (despite all the obvious impossibilities) Could I do that...? And if so... what would be the overall speed of that car...? Observing from the Earth. And thinking back to a system of vehicles inside vehicles... even if you reduce the mass of every new vehicle, eventually one would have to have no mass... so it doesn't work as well, right? 33. The idea of a two-dimensional reality at the surface of our universal sphere is incredibly significant. Pondering wave-particle duality and the infinite relevance of Pi, you begin to realize that anything (and everything) is possible. 34. The simplest comparison between Newton and quantum physics is in understanding that Newtonian reality assumes that if you have enough information about a particular event (i.e. a baseball leaving a bat: the speed, the trajectory), you can predict the outcome of that event (i.e. the point of impact).
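That baseball example of Newtonian prediction can be sketched in a few lines: given the initial speed and launch angle, classical mechanics pins down the landing point exactly. (My own illustration; I'm ignoring air resistance and launching from ground level for simplicity.)

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg):
    """Horizontal range of a projectile launched from ground level, no drag."""
    theta = math.radians(angle_deg)
    return speed**2 * math.sin(2 * theta) / G

# Same inputs always give the same output: the Newtonian world is deterministic.
print(landing_distance(40.0, 35.0))  # a 40 m/s hit at 35 degrees lands ~153 m away
```

This determinism, identical inputs always producing identical outputs, is exactly the "predestined" picture the comment contrasts with quantum randomness.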
Therefore, theoretically, if you had enough information about all events occurring in the Universe at any given moment, you should be able to predict the outcome of all events simultaneously, and in essence predict the future. In a Newtonian world, all reality is therefore predestined, unfolding in a predictable manner, suggesting all of our fates are predetermined regardless of how we behave. The refreshing aspect of quantum theory is that by simply observing subatomic particles, we can alter their behavior. Heisenberg discovered that you can only precisely detect the position or the momentum of a subatomic particle, not both. Once you pin down one aspect, the other is lost through observation. On the subatomic level, nothing is predetermined; eternal randomness leaves everything to chance. We aren't locked into a Newtonian blueprint of destiny after all. Rather, we are governed by alternative outcomes in every choice we make and therefore are truly the rulers of our own fate. I don't believe that this was mentioned in the "Fabric of the Cosmos" series, but to me it is one of the most relevant aspects of quantum vs. Newtonian theory. 1. OK, the main philosophical difference is that quantum physics gives us the chance of a world that is not predetermined and not governed by destiny. But that applies only to the microscopic world; everything bigger than that, the world around us that we interact with, the actions we make, can be predicted. The microscopic world is a world of possibilities; it is uncertain whether some subatomic action will appear here or there, but in the bigger picture it has no effect on the macroscopic world. Example: if you decide to go and get a cup of tea, on your way to do that many quantum physics events will actually happen, with all their uncertainties and possibilities, in your body, in the cup of tea, in the environment you're going through, but that will not affect the result at all... you're still going to get and drink that cup of tea. Thus we still have no free will...
It's just destiny :) (until we find a place for a "soul" in ourselves :D 2. No one is really sure how the macroscopic and the microscopic worlds really interact, and no one has reached a theory of everything. I believe that there is a theory of everything that we haven't realised yet. We might very well be off the right track, but I hope one day, possibly during my lifetime, there will be a breakthrough in relation to how quantum physics and astrophysics interact... Before that happens, we will all be here expressing our mere opinions, guessing... 35. I like the first episode, talking about the non-emptiness of space: fascinating stuff that I haven't seen in documentaries before. Almost as good as Jim Al-Khalili's stuff! 36. Happy New Year Az, Iz, Razor, Epic, Vlatko, Biblelover, and Pysmythe! They're already setting off fireworks here! Gotta go ---- can't trade "real life" for this box all the time. Charles B. 37. If enough scientists of this world want a multiverse, they will get a multiverse, and when they get that, they will still be looking for the container of it all. Maybe we live in a glass bowl, a sort of crystal ball filled with bubbling black champagne. It's encouraging to see that scientists can be as crazy as they wish, and that allows us to do the same. Will 2012 be exciting, existing or exiting... we shall live to see! Happy New Year to all members of TDF and a special toast to Vlatko, Epicurus and Achems_Razor, and to Brian Cox, who is quite possibly the cutest of them all on a night with the stars. 1. You too, Az! I hope you cheer up. You seem to be a little down lately. Hang in there and stiffen that upper lip. lol 2. Life's beating me... or is it the other way around. I am at the airport in Calgary, a long wait for a flight... thanks to TDF, time will fly by. 3. @AZ: Calgary, no way... you live here or just visiting? 4. I spent just a few hours at the airport on my way to Quebec. 5. Az, Happy New Year to you also! No booze for me though; booze makes me happy. 6.
That's one thing I always took in moderation, with very, very few exceptions. Now that I am off green leaves... 38. And wow, thanks for the tip on the Brian Cox vid... fantastic. Look up Symphony of Science; we are all connected. 39. Disregard, dmxi - I found tons of info already - Wikipedia has a nice article on the multiverse. Guess I was just disappointed that the effect of multiverse gravity wasn't mentioned in this doc, unless I missed it... seems to me that over time it could have been a catalyst for the big bang. I should stick with what we know, but questions lead to more knowledge no matter how silly they seem. 1. I have watched the first 2 parts of it and also can't help but notice no mention of string theory. And where is Mr. Michio Kaku? Not that he has made too few documentaries :) 40. @dmxi - please post a link - I've been wondering about this for quite some time. 41. Could dark energy simply be gravity from other universes (multiverse) pulling ours outwardly, rather than a force pushing from within ours? 1. That theory is being considered, and I would love that notion to be true, but I do not dare dream of its possibility! Wish I could give a link, but I don't have one at hand at the moment. 42. Only 2:00 (minutes) into this, but I suddenly wonder about the entire notion that everything rests inside Space; as if Space itself were like a box and Matter is something that is contained within it. What about the idea that the Matter IS the Space? I guess I should finish the entire documentary first, but so far it's got me thinking. 43. I think if no religion had ever spoiled the minds of people through wars and division, we would all be spiritual scientists searching for the meaning of life. 1.
Dear Az, that is where evolution will without a doubt lead us. The climax of our journey will be mind over matter, and this is not a religious belief nor statement, but just the unavoidable transition of entropy into its purest form. Spirituality is a surrogate for perfectly vibrating with one's surroundings (consciousness not being measurable), and misused terminology misdemeanors scientific scrutiny will be ensured! So long as we survive ourselves, of course! 2. Sounds a little bit like some of the latter parts of 'Childhood's End'. 3. Had to google 'Childhood's End' and was surprised to find an A.C. Clarke novel. I possess a couple of Clarke's TV-related books, but as I found out, this seems to be his own favourite work. Must give it a go. Thanks for the hint. 4. It's a GREAT book, even though it's, what, 50 years old now? One of the very best sci-fi books ever, for sure. 5. I am reading a book suggested by @Pysmythe, A Beautiful Mind (I have seen the movie and the documentary on Nash). I am at page 100 or so... I am amazed to read in such intricacy how the brightest minds of those days were mainly used for military advancement. The comrades came up with a game named "So long sucker_F*** your Buddy", somewhat appropriate for the environment they were creating in order to pay for the pursuit of their personal dreams of accomplishment and recognition above one another... It is without question that science (or at least a large part of it) is still used to pay for and advance military power. The computers we write on are here because the military needed a better way to connect... and then the digital world expanded to expand their control over all of us. 6. 100 pages already?! You don't mess around, do you?
edit- In addition to changing ourselves, all those extremely competitive people had better get a lot busier than they are figuring out how we can live within the planet's means, instead of figuring out how to destroy it, or all this great science is just going to be an echo in the cosmic void. edit-edit- I know you know this... but we DO need to keep preaching it as much as possible! 7. Yep... when I like something I immerse. Cosmic void taken as comic void by the ones holding the strings. 8. Not sure if preaching works... but exemplifying does... at least for the one doing it. 9. I like preaching! People say I do a good job at it (I just need to always follow my own sometimes)! ;-) P.S. Az, I changed my avatar just for you so you can see my real face, as you mentioned once in one of your posts. Here I am with my little buddy (age 2 at the time)... and a butterfly, of course! Gotta keep my icon of rebirth and new life in there! ;-) 10. Preaching is talking to others about what we think they should think... when I talk (of my beliefs) I don't care if people agree or not; I just talk so I can hear myself and see if I still believe what I am saying... sometimes I don't, and most times I do. I really like your new avatar... I wish I could make it bigger and see your eyes and his eyes. 11. Az, you do know you can hold down Ctrl and tap the + key to make it bigger, right? The image gets a bit fuzzy the larger it gets, but it might work well enough for you. (Don't recall if I ever mentioned this before.) edit- Just don't do it with mine, please, thank you very much! 12. You don't know how good it feels to smile at your message right now! I was in need of a diversion... 13. Think of religion as fertiliser; then it all makes sense. 44. I can predict the weather for the next 10 years = artificially cloudy. 45. re: plates with an extremely narrow gap colliding... dark matter, the venturi effect. re: acceleration of universal expansion...
electromagnetism trumps gravitational attraction. re: red shifts... everything loses energy over time, one of our basic physical laws. Light, however, seems forbidden to decelerate. An object loses energy by decelerating, or by increasing its period of vibration. A red shift is an increase in the period of vibration of light. re: time, perception, and the unbreaking of wine glasses: if, as proposed by the learned folks featured in this segment, time is an entropic motion, then the perception of past, now, future is immutable and inviolable. Our future is in a less entropic state until we enter, and disturb, it, thus decreasing its "orderliness"... our past, having been disturbed both by our entry and exit, has decayed to a more entropic state than our now, and our entering it can only disturb it again, thus increasing entropy rather than the decrease that would be required to re-perceive the moment. re: string theory, the multiverse, and 10 to the 500th: the huge number of imperceptible potential variables of state for the 10-dimensional strings required by the theory can only be resolved by one of three conclusions: the multiverse; or, got the concept, but the math is wayyy off; or, though there are 10 to the 500th potential variables, a far smaller number are actually viable particles (if a variant calls for one or more simultaneous opposite or contradictory actions, then it could be potentially possible, but extremely unlikely, just as a single example). The only way to really find out, I guess, is to sift through a few millennia of equations and see which set actually resolves itself to our "reality", if any. Might be quicker to try and rework the initial variables and find a theory that works within the framework of our 4-dimensional plane, which would pare down the list of potential variables quite markedly. (note: edited throughout viewing as I finished episodes, to allow for my short-term memory) 46. Check out Prof Brian Cox: A Night With the Stars, BBC 2.
Very nice :) 47. Episode 4 – The Origin of the Universe and everything in it. There used to be a fundamental belief that the universe was created in a Big Bang event 14 billion years ago. The reasoning for the Big Bang Theory came up when it was observed that distant galaxies had an increasing red shift value. This observation was then interpreted as the galaxies moving apart at a faster and faster rate the farther away they were from Earth. The natural conclusion from this observation was that, if the arrow of time were reversed, all of the galaxies must have started from a single point in Space-Time. The time estimate for this event was 14 billion years ago. The Earth is 4.5 billion years old. The only problem with the Big Bang Theory is that the original galaxy red shift has been misinterpreted and misunderstood. Astronomical observations of nearby galaxies have shown that they may contain 2 or more red shift values. This information has conveniently been ignored so that the Big Bang Theory and the constantly expanding universe theory hold up. The problem is that the Big Bang Theory is incorrect and is NOT the origin of our universe. Problems with the Big Bang Theory: It has been suggested from the red shift evidence that the farther away a galaxy is from Earth, the faster it is moving away in the expanding universe. This universal expansion will never end. So what happens when galaxy expansion reaches the speed of light? The expansion of the universe theory breaks down right here. The expansion of the universe theory, however, broke down as soon as multiple red shifts were found coming from the same galaxy. This meant that there was another explanation for the red shift. Our universe is filled with hydrogen gas and other dust particles. The longer that light from a distant galaxy travels through space, the more likely that the smaller wavelengths of light will be reflected away. This is called the Red Shift Sunset Theory.
Over a certain distance of interstellar space, only the longer wavelengths of red light will be able to make it through without being reflected away. The more distant the galaxy, the more red shifted the light, meaning that only the longest wavelengths of red light can make it through. In the case of a galaxy showing multiple red shifts, this is an indicator of the various levels of dust and gas that the light has had to travel through. The more red shifted the light, the more dust and gas the light has had to travel through. The best example to see this in real life is to 1) View the sky on a sunny day at noon. It is blue. 2) View a sunny sky at sunset. The sun's rays are now shifted to the orange-red because they have to pass through more atmosphere. The sky is NOT red due to the Earth accelerating away from the sun. Therefore the universe is not expanding with regard to a Big Bang explosion. Therefore our universe did not originate in a Big Bang explosion. So where did all the matter in the universe come from? The matter in our universe comes from the basic function of Space-Time. The function of Space-Time is to provide a location for the creation of matter from pure energy. Space-Time is dynamically active EVERYWHERE with the creation and destruction of matter at the quantum level. This is a proven fact of quantum mechanics and Space-Time. Matter is simply another form of bundled energy created at the quantum level. Albert Einstein's equation E = mc² gives us a clue, where E = energy, m = mass, c = speed of light. At any given time, matter is being created and destroyed at the quantum level in Space-Time. More matter is being created at the quantum level than is being destroyed at any given time. The proof of this is our planet in our solar system, powered by our sun that is part of our galaxy that is part of our local group of galaxies. 1. Matter is created out of empty space at the quantum level of Space-Time.
Particles are created that become electrons and protons that then combine to form hydrogen, the basic element of the universe. 2. Hydrogen atoms accumulate through electrostatic forces into hydrogen clouds. 3. The mass of the hydrogen clouds then warps space, resulting in a hydrogen gas ball that, under the extreme force of gravity or another compressive outside force, then begins a process of nuclear fusion, becoming a sun. 4. The life of the sun then produces the other atoms of the periodic table. 5. The universe can NEVER run out of hydrogen atoms to create hydrogen gas clouds to enable star creation, because the very nature of Space-Time is to create particles from pure energy at the quantum level: quantum energy particles with a mass dependent on the energy input at the quantum level. The process of particle creation is occurring everywhere in Space-Time. Electrostatic forces and gravitation determine the location of galaxies and galaxy groups throughout our universe. It is therefore not surprising that there is a web of interconnectedness between galaxies and galaxy groups throughout the universe. The very nature of Space-Time, creating particles at the quantum level from pure energy, explains the existence of EVERYTHING we see in our universe without the need for a Big Bang Theory. 1. @Arnie: Don't really want to look stuff up, so I will attempt to answer off the top of my head. The expansion from inflation already exceeded the speed of light; it is space itself that is expanding, dragging the galaxies with it, similar to blowing up a balloon. Space is very, very diffusely filled with hydrogen gas, dust and other stuff, but is still considered a vacuum, almost devoid of particles. You can't class your sunlight scenario on Earth as anything remotely similar to the red shift from galaxies in space. We live under an atmosphere that presses on our bodies with 14.7 lbs per square inch. You don't have a clue as to how red shift is measured from the galaxies in space.
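For what it's worth, the red shift mentioned above is measured by comparing the observed wavelength of a known spectral line against its laboratory value: z = (λ_observed − λ_rest)/λ_rest. A toy illustration (the galaxy numbers here are made up by me for the example; only the hydrogen-alpha rest wavelength is a real lab value):

```python
H_ALPHA_REST = 656.281  # hydrogen-alpha spectral line in the lab, nanometres

def redshift(observed_nm, rest_nm=H_ALPHA_REST):
    """Red shift z computed from an observed vs. rest-frame wavelength."""
    return (observed_nm - rest_nm) / rest_nm

# If a galaxy's hydrogen-alpha line shows up at 720 nm instead of ~656 nm,
# its red shift is z ~ 0.097, i.e. wavelengths stretched by roughly 10%.
print(redshift(720.0))
```

Because the shift is read off as a fractional stretch of identifiable spectral lines rather than a blanket reddening, it is distinguishable from simple absorption or scattering by dust, which dims and reddens light without moving the lines.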
"Matter is being created and destroyed at the quantum level all the time"?? Tell us how this is happening (references)!! The rest of what you are saying is something you had better cite some references for, or otherwise it is stuff you seem to be making up! 2. All true. But the human mind thinks in point source to point source. In other words... every explanation requires a point source as a beginning and a point source as an end. So, the idea that we came from a central point and will inevitably go back to a central point is the only explanation the mind can handle. 48. Episode 2 - Critical comment on the concept that the world around us moves from an organized state to a more disorganized state. This is the second law of thermodynamics, and it is fundamentally wrong. The examples shown in episode 2 are of pages of a book being scattered, an egg breaking and so forth. However, the exact opposite is true in our universe. Our universe moves continuously from a disorganized state to a more organized state. How can I say this? Because life would not exist on the planet Earth, in our solar system, in our galaxy, in our local group of galaxies if that law of thermodynamics were true. Meaning it would be impossible for life to exist on the planet Earth, orbiting our sun, within our solar system, within our galaxy, within our local group of galaxies, if the true fundamental law of the universe were not to move from a disorganized state to a more organized state. Real-life observations of the universe moving from a disorganized state to a more organized state: 1. Space-Time transforms energy into particles with mass. 2. These particles become hydrogen atoms. 3. Hydrogen atoms condense into hydrogen clouds. 4. Hydrogen clouds become suns. 5. The fusion process in a sun creates the other elements of the periodic table. 6. A sun goes supernova, and in the process a new sun and planets are formed. 7. The newly created elements allow for the creation of life on a planet. 8.
That life evolves to become more and more complicated. 9. That life transforms the elements around it into more complex structures and tools. 10. I am a human being, the most advanced animal on the planet Earth, using a laptop computer to watch a program created by other human beings, stored on a hard drive and sent across the internet, the most advanced communications network on the planet Earth. Conclusion: The fundamental law of the universe is to move from a disorganized state to a more organized state. One only has to look around and realize that this is the case. The universe DOES NOT, as a fundamental rule, move from an organized state to a disorganized state. If it did, hydrogen atoms would not be formed in Space-Time, hydrogen gas clouds would not be formed, suns would not be formed, other elements would not be formed, planets would not be formed, life could not exist, and this program would NEVER have been created. Therefore, based on the evidence that surrounds all of us in everyday life, the fundamental law of the universe is to move from a disorganized state to an organized state. 1. Others better equipped to do so will be along shortly to explain how you've misapprehended the laws of thermodynamics. Me? I'm just one of the local boys, lol. 2. While you have somewhat correctly elaborated the origin of life on Earth, you fail to disprove the second law of thermodynamics. While we may move to a more philosophically "organized" state, the entropy of the universe has always been increasing. Entropy is not exactly order to disorder, but rather an orderly state expanding from a tight mass to a gaseous, sparse plain. The examples used in the documentary are only metaphors. 3. I think you're confused. When things get more complicated, they do not get more orderly, but rather more chaotic.
Chaos, however, is not fundamentally bad, as it paves the way for new, more complicated orders: as you explained with the formation of the Universe, our Sun, the Earth, evolution, all of these have changed throughout time into increasingly more complicated formations. However, I do not see how any of this is progressing towards more perfect order. The opposite is probably more true, as the universe prior to the big bang, in its simplicity, was the order. However, simplistic order that is unchanging is disadvantageous; disorder, in contrast, and the evolution of events, creates the illusion of time as progressive... Therefore disorder creates new order, which then through time becomes disorder for a new order. Hope that helps. 4. Kind of like my sphere-shaped multi-universe idea that I created four hours ago. I liked your conclusion. I've never done any science classes before, but I find the subject like an endless debate filled with unlimited questions, a journey of thought! And I was wondering: who would I send a copy of my book of 100 new inventions to when it's finished? I'm at 42 at the minute, from designs to unknown new inventions. 5. I agree with you. I don't want to sound like Sherlock Holmes or anything, but this seems so obvious... it's elementary. lol 6. How do you define "organised"? Or disorganised, for that matter? Whether something is organised or not is simply a self-projected image of the mind. Without words and thought and human perception it's all just action; it just "is"... 7. No, the second law of thermodynamics is correct as per the present understanding of physical laws. Secondly, the universe moves from lower entropy to higher entropy. This is one of the key reasons why we cannot create a perpetual motion machine, or why we can't harness energy with exact conversion, among many other examples. The march towards higher entropy is also the reason for emergent properties.
The examples of the hydrogen clouds to suns are accurate, but a scientific treatment and understanding would tell you that this, and all the other examples you mentioned, really do support the march of the second law of increasing entropy. One of the major research questions still being investigated is why the universe started in a state of extremely low entropy compared to the entropy we observe today. This is a valid question, but in no way does it question the onward march of entropy. Also, our perception biases us towards a more particle nature of our reality, but for a deeper understanding we must look at the field nature of the universe and reality.

49. @Epicurus...will certainly put his mark here. Be back tomorrow.

1. To be honest, when Waldo is around, I'm not needed. lol

2. No one can replace you....but I agree, Waldo is a master.

3. No, no, no... No master, trust me. You guys are making me blush here. Seriously though, I am just relaying what other men figured out, not me. The key to understanding it, I mean really developing a feel for it, is mathematics. I don't mean memorizing the equations or learning how to correctly calculate them; that is important of course, but it doesn't explain what the equation means to the real world we live in. You have to study the mechanics of the equation, play with the variables and see what results you can get. Still, I have a long way to go, trust me.

4. You are always needed Epi, everyone has their own unique way of explaining things, their own unique insight into the standard equations and laws that regulate our world. I am a far cry from an expert, I am just relaying what my professors have taught me really. Thanks for the compliment though, it means a lot coming from you.

50. @waldo wow, that's a lot of detail on entropy & thanks, but that wasn't really what i was getting at. simply put, whether the bias entropy places on how things 'tend' to go is all there is to the matter of time directionality.
nature [as we experience it] occurs forward; the processes only have a phenomenological coherence forward. the prospect of a ball lifting off the ground, flying into a bat i'm unswinging backward, which then accelerates into the pitcher's hand, is beyond merely 'improbable'. such a prospect only has relevance in our universe as the inverse of the prospect we know. you seem to be saying a new alternative universe is spinning off of every permutation not taken in the quantum space. which seems, if i understand the concept, a rather profligate use of universes. the past is not, as we experience it, perhaps as indefinite as the theory of quantum physics would imply, according to your remarks. if i have eggs for breakfast, that observation will be the same no matter who reliably observes the fact. at any rate, for a cosmos with so many potential careening alternatives, ours seems to be tolerably consistent and isolate, however seemingly arbitrary.

1. @RileyRampant: Yes, others can observe you having eggs now, but what of one year from now? Unless you took a picture with the date on it, just by thinking back a year to what you had for breakfast, probabilities come into play for the (unobserved) past that effectively changes the past, and that changes the future (from Hawking's "Grand Design"). The past is not real; can a person grab hold of it in his hands? No, only fleeting or not-so-fleeting memories that by themselves may not hold all the nuances of what you think happened in the past; things may be forgotten or even added. And not new universes forming in the present, but from the unlimited probability field that forms new universes for you every Planck second from all the choices that are offered, should I do this or that, this direction or that direction, etc., that you yourself, taking into consideration other interactions, form your own reality from.
And keep in mind the choices you did not take that were offered are still just as real and viable; they exist in alternate realities, from your picture book of snapshots, re: "Julian Barbour" and his "End of Time" theory.

2. I see what you wrote....the only present that is part of that comment is in the spaces between the words. Made me think...there is a way to think that the past and the future are the only times to exist in the mind. The present may be like a Higgs...we can never grasp it, sandwiched between and pushed flat like by those two plates. I have felt the present but I can't explain it...all I can say is, it was while holes into black holes. Weird? Ok...I don't mind that.

51. interesting that multiverses provide theoretical support to: constant inflation; the seemingly 'random' appearance of string theory's dimensional contours; the seemingly 'random' appearance of the dark energy value. in the sense of showing, at least, that these 'arbitrary' values might be understood as instances distributed amongst other universes. although i didn't catch the basis for asserting that such quantities WOULD distribute randomly, merely that the incidence of multiverses would provide an OPPORTUNITY under which such a variance might occur. the treatment of time directionality as a mere consequence of entropy seemed incomplete. there are causal relations linking before & after which this treatment seems to neglect, unless we consider that the entropy discussion is a short-hand or abstraction for these relations. most of this stuff is over my head, too. the fabric of the cosmos is wilder & woolier than most would, could, or would perhaps even choose to imagine.

1. @RileyRampant: Time directionality? There is no time directionality; that is, according to Einstein's theory of special relativity, all of spacetime is present at once. And entropy, which entropy? Which reality? Which universe? The one that we are apparently in?
The one that we are in is flipped every Planck second by our nows, new universes, our nows that are instantly transposed into our past giving us our flow of time; our past is not even real. "The universe, according to quantum physics, has no single past, or history. The fact that the past takes no definite form means that observations you make on a system in the present affect its past." (Stephen Hawking) "Richard Feynman" in his "sum over histories" demonstrated that subatomic particles traverse infinite paths through spacetime, implicating infinite histories for any particle, which of course means many worlds, multiverses. In Brian Greene's "Elegant Universe" book... there are quilted...inflationary...Brane...Cyclic...Landscape...Quantum...Holographic...Simulated...Ultimate. (Multiverses)

2. I agree with the idea wholeheartedly that the past has a symmetry with the future, as in there are multiple pasts and futures. It makes an interesting case in religion as well: from the standpoint that most religions involve time-travelling beings, all religions can simultaneously be right and wrong.

3. You got it exactly, entropy is the essence of those causal relationships, unless humans intervene that is. You see, I can push an uphill or nonspontaneous reaction to completion and actually reduce the amount of entropy instead of increasing it. Check out the Gibbs free energy equation and you will see what I mean. The catch is that after that reaction occurs the products will eventually spontaneously decay into the lowest energy configuration, or zero point energy. This decay from a higher to a lower energy configuration is entropy and causes us to see time as moving in one direction; it also is what drives spontaneous reactions that happen in nature. Methane igniting is a great example: all one need do is provide enough energy to reach what's called the activated complex and the reaction will then take off.
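The Gibbs free energy check mentioned above can be sketched with rounded textbook standard-state values for methane combustion (illustrative numbers, not from the thread):

```python
# Standard enthalpies of formation (kJ/mol) and entropies (J/(mol*K)) at 298 K,
# rounded textbook values, for: CH4 + 2 O2 -> CO2 + 2 H2O(l)
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O": -285.8}
S   = {"CH4": 186.3, "O2": 205.2, "CO2": 213.8, "H2O": 70.0}

T = 298.15  # temperature, K

dH = dHf["CO2"] + 2 * dHf["H2O"] - (dHf["CH4"] + 2 * dHf["O2"])   # reaction enthalpy, kJ/mol
dS = (S["CO2"] + 2 * S["H2O"] - (S["CH4"] + 2 * S["O2"])) / 1000  # reaction entropy, kJ/(mol*K)
dG = dH - T * dS                                                  # Gibbs free energy, kJ/mol

print(dH, dG)  # dH ~ -890 kJ/mol, dG ~ -818 kJ/mol
```

Even though the reaction's own entropy change is negative here (liquid water, fewer gas molecules), the large negative enthalpy keeps ΔG negative, so the reaction is spontaneous once it gets past the activation barrier, and the heat it dumps into the surroundings raises the universe's total entropy.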
Methane combines with oxygen to produce carbon dioxide and water, plus about 890 kilojoules (roughly 213 kcal) of energy per mole of methane. Now we have turned one compound, which stored a great amount of potential energy, into two that have simpler structures and store much less potential energy that can be easily accessed; we have increased the amount of entropy in the universe. This is why you never see water and CO2 spontaneously reacting to become methane: this would break the second law of thermodynamics, as the atoms would be moving to a more ordered, higher energy state. The same goes for why you never see a bunch of bricks spontaneously assemble into a wall, unless some energy or force intervenes, like a brick layer. Now, what really freaks us out is that quantum mechanics doesn't seem to follow these rules. Instead it follows the rules of probability: it is improbable that the bricks will self-assemble in any reasonable amount of time, but if we could wait for billions and billions of years, the probability works out that it could happen. Feynman gave us an equation to work out that probability and it makes excellent predictions. Due to the nature of the equation (the last operation is to divide by Planck's constant), the larger the object and area you are dealing with, the longer you have to wait for it to exhibit quantum characteristics. If it is small, say an electron, it will continuously pop in and out of existence, here then way over there. If it is, say, the size of a baseball, it takes billions and billions of years to do the seemingly impossible, which is why we never see it. I recommended a lecture to Jack below, it explains all of this in great detail, you should check it out.

52. Something I don't get: if empty space causes the two square metal plates to touch each other, how can it be the principal actor causing the expansion of the universe? Isn't it supposed to contract instead? Is there anything I'm missing here?

1. It's not the prime motivator of expansion, dark energy is.
Vacuum energy, which is the energy that exists in all empty space, is what pushes the plates together. These are two different things. Unfortunately they don't know much about dark energy other than that it must exist, or gravity would have stopped/slowed expansion by now. Instead expansion is speeding up, with no apparent cause.

53. Brilliant documentary!

1. OI wife, where you been!?

2. Been here all the time, but quiet :)

54. The documentary is partial regarding the Higgs boson particle...not a single word about S. Bose, who was also responsible for this particle...

55. That's bad math, Jack.

56. But how many smithereens make up a single quark?

57. Watched the first hour. Honestly?... If you've seen a number of these kinds of docs before, you could skip the first part of this as a rehash of background material you're already bound to be only too familiar with: Billiards...Space as a taut fabric with a feckless bowling ball rounding out an aesthetic butt cheek...Little satellites of cue ball paparazzi... Boring, right? Pretty much the same old illustrations and explanations seen and heard a hundred times, because there really isn't any better way to lay it all out for the People of the Word, which is the vast majority of us. But precisely at minute 25 it started to get mighty interesting. I'm glad I stuck with it, because from that point on, even though most of the material was more or less familiar to me, they actually managed to approach it in ways I don't, for the most part, ever remember seeing before. Clear and brief, too, which I really appreciated - not overly complicated with all those tangential metaphors, which is a real risk when programs like this get rolling. The explanation for the Higgs was the best I've ever seen, for example, and I realize now I had pretty much completely the wrong idea about how it must go about doing what it does. The info here at least gave me enough to take a better swipe at it, since it's impossible for me to get a grip on it.
But most especially, the section on the possibilities with information stored on the "surfaces" of Black Holes, and the extension of that concept to the entire Universe, was mind-bending and inspiring. Obviously, I'm not a scientist, but I got the impression watching this that, somehow, these deeply mysterious objects may be the real key to everything, particularly insofar as the future is concerned, and not just because of the fact they will be the last things left in the Universe, quadrillions of years from now. I got the impression, to be blunt about it, that they may be involved ultimately in keeping things going in cycles, perhaps elsewhere for the time being, and perhaps "here" in the very far future. They save information, nothing of it is ever lost, the entire Universe will eventually be swallowed up by them, so where does this information end up? Is it possible it can be "assembled" somewhere as a new Big Bang? As more than one? Are they the Trashcans of the Gods, providing ways and materials for recyclable Cosmos, or just really long-lasting vacuum cleaners that the plug is finally, irrevocably gonna be pulled on one day? Anyway, whatever may be the case with all that, after the typical start, this one turns out remarkably focused and succinct, and didn't leave me feeling nearly as muddleheaded as docs of this sort are prone to. Moving on to the second hour. 1. @P: The doc was good but the book of course is many times better, goes into much, much, more detail. 2. I didn't think black holes would be the last thing in the universe, I saw a documentary that talked about all the stars burning out and leaving these balls of carbon that radiate left over heat until finally even that energy decays due to entropy and the universe goes to absolute zero, dead. 
Then I saw another one that said no, it will all rip to pieces because of expansion driven by dark energy, and another that says no, expansion will stop and gravity will win as the universe is crushed into a singularity, only to expand in a big bang again. But I missed the theory that black holes would swallow it all eventually; not saying it doesn't exist, just that I missed it. Could you lay out the logic in more detail or offer a link please, I am curious. I know of at least one paper that would disagree with that theory; you can find it at arxiv dot org, which is the Cornell University library online. The paper is titled Dark Matter Accretion into Supermassive Black Holes and is specifically at arXiv. org > astro-ph > arXiv: 0802. 2041v1 (no spaces of course). It explains that less than ten percent of the accretion disk of supermassive black holes can consist of dark matter; in other words, black holes would have a very hard time swallowing it all, remember there is more of it than regular matter by far. The way they work this out is by measuring the x-ray emissions and comparing that to the amount of matter being consumed; the x-rays should account for ten percent of the mass crossing the event horizon if it is all regular matter, and that's exactly what they find. Anyway, it's a good paper, I think you would enjoy reading it; merry xmas by the way.

3. I believe it's in one of the docs here that says the last thing to be going on in the Universe before the absolute heat death will be black holes emitting Hawking Radiation over trillions of years (barring the Big Rip thing, of course). Can't remember the title, but it's the one where the narrator is literally walking up a flight of steps one at a time while he delineates the different stages in the life-cycle of the Universe. Long, long after all those dead, regular stars finally wink out for good will just be those gravity wells syphoning off HR for eon after eon, until finally they, too, are gone.
Fact is, now that I think about it, I'm sure I misremembered about black holes swallowing up everything, and was actually thinking about the part in the cycle in which you wouldn't even be able to see other stars in the sky because of the distances separating them, and of the inability of new stars to form because of entropy. But this is really to say, when I think about everything in the outside Universe being finally dead and finally cold, there's a part of me that wants to, I suppose, anthropomorphize the nature of black holes, making them into collective gods who have, in some way as yet unforeseen, somehow saved enough of the contents and information of the Universe to allow the Story to begin again, here later, or someplace else now. Do you remember that part about the 2 dimensional information on the surface of the black hole being capable of being rendered 3 dimensionally inside of it? And of our Universe right now potentially being the same thing, writ (or projected) large? Man, what does that really mean? For all of this talk about their ability to suck up nearly everything, including light, doesn't there seem to be a strange Looking Glass quality about them? I just have this sense that there's a whole Universe on the other side of one, and that we may, in essence, be in one right now, but I guess it's just a fantasy born out of a desire to cheat death, which is probably the oldest human story there is, right? I'll read that paper, but I sure don't expect to understand very much of it, lol. A few months ago I went on a Mersini-Houghton kick (she is SO sexy!) and pulled up four of her papers online. Of the four, I was able to understand anything at all of precisely one of them. Of course, never having gotten any farther than algebra and geometry, I certainly didn't expect to, but it was fun to try it anyway. I was giggling the whole time... I suppose I just kept hoping a breast would pop out somewhere. Never did English sound so much like Latin! 
If I thought it would do any good, I would try and advance more in mathematics, even at my age. In fact, about ten years ago I did try it, but just confirmed that I'm simply not wired for that kind of thinking. If the same information could be compressed into some type of formal musical structure, I'd probably have a real shot at understanding it a lot better. Hope you had a good xmas, too, Wald0.

58. This was awesome, if you have the time ;)

59. The Higgs field. The field that allows subatomic particles to gain mass. The Large Hadron Collider has been designed specifically to smash subatomic particles together at near the speed of light in order to find a Higgs particle / Higgs field that produces gravity. To be honest I think these quantum physicists need to rethink their approach. If the Higgs field is responsible for subatomic particles obtaining gravity, what on Earth makes them think that they can find the Higgs field by smashing subatomic particles together? It comes across as being completely silly. I'm sorry. However the rest of the documentary, looking at space as something tangible, is really quite smart and explains the creation of our universe without using the Big Bang Theory, which is fundamentally wrong. What gives particles their mass? Einstein's equation E = mcc provides the answer: m = E/cc. It has to do with the energy imparted to a quantum particle in space that then determines its mass value. This is done at the quantum level. There is no Higgs field. There is no god particle. And there is no field that assigns mass to a particle. What there is at the quantum level is a certain amount of energy that is applied to quantum particles / bundles of quantum energy that then transforms them into subatomic particles with a known mass. Smashing subatomic particles together in the Large Hadron Collider can never reveal the Higgs field / Higgs particle because it does not exist. Quantum physicists need to rethink the whole process of quantum particle creation.
When this concept is expanded out, it explains the fundamental creation of the entire universe from empty space without a Big Bang. Matter is created out of empty space. Electrons and neutrons are created that then combine to form hydrogen, the basic element of the universe. Hydrogen atoms accumulate through electrostatic forces into hydrogen clouds. The mass of the hydrogen clouds then warps space, resulting in a hydrogen gas ball that then begins a process of nuclear fusion, becoming a sun. The universe can NEVER run out of hydrogen atoms to create hydrogen gas clouds to enable star creation, because the very nature of space is to create quantum energy and quantum energy particles. Quantum energy particles with a mass dependent on the energy input at the quantum level.

1. Please cite all your papers in the literature on all this amazing theory. ....that is what I thought. The internet is available to anyone with a keyboard and a connection. BTW, the sun does not produce 'the other' heavier atoms of the periodic table other than helium; heavier atoms require supernovae.

2. I agree with you on all counts. I'm not a scientist, or even an amateur scientist - I'm just a hardcore science enthusiast, but I have come to the same conclusion myself. I think you put it better than I would though. I just wish I had gone down the science path in Uni instead of the arts. I didn't realise until it was too late what I was missing out on. Listen up kids! Become scientists or you'll regret it! The more you learn the cooler it gets!

3. E=mcc is not Einstein's equation. It is E=mc2. A huge difference. Higgs particles interact with other particles. By smashing particles together they hope to create the Higgs particle, which would be detectable, which would in turn provide evidence of the Higgs field. Scientists believe they were close, at the very limits of its capabilities, before the collider at CERN was shut down in 2000 in order for expansion.
There has been indirect evidence of the Higgs particle that has been encouraging. Also, the theory explains all facets of the Standard Model, which other theories lack. Evidence is there that these particles exist and science must go where the evidence leads. It would be silly not to. People, like myself, have a deep interest in science. However, my knowledge is limited since I do not have the education in those fields. I must trust those who have worked on these projects for years...the ones who have devoted their entire lives to the study of and the experimentation in particle physics. It would seem unlikely that scientists of different nationalities, political systems and research institutions would all agree that the Higgs particle exists and can be found by colliding sub-atomic particles. I would never assume to know better than these individuals unless I had unimpeachable sources that contradict them. I happen to notice that you do not have any such sources.

4. Actually E=M(CC) would be the same thing as E=MC2, he just forgot the parentheses. Also they are not hoping to create the Higgs particle but to knock it loose, so to speak. The physics says it is possible to dislodge a piece of the Higgs field; think of firing a bullet into water, you may knock one molecule of water loose from the rest when the bullet hits it. All you have to do is apply a force in the right vector orientation and strong enough to overcome the attractive forces that hold water molecules together. At least this is how it was explained to me by my physics professor; I don't mind admitting though that the work they are doing at the LHC is way over my head. I can hold my own when it comes to Newtonian or relativistic physics, but I never got far enough to really delve into quantum mechanics with much detail. Being a chemist, however, I have studied extensively the quantum nature of atoms and the first-level subatomic critters: electrons, protons, neutrons (electrons mostly).
It is the number of protons, neutrons, and electrons each element has that gives it its own individual characteristics and places it in a family or group. It is how electrons interact with other electrons that decides how compounds form and what structure and characteristics they will have, well that and electrical charge in general. But that stuff doesn't tell me much about things like the Higgs field or quarks, the really tiny, odd stuff. There is a documentary out right now, well it isn't really a documentary, it is Brian Cox doing a presentation on quantum mechanics at the Royal Society lecture hall in London. He does such a fabulous job of explaining both the Pauli exclusion principle and Heisenberg's uncertainty principle, along with the wave nature of electrons, probability, etc. It's got to be one of the best lectures I have seen. You should check it out, everyone should in fact; it is called Professor Brian Cox: A Night With the Stars. It's not only educational, it's hilarious at some points; a lot of comedians and actors are there helping him with his demonstrations. Merry xmas, hope you enjoy it.

5. Now that you mention it, I believe that is how it was explained in the documentary, also. I had been reading on a Fermilab site and the word "created" was used in the explanation. I have been trying to get a handle on this stuff after watching the documentary about CERN that was posted recently on this site. It is still way above my head but it is slowly becoming a little clearer. These documentaries, which prompt me to investigate further through reading material, and the insights that I get from people like yourself have been a great help. I found the presentation by Brian Cox that you recommended and thoroughly enjoyed it. Thank you very much and a Happy Holiday Season to you too.

6. jack, multiplication works no matter the order of multiplication... MCC is equal to M(CC), or C(MC), or C(CM)... try it with simple numbers and see...
if C equals 5, and M equals 8, M(CC) = 200... C(MC) or C(CM) both also resolve to an answer of 200....thus, E=MCC is indeed a valid mode of scribing Einstein's equation, even if not altogether "correct" from an accepted notation view (one might also note that because of the shortcomings of our English keyboards, the typed version of the equation so often seen (E=MC2) is absolutely INCORRECT, as the equation reads "ee equals em cee squared", and NOT "ee equals em cee doubled", which is what the "keyboard version" would imply).... i figured actually explaining why is a bit better than just "bad math, jack".... lol

7. You're right. All I saw was the notation and automatically assumed it to be wrong.

8. That's called the commutative property of multiplication. The reason I still use the parentheses, and why it is considered correct notation, is because once you introduce negative and positive integers it can get very confusing without them. That never happens in this particular equation, due to all factors being positive numbers, but it is a habit. It also serves to emphasize that you are squaring the constant, which means you are working with a quadratic equation, which becomes important when graphing fission reactions. The reason I like to see it emphasized though is because when you are explaining to someone how much energy is contained in a very small amount of mass, pointing out that the speed of light is squared gives them a good impression of the huge numbers you can potentially come up with.

9. absolutely agree, waldo, that the "correct" notation is really essential once you wander out of the realm of simple multiplication of positive numbers, and the parentheses make it a bit clearer even with the simple stuff, but i can also understand not wanting to use the popular internet chat notation, which can just serve to muddy the waters for somebody just beginning to grasp relativity (and results in much less impressive potential numbers)

10.
though i do like your theory, your math to explain your point has a fundamental flaw... though the potential total energy of an atom that exists in its "particle bonds" (for lack of a better term in my lexicon, lol) is expressed very well by einstein's equation, that is all energy stored far ABOVE the quantum level... the relativity equation (and the rest of einstein's work) is based on the "tiny 3" (electrons, neutrons, protons), but none of the "subcompact models" of the quantum world...thus, your equation is just the "proof step" of einstein's, and CAN'T prove the "cause" of mass, unless of course carried and proven down to those minuscule scales, step by step, until one arrives at a point where there are no more particles to "break down"... i also have to point out that something has to impart said energy to "get the ball rolling" (cause the first level of particle to excite into existence) and begin the various energy transfers that would result in the formation of more complex, larger particles... to illustrate, let's suppose our atom is a spinning flywheel.. einstein's equation illustrates only the energy stored in the flywheel's motion, but has no way of quantifying the energy used to produce the flywheel when it was manufactured, and gives no clue as to how the Bessemer furnace was ignited for smelting the ore...

11. Thanks for your insights. I was under the same misapprehension as Arnie and I found your explanation very clear.

60. to infinity and beyond

61. I like that scientists are continually debunking current scientific theory. Therefore creating more bunk.

1. It's still better than Dogma.

62. Great series based on a great book.

63.
ok, so here's my theory; what im about to say is not backed by any science whatsoever: before the big bang, on a scale unimaginably large compared to ours, they were trying to identify the smallest particles of their universe with their unimaginably large hadron collider and created what is to us the big bang, only to find out there was a whole periodic table of new elemental smaller particles. we are living in the particles that appear for a trillionth of their second during their experiment, but on our scale it seems like trillions of years of expansion, during which we have all the time to develop ever bigger hadron colliders, to identify smaller and smaller particles... perhaps as we identify quarks and what makes up quarks, on a scale unimaginably small compared to ours, people are identifying the elusive smallest particle, at last! :{D

1. that sounds freakin sweet! and also would mean that all these events, getting infinitely smaller and smaller and smaller, would (or have) happened almost simultaneously to the most "unimaginably large", which really makes the buddhist belief that everything has already been achieved sound really appealing. monkeys with typewriters man! lovely thought

64. Amazing, am half way through Brian Greene's book "The Fabric of the Cosmos". This is a must watch!

1. I recently watched a movie. I think it was called "Watchmen". I imagine "Doctor Manhattan" would be your favourite ever super hero. (Mr Quantum)

2. You betcha!

65. I watched this under a different name a few days ago, great series.

1. In your alter ego within another multiverse?

66. Awesome series. Awesome, no need for more words.
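As a coda to the E = mc² notation exchange earlier in the thread: both the commutativity claim and the "huge numbers" point are easy to check numerically (an illustrative sketch, not from the thread):

```python
c = 299_792_458.0  # speed of light, m/s
m = 0.001          # one gram of mass, in kg

# Multiplication is commutative and associative, so these all agree exactly:
assert m * c * c == m * (c * c) == m * c**2

# The c-squared factor is what makes the energy enormous:
E = m * c**2
print(f"{E:.3e} J")  # roughly 9e13 joules from a single gram
```

Roughly 9 × 10¹³ J is on the order of twenty kilotons of TNT, which is the "huge numbers" impression the thread was reaching for.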
Convective Schrödinger Equation: Insights on the Potential Energy’s Role to Wave Particle Decay

Journal of Electromagnetic Analysis and Applications, Vol. 07, No. 09 (2015), Article ID: 59954, 7 pages

Altair S. de Assis1, Hector Torres-Silva2, Göran T. Marklund3

1Universidade Federal Fluminense (IM-GMA), Niterói, Brasil
2Universidad de Tarapacá, Arica, Chile
3Kungliga Tekniska Högskolan―KTH, Stockholm, Sweden

Email: altair@vm.uff.br, htorres@uta.cl, goran.marklund@ee.kth.se

Copyright © 2015 by authors and Scientific Research Publishing Inc.

Received 18 May 2015; accepted 22 September 2015; published 25 September 2015

In this paper, we couple the conventional Schrödinger equation of quantum mechanics, for the particles, with Maxwell's wave equation, in order to study the potential's role in the conversion of electromagnetic field energy to mass and vice versa. We show that the dissipation ("conductivity") factor and the particle's implicit proper frequency are both related to the potential energy. We have also derived a new expression for the Schrödinger equation, considering the potential energy in this equation not as an ad hoc term, but as a (Hermitian) operator that has the scalar potential energy as a natural eigenvalue.

Keywords: Schrödinger's Equation, Klein-Gordon's Equation, Maxwell's Wave Equation, Convection Displacement Current

1. Introduction

The conversion of electromagnetic energy to mass and vice versa is an old but still very interesting problem in physics. One of the problems that remain open is to explain more clearly the role of the potential energy in all those quantum wave-particle dissipation mechanisms addressed by quantum field theory (QFT) / quantum mechanics.
Here we study afresh this wave-particle dissipation problem using a simple alternative model, but considering the modification of the Schrödinger equation caused by the vacuum zero-point energy arising from vacuum electromagnetic fluctuations, as described by Bo Lehnert [1]. 2. Method In order to study the quantum wave and particle eigenmode excitation properties and the static potential's role as a dissipation mechanism, we couple the Schrödinger wave equation, which models particles as waves, to Maxwell's wave equation for the electromagnetic field, since they are compatible mathematical structures. Here, we can view this alternative model as a plasma "wave-wave" interaction approach, where, instead of having only waves, we have particle and field interaction, using a linear wave model. We start here with the non-relativistic Schrödinger equation, which is given by the expression below: All the relevant terms present in Equation (1) are already defined in any basic quantum mechanics textbook [2]. For the relativistic case, we have to use the Klein-Gordon equation given below: Expanding the squares: This equation is valid for zero-spin particles, such as mesons, and for photons with spin = 1. The term that can be obtained from Equation (2) should be considered the particle's intrinsic "eigenfrequency" (de Broglie frequency) via the dispersion relation: Now, we will derive a fundamental equation relating the energy of an impinging electromagnetic wave to the mass of the particles, which are created at a transition region defined below. Let us now split the interaction space into three relevant regions, described as follows [3] -[5]. Region 1: Field region (Maxwell's region). Region 2: Transition region (fuzzy wave-particle region where Maxwell's equation and Schrödinger's equation couple). Region 3: Particle region (Schrödinger's region).
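The displayed equations in this passage did not survive text extraction. For reference, the standard textbook forms the text names (the conventional equations, not necessarily the authors' exact notation) are:

```latex
% Non-relativistic Schrödinger equation (Equation (1) in the text):
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi
% Klein-Gordon equation for a free particle of mass m (Equation (2)):
\frac{1}{c^{2}}\,\frac{\partial^{2}\psi}{\partial t^{2}}
  - \nabla^{2}\psi + \left(\frac{mc}{\hbar}\right)^{2}\psi = 0
% Plane-wave dispersion relation implied by (2), with the rest-mass term
% \omega_{0} = mc^{2}/\hbar acting as the intrinsic "eigenfrequency":
\omega^{2} = c^{2}k^{2} + \left(\frac{mc^{2}}{\hbar}\right)^{2}
```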
For the coupling to be consistent, we model the field region using the Klein-Gordon equation for non-massive photons. Then, we obtain The basic equation at the transition region is obtained by coupling region 1 to region 2, which is done just using the "momentum conservation" condition: Equation (4) is the basic equation to model the region―boundary layer―where particles and fields are not distinct―called here the fuzzy region. We have assumed that at the transition region full energy and momentum conservation holds, so that the probability functions fully match. To solve Equation (4), we consider the ansatz: We can then easily find the two proper frequencies: Equation (5) can be rewritten as (5'). The first root of (5') just states that it is possible to convert mass into energy and vice versa. For zero external potential (V = 0), we get the "resonance" (photon-particle) conditions: For the plus sign (two-particle conversion), and for the minus sign ω = 0, no conversion; the trivial solution is obtained. For a non-zero external potential, we can get another "resonance" (photon-particle) condition: If we consider the reflection of a positively charged spin-0 particle from a barrier V, where this potential is zero to the left of the barrier and a constant V to the right, Equations (2) and (5) with m = 0 (massless boson) show that the particle to the right of the barrier V undergoes mode conversion in this case. However, a "dissipation" mechanism is necessary, which in our case is related to V (no V, no dissipation, no energy-mass conversion). We will now show that the factor plays exactly the role of a "dissipation term". It "permits" the vacuum to dissipate energy from photons, where the "natural" proper frequency is as stated above. Of course, the condition must be fulfilled in order to guarantee energy conservation.
This shows explicitly why the particle production must take place in the presence of a potential V: it makes it possible, somehow, for fields to be dissipated into the form of matter. This result also indicates that energy conservation at the transition region (mode conversion region) "requires" that the photon energy be at least twice the created particle's rest energy―a well known old result. To consider the spin dynamics it is necessary to use Dirac's relativistic quantum mechanics, but here we just want to see how V plays a role in the field-mass conversion. Now, for the sake of clarification, we will go further into the physical interpretation of the parameter and its role in quantum mechanics. In order to gain more insight into this dissipation mechanism, we go back once more to Equation (2); assuming, we have the dispersion relation: We can rewrite this equation as, , and. Comparing these results now with the classical RLC circuit dispersion relation, where L is the circuit inductance, C is the capacitance, and R the resistance, the particle's "natural" eigenfrequency can be written in this case as: Therefore, for a pumped "circuit", the resonance condition is: , or Here ω is the pumping (photon) energy/frequency. The equivalent dissipation factor R/L for our case is. Concluding, we can consider the "mode conversion" regions as a resistance-impedance-capacitance-source circuit; the resonant region is driven by the impinging photons. Also, due to Equation (2), we have. (2') And finally, this is the relation used for the total energy to model the motion of particles in a static external potential. So we have the two possible particle energy modes; V can be positive or negative. It is possible to have a zero-energy mode taking the plus sign, and this shows how mass can become potential energy and vice versa.
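The RLC dispersion relation invoked in the circuit analogy above was also lost in extraction. The standard series-RLC form (my reconstruction of the textbook relation, not the authors' rendering) is:

```latex
% Series RLC circuit, charge q(t) \propto e^{-i\omega t}:
%   L\ddot{q} + R\dot{q} + q/C = 0
% gives the characteristic (dispersion) equation
\omega^{2} + i\,\frac{R}{L}\,\omega - \frac{1}{LC} = 0,
\qquad \omega_{0} = \frac{1}{\sqrt{LC}}
% with R/L playing the role of the damping (dissipation) factor.
```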
Adding the momentum contribution into (2'), one gets the corresponding relation. For nonrelativistic particle conversion the usual relation follows, and for V = 0 we have the known conventional relation. L. Brillouin has also addressed the problem of relativity and potential energy using a different approach [6]. He considered an implicit mass content in the external potential energy. Note that at high temperatures/energies T it is possible to observe electron-positron and neutrino-antineutrino pairs (i.e., T > 1 MeV) [7] [8], and so the eigenmode model presented here makes full sense. The potential V may be just a few eV, but similar results can be obtained with a different formalism [9]. 3. Results As a consequence of the above discussion, we will modify the original Schrödinger equation by adding a constructed potential operator, in such a way that the equation acquires a convective term; we will call this novel (modified) Schrödinger equation the Convective Schrödinger Equation. By Convective Schrödinger Equation we here just mean the conventional Schrödinger equation into which we introduce a term related to the vacuum zero-point energy (relating it to the potential) [1]. In doing so, we get a modified Schrödinger equation where the potential energy is no longer introduced in an ad hoc manner, but is instead treated as a Hermitian operator, like the total energy and the kinetic energy, and where the eigenvalues related to it are exactly the scalar potential. Therefore, the Schrödinger equation is now left with only one ad hoc term―the particle mass―which might in turn be treated as an operator as well, say by using Higgs boson theory, for instance. In order to move further with this study, it is important to note that both mass and potential, even though they are observable quantities, are introduced ad hoc into Schrödinger's equation and not postulated as Hermitian operators in the same way that the total energy and the kinetic energy are.
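The "usual relations" referred to at the start of this passage appear to be the standard energy-momentum relations; for reference (my reading of the elided equations, not a verbatim reconstruction):

```latex
% Relativistic energy-momentum relation:
E^{2} = p^{2}c^{2} + m^{2}c^{4}
% Non-relativistic expansion, with an external potential V added:
E \approx mc^{2} + \frac{p^{2}}{2m} + V
% For V = 0 this reduces to the conventional kinetic-energy relation
% E - mc^{2} \approx p^{2}/2m.
```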
That is, there are no formal Hermitian operators associated with those physical quantities which would allow one to obtain real eigenvalues for both, such as the ones we have for energy and momentum. This more realistic model would give access to the creation of the particle's mass content from a more fundamental entity (the complete Schrödinger function), and not just to a probability density for finding it, already massive, somewhere in a bound or free state. For the potential there is some discussion of its operator nature, but for mass this is still an open problem, in spite of Higgs's explanation of the origin of particle mass via the Higgs boson. We leave the mass-operator problem for future work and here address afresh the problem of the operator nature of the potential, trying to find an expression that could be useful to model it. We saw above that the scalar potential can be understood as a dissipation-"inducing" factor for wave-particle interaction such as pair production, but now we move further to understand the operator nature of this physical quantity. Therefore, the question to be answered now is: Can the potential in Schrödinger's equation be modeled as a Hermitian operator? If so, what form would it have? Can we get a measurable scalar potential as an eigenvalue of this operator? Does this question have any physical meaning whatsoever? There are people who consider the potential to be an operator simply because of the way it modifies the original wave function. But is that so? There are also many other discussions about the operator nature of the potential, but we stop here, knowing that this is indeed a very interesting though controversial subject that deserves further, deeper discussion. In order to proceed with this issue, let us consider Maxwell's wave equation for a pure field, but with dissipation, where F is the wave field and s is the material conductivity.
By analogy, using the Klein-Gordon equation for massless photons, we have, Examining Equations (6) and (2'), we can identify the wave-function dissipation factor for a photon in vacuum, but treated as a particle. This means that a particle's wave function can grow in the presence of electrostatic potentials and photons. This phenomenon is related to the ZPE―Zero Point Energy [2] -[4]. Therefore, the term plays the role of vacuum "energy-to-mass dissipation" for the photon, where the fine-structure constant appears. If V = 0, there is no dissipation. It is interesting to mention that if we consider the potential V as an observable, not included ad hoc in the Schrödinger equation as in conventional quantum mechanics, a "potential operator" could be understood as: The important point now is to know the physical meaning of the operator. We consider at this point the "convection" displacement current (using a revised electromagnetic theory), which is valid for a point charged particle moving with velocity v. The "operator conductivity" can be written as [3] [4]: Therefore, it might well be that the operator potential could be written as: The second term in (7) is related to the vacuum longitudinal eigenmodes. These do not appear (exist) in the conventional Maxwell electromagnetic theory. This factor is considered a convection term related to a convection displacement current; as already mentioned, it describes the contribution to the displacement current of a single point-shaped charged particle moving with velocity v [3] -[5]. We now apply this operator (7) to the particle's standing-wave function. The above solution, obtained by satisfying the boundary conditions, describes standing waves of the time-dependent equation, which are the states with definite energy; these standing waves are called energy ("frequency") eigenstates. We then have, of course, the case of a particle (an electron, for instance) in the presence of a potential V.
Considering here the wave function of a free particle, but under the influence of a potential V, we have: The potential "double eigenvalue" is obtained as follows: E is the total energy eigenvalue via the time operator, and the kinetic energy eigenvalue is obtained via the convection term. So V is a "bilinear"-type operator. Thus, from (7'), we can write a new type of Schrödinger equation―the "convection" Schrödinger equation―which contains in its structure the energy equipartition (factor 1/2) for particle and anti-particle: Now we get a real, formal "full" energy-operator Schrödinger equation, self-consistently: E = T + V and V = E − T, the eigenvalues of the corresponding operators. 4. Conclusions The results presented in this paper show that the potential, combined with the magnetic permeability/electric permittivity (implicit vacuum entities) and Planck's constant, plays the role of a dissipation factor for the conversion of electromagnetic fields into particles and vice versa. Also, considering the contribution of the time convective derivative term (convective displacement current) [3] [4] for a moving particle, we have derived a novel modified Schrödinger equation, which we have called here the "Convective" Schrödinger Equation. This equation takes into consideration, in a formal way, the potential as an operator related to the observable potential energy, rather than adding it in an ad hoc manner as done in the usual Schrödinger equation. In the past, the study of relativistic particles was in the exclusive domain of high-energy and particle physics. In graphene, nonetheless, the linear electronic band dispersion near the Dirac points gives rise to charge carriers (electrons or holes) that propagate as if they were massless fermions, with speeds of the order of 10^6 m/s rather than the speed of light.
Hence, charge carriers in this structure must be described by the massless Dirac equation, or by the Schrödinger equation when the compaction of a graphene device generates a mass proportional to the conductivity. The physics of relativistic electrons is thus now experimentally accessible in graphene-based solid-state devices, whose behavior differs drastically from that of similar devices fabricated with usual semiconductors. Consequently, new unexpected phenomena have been observed, while other phenomena that were well understood in common semiconductors, such as the quantum Hall effect and weak localization, exhibited surprising behavior in graphene. Thus, graphene devices enabled the study of relativistic dynamics in controllable nano-electronic circuits (relativistic electrons on a chip). The Convective Schrödinger Equation (convection displacement current) allows a basic understanding of electronic processes in graphene devices, which have conductivity in the presence of electromagnetic fields (see Equations (10)-(12) of ref. [8]). It also allowed for the observation of some subtle effects, previously accessible only to high-energy physics, such as Klein tunneling and vacuum breakdown [10] -[12]. A further and deeper discussion of novel insights into field, matter (particles and anti-particles) and vacuum dynamics can be found in references [1] [3] -[5]. The authors would like to thank Bo Lehnert for general discussions and suggestions considered in this work. Cite this paper: Altair S. de Assis, Hector Torres-Silva, Göran T. Marklund (2015) Convective Schrödinger Equation: Insights on the Potential Energy's Role to Wave Particle Decay. Journal of Electromagnetic Analysis and Applications, 07, 225-232. doi: 10.4236/jemaa.2015.79024
1. Lehnert, B. (2014) Some Consequences of Zero Point Energy. Journal of Electromagnetic Analysis and Applications, 6, 319-327; Lehnert, B. (2015) Zero Point Energy Effects on Quantum Electrodynamics. Journal of Modern Physics, 6, 448-452.
2.
Cohen-Tannoudji, C., Diu, B. and Laloe, F. (1977) Quantum Mechanics, Volumes 1 and 2. Hermann and John Wiley & Sons, New York.
3. Lehnert, B. (1990) Complementary Aspects on Matter-Antimatter Boundary Layers. TRITA-EPP-90-04, Royal Institute of Technology, Stockholm, Sweden, and references therein.
4. Lehnert, B. (2013) Revised Quantum Electrodynamics. NOVA Publishers, New York.
5. Lehnert, B. (2008) A Revised Electromagnetic Theory with Fundamental Applications. Swedish Physics Archive.
6. Brillouin, L. (1965) The Actual Mass of Potential Energy, a Correction to Classical Relativity. Proceedings of the National Academy of Sciences, 53, 475-482.
7. Torres-Silva, H. (2013) Physical Interpretation of the Dirac Neutrino with Electromagnetic Mass. Journal of Electromagnetic Analysis and Applications, 5, 294-301.
8. Torres-Silva, H. and Cabezas, D.T. (2012) Chiral Current in a Graphene Battery. Journal of Electromagnetic Analysis and Applications, 4, 426-431.
9. Arbab, A. (2011) A New Wave Equation of the Electron. Journal of Modern Physics, 2, 1012-1016.
10. Novoselov, K.S., Geim, A.K., Morozov, S.V., Jiang, D., Zhang, Y., Dubonos, S.V., Grigorieva, I.V. and Firsov, A.A. (2004) Electric Field Effect in Atomically Thin Carbon Films. Science, 306, 666.
11. Zhang, Y., Small, J.P., Amori, M.E.S. and Kim, P. (2005) Electric Field Modulation of Galvanomagnetic Properties of Mesoscopic Graphite. Physical Review Letters, 94, Article ID: 176803.
12. Castro, A.H., Guinea, F., Peres, N.M.R., Novoselov, K.S. and Geim, A.K. (2009) The Electronic Properties of Graphene. Reviews of Modern Physics, 81, 109.
Disclaimer: I am not a chemist by any means, and I only have knowledge limited to what I learned in my university's Chemistry III course - a basic understanding of everything up to valence electron orbitals. Why is there no set of rules to follow which can predict the product of chemical reactions? To me, it seems that every other STEM field has models to predict results (physics, thermodynamics, fluid mechanics, probability, etc.) but chemistry is the outlier. Refer to this previous question: How can I predict if a reaction will occur between any two (or more) substances? The answers given state that empirical tests are the best way we've got to predict reactions, because we can discern patterns or "families" of reactions to predict outcomes. Are we only limited to guessing at "family" reactions? In other words, why am I limited to knowing my reactants and products, then figuring out the process? Can I know the reactants, hypothesize the process, and predict the product? If the answer is "It's complicated", I would enjoy a push in the right direction - e.g., whether valence orbitals actually do help us predict, or any laws of energy conservation, etc.; please give me something which I can go research.
• 3 $\begingroup$ Your best bet would be to look at the field of theoretical chemistry which has various branches of study as highlighted in the above link. Basically, if you are given a set of reactants and products, then you can have a huge number of combinations of atoms simply by looking at all possible arrangements of atoms. But, you can significantly narrow down your options by looking at lower potential paths (which are made from ab initio calculations), simulating movements of the nuclei (molecular dynamics), theoretical kinetics and so on. $\endgroup$ – Yusuf Hasan Oct 19 '20 at 16:02
• 13 $\begingroup$ BTW, just a digression: You said chemistry is the "outlier" in STEM.
I have a gut feeling that maybe biology (which is also a part of STEM) may not be "predictive" enough for you as well, at least when you start out with the subject. Biology and chemistry together form an integral part of STEM, so it may be worth pondering over their seemingly "empirical" nature ;) $\endgroup$ – Yusuf Hasan Oct 19 '20 at 16:11
• 10 $\begingroup$ Chemistry is neither a set of easy rules nor a set of empirical knowledge to memorize. Or, it is both, intertwined in a wild mixture of both extremes. Most non-chemists are too impatient to recognize the many patterns that are hybrids of rules and evidence. $\endgroup$ – Poutnik Oct 19 '20 at 16:17
• 2 $\begingroup$ doi.org/10.1016/j.drudis.2018.02.014 $\endgroup$ – Yusuf Hasan Oct 19 '20 at 16:24
• 5 $\begingroup$ It's not unpredictable. It is just complex... sometimes to a point where it is easier to just try (many times) and see instead of calculating something. $\endgroup$ – fraxinus Oct 20 '20 at 7:30
10 Answers 10
First of all, I'd ask: what do you admit as "chemistry"? You mentioned thermodynamics as being a field where you have "models to predict results". But thermodynamics is extremely important in chemistry; it wouldn't be right if we classified it as being solely physics. There is a large amount of chemistry that can be predicted very well from first principles, especially using quantum mechanics. As of the time of writing, I work in spectroscopy, which is a field that is pretty well described by QM. Although there is a certain degree of overlap with physics, we again can't dismiss these as not being chemistry. But, I guess, you are probably asking about chemical reactivity. There are several different answers to this depending on what angle you want to approach it from. All of these rely on the fact that the fundamental theory that underlies the behaviour of atoms and molecules is quantum mechanics, i.e.
the Schrödinger equation.* Addendum: please also look at the other answers, as each of them bring up different excellent points and perspectives. (1) It's too difficult to do QM predictions on a large scale Now, the Schrödinger equation cannot be solved on real-life scales.† Recall that Avogadro's number, which relates molecular scales to real-life scales, is ~$10^{23}$. If you have a beaker full of molecules, it's quite literally impossible to quantum mechanically simulate all of them, as well as all the possible things that they could do. "Large"-ish systems (still nowhere near real-life scales, mind you — let's say ~$10^3$ to $10^5$) can be simulated using approximate laws, such as classical mechanics. But then you lose out on the quantum mechanical behaviour. So, fundamentally, it is not possible to predict chemistry from first principles simply because of the scale that would be needed. (2) Small-scale QM predictions are not accurate enough to be trusted on their own That is not entirely true: we are getting better and better at simulating things, and so often there's a reasonable chance that if you simulate a tiny bunch of molecules, their behaviour accurately matches real-life molecules. However, we are not at the stage where people would take this for granted. Therefore, the ultimate test of whether a prediction is correct or wrong is to do the experiment in the lab. If the computation matches experiment, great: if not, then the computation is wrong. (Obviously, in this hypothetical and idealised discussion, we exclude unimportant considerations such as "the experimentalist messed up the reaction"). In a way, that means that you "can't predict chemistry": even if you could, it "doesn't count", because you'd have to then verify it by doing it in the lab. (3) Whatever predictions we can make are too specific There's another problem that is a bit more philosophical, but perhaps the most important. 
Let's say that we design a superquantum computer which allowed you to QM-simulate a gigantic bunch of molecules to predict how they would react. This simulation would give you an equally gigantic bunch of numbers: positions, velocities, orbital energies, etc. How would you distil all of this into a "principle" that is intuitive to a human reader, but at the same time doesn't compromise on any of the theoretical purity? In fact, this is already pretty tough or even impossible for the things that we can simulate. There are plenty of papers out there that do QM calculations on very specific reactions, and they can tell you that so-and-so reacts with so-and-so because of this transition state and that orbital. But these are highly specialised analyses: they don't necessarily work for any of the billions of different molecules that may exist. Now, the best you can do is to find a bunch of trends that work for a bunch of related molecules. For example, you could study a bunch of ketones and a bunch of Grignards, and you might realise a pattern in that they are pretty likely to form alcohols. You could even come up with an explanation in terms of the frontier orbitals: the C=O π* and the Grignard C–Mg σ. But what we gain in simplicity, we lose in generality. That means that your heuristic cannot cover all of chemistry. What are we left with? A bunch of assorted rules for different use cases. And that's exactly what chemistry is. It just so happens that many of these things were discovered empirically before we could simulate them. As we find new theoretical tools, and as we expand our use of the tools we have, we continually find better and more solid explanations for these empirical observations. Let me be clear: it is not true that chemistry is solely based on empirical data. There are plenty of well-founded theories (usually rooted in QM) that are capable of explaining a wide range of chemical reactivity: the Woodward–Hoffmann rules, for example. 
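Returning to point (1), the scale problem is easy to quantify with a back-of-envelope calculation. This is my own hypothetical illustration (the byte count and system sizes are assumed numbers, not from the answer): even storing a single quantum state vector for a modest number of interacting two-level systems exhausts any conceivable memory, long before Avogadro-scale systems.

```python
# Hypothetical back-of-envelope numbers: memory needed to store one quantum
# state vector for N coupled two-level systems. The Hilbert-space dimension
# grows as 2**N, so exact simulation fails far below Avogadro scale (~10**23).
BYTES_PER_AMPLITUDE = 16  # one complex128 amplitude

def state_vector_bytes(n_systems):
    """Bytes required for one state vector of n two-level systems."""
    return (2 ** n_systems) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50):
    print(f"{n} two-level systems: {state_vector_bytes(n) / 2**30:,.6f} GiB")
# 30 systems already need 16 GiB; 50 need over 16 million GiB.
```

The exponential in the loop is the whole story: adding one particle doubles the memory, which is why "large-ish" simulations fall back on approximate classical laws.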
In fact, pretty much everything that you would learn in a chemistry degree can already be explained by some sort of theory, and indeed you would be taught these in a degree. But, there is no (human-understandable) master principle in the same way that Newton's laws exist for classical mechanics, or Maxwell's equations for electromagnetism. The master principle is the Schrödinger equation, and in theory, all chemical reactivity stems from it. But due to the various issues discussed above, it cannot be used in any realistic sense to "predict" all of chemistry. * Technically, this should be its relativistic cousins, such as the Dirac equation. But, let's keep it simple for now. † In theory it cannot be solved for anything harder than a hydrogen atom, but in the last few decades or so we have made a lot of progress in finding approximate solutions to it, and that is what "solving" it refers to in this text. • 1 $\begingroup$ I want to make a trivial edit in order to fix my vote, but that seems impossible as this answer is perfect; you even use the * first and the † second! $\endgroup$ – uhoh Oct 20 '20 at 6:25 • 2 $\begingroup$ @uhoh (Forgive me for bringing this up again, as I have the "power" to see deleted comments, and often curiosity gets the better of me.) I think I understand what you mean, and I actually do agree, as sometimes I worry about this sort of thing creeping into my answers too. I don't want to end up blaming the OP for not understanding stuff, for example, and although I don't actually try to do that, I feel like sometimes my writing style can come off that way. I'll keep bearing it in mind and maybe try using "we" more instead of "you". $\endgroup$ – orthocresol Oct 20 '20 at 16:12 • 1 $\begingroup$ No problem, I left the second comment to address/explain the first, and knew they were still visible to beings in higher dimensions. 
The sequence of events was skimming/misreading, comment #1 checking other answers, scrolling back up and seeing your familiar user name, thinking "wait, they're nice, huh?" scrolling up to the top of your post, reading more slowly, recognizing that it was different than what I'd thought, thinking "oh crap!", deleting my first comment knowing it was still visible to you folks, so writing the second comment, then realizing it wouldn't make sense to others... $\endgroup$ – uhoh Oct 20 '20 at 19:09
• 1 $\begingroup$ so deleted it too, still knowing it was visible, then changing my vote to up and moving on. Then later I added the comment elsewhere on this page about "unstable manifolds", then noticed my vote change didn't "take", facepalming -- of course it wouldn't! -- then fretting over how to do a trivial edit to a moderator's "perfect" post, pulling it off, then left the final comment above. Anyway, it's too late to make a long story short, but I've always found your moderation exemplary and welcoming. $\endgroup$ – uhoh Oct 20 '20 at 19:12
• $\begingroup$ ...but I digress. Thanks for the feedback! $\endgroup$ – uhoh Oct 21 '20 at 3:32
Parts of chemistry have predictability but the combinatorial complexity of what is possible leaves a large amount of space for things that don't follow the rules
Some of the ways chemistry differs from physics in unpredictability are an illusion. Take gravity, for example. There is a strong rule–sometimes described as a law–that all objects near the surface of the earth fall with the same acceleration. That is a cast-iron rule, isn't it? Apparently not. Flat pieces of paper and feathers don't fall as fast as cannon balls, and the exact way they fall is very unpredictable. "But we know why that is, don't we?" Yes, a bit, it is air resistance.
But that doesn't enhance the predictability at all, as any useful prediction would have to solve the equations for fluid flow, and there is a $1m prize for even proving that those basic equations always have a solution. Arguably, physics is only predictable in school, where only idealised versions of real problems are considered. And it is unfair to say that chemistry is completely unpredictable. A good deal of physical chemistry is quite like physics in its laws and predictions. I suspect that you are talking about general organic and inorganic chemistry, where there are many predictable properties of compounds but a dictionary full of exceptions to even simple rules. Or synthetic chemistry, where reactions sometimes work but often don't. But there are plenty of chemical reactions that work fairly reliably (Grignard reactions make C-C bonds fairly reliably with many compounds; Diels-Alder reactions create two at once with predictable stereochemistry). But this predictability is limited by a fundamental problem: the unfathomably large variety of possible compounds that could be made. Take a ridiculously small subset of possible compounds: all those that can be made just from carbon and hydrogen using only single bonds and disallowing any rings. For simple compounds where the 3D nature of the compounds does not interfere by constraining their existence in real space (atoms have finite volumes in 3D space and can't overlap in real structures), these are mathematically equivalent to simple trees (or the carbon skeleton is: we assume the hydrogens fill out the remaining bonds so each carbon ends up with 4). At the point where 3D space becomes a constraint on which compounds can exist, there are already about 25k distinct possibilities, and by the time you get to 25 carbons there are more possibilities than all the chemicals that have ever been characterised in the history of chemistry.
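The tree-counting claim above can be checked by brute force for small carbon counts. A sketch of my own (function names and the method are mine, not the answer author's): enumerate all labelled trees via Prüfer sequences, keep those whose maximum degree is 4 (carbon's valence), and deduplicate up to isomorphism using a canonical form rooted at the tree's centre(s).

```python
import heapq
from itertools import product

def tree_from_pruefer(seq, n):
    """Build adjacency lists of the labelled tree encoded by a Pruefer sequence."""
    deg = [1] * n
    for v in seq:
        deg[v] += 1
    adj = {v: [] for v in range(n)}
    leaves = [v for v in range(n) if deg[v] == 1]
    heapq.heapify(leaves)
    for v in seq:
        leaf = heapq.heappop(leaves)
        adj[leaf].append(v)
        adj[v].append(leaf)
        deg[v] -= 1
        if deg[v] == 1:
            heapq.heappush(leaves, v)
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    adj[u].append(w)
    adj[w].append(u)
    return adj

def centres(adj, n):
    """The one or two centre vertices of a free tree (repeated leaf pruning)."""
    deg = {v: len(adj[v]) for v in adj}
    layer = [v for v in adj if deg[v] <= 1]
    remaining = n
    while remaining > 2:
        remaining -= len(layer)
        nxt = []
        for v in layer:
            deg[v] = 0
            for u in adj[v]:
                if deg[u] > 1:
                    deg[u] -= 1
                    if deg[u] == 1:
                        nxt.append(u)
        layer = nxt
    return layer

def canonical(adj, root, parent=None):
    """AHU canonical form: a nested tuple invariant under isomorphism."""
    return tuple(sorted(canonical(adj, c, root) for c in adj[root] if c != parent))

def count_alkane_skeletons(n):
    """Free trees on n vertices with maximum degree 4, counted up to
    isomorphism: the carbon skeletons of the acyclic alkanes C_nH_{2n+2}."""
    if n <= 2:
        return 1
    seen = set()
    for seq in product(range(n), repeat=n - 2):
        # degree of v = (occurrences in seq) + 1; carbon's valence caps it at 4
        if any(seq.count(v) > 3 for v in set(seq)):
            continue
        adj = tree_from_pruefer(seq, n)
        seen.add(min(canonical(adj, c) for c in centres(adj, n)))
    return len(seen)

print([count_alkane_skeletons(n) for n in range(1, 8)])
# -> [1, 1, 1, 2, 3, 5, 9]  (methane .. heptane isomer counts)
```

The counts match the known alkane isomer series, and the brute force blows up almost immediately — already at 25 carbons direct enumeration is hopeless, which is the answer's point about combinatorial complexity.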
And this is for very constrained rules for making the compounds, using only two elements and denying a huge variety of interesting structures. The real issue making chemistry apparently complex is that unfathomably large combinatorial variety of possible chemicals that might exist. In such a large space there is very little possibility that simple rules will always work. And this complexity is just about the possible structures. There are a very large number of reactions that get you from one structure to another, and those add another mind-bogglingly large layer of complexity. And this, I think, is the reason why many find chemistry so hard to generalise about. There are simply too many possible things that can exist and even more possible ways to make them for any simple set of rules to always work. And I thought physicists had a problem not being able to fully solve the Navier-Stokes equations.
• 8 $\begingroup$ "physics is only predictable in school where only idealized versions of real problems are considered." - Reminds me of a joke I created back when I was taking physics, "How many physicists does it take to change a light bulb? None, because neglecting the force of friction, it can't be done!" $\endgroup$ – Glen Yates Oct 20 '20 at 19:25
Let me contribute two more reasons which make chemistry hard to analyse from a purely theoretical standpoint. The first one is that, viewed very abstractly, chemistry essentially relies on the study of geometry in very high-dimensional spaces, and even from a purely mathematical point of view this can be extremely difficult. An important part of chemistry is bond breaking and bond formation, which is behind most reactions. This turns out to require knowledge of the vibrational modes of a molecule. For a general molecule with $\mathrm{N}$ atoms, there are $\mathrm{3N-6}$ vibrational modes. Each of these vibrational modes is a "spatial dimension" in what is called phase space.
In principle, if we knew the potential energy at every point of the phase space for a molecule, we would know virtually everything there is to know about how it might react. For an idea of what this looks like, see the figure below:

Source: https://www.chemicalreactions.io/fundamental_models/fundamental_models-jekyll.html

Unfortunately, there is simply too much space to explore in very high-dimensional objects, so it's very difficult to get a picture of it as a whole. Also disappointingly, almost all of this space is "tucked away in corners", so it is also very difficult to get a reliable picture of the whole space by looking at small bits of it at a time. This has been called "the curse of dimensionality". Something as simple as benzene ($\ce{C6H6}$) has a $\mathrm{3 \times 12-6 = 30}$-dimensional vibrational phase space (though this particular phase space is highly symmetric, as benzene itself has a high symmetry).

Now consider a general reaction which requires two reagents, and forms one product: Each of the three molecules has its own phase space, and combining them all together means adding up the dimensions of each. In this view, a chemical reaction is nothing but a particular set of trajectories of points (for each atom) in the combined phase space of all molecules, such that the potential energy of the system is locally minimised throughout the trajectory. As such, one would easily find themselves trying to describe trajectories in objects with over 100 dimensions. Few people talk about chemistry at this level of abstraction because it is so complex, but it is a conceptual hurdle in describing chemistry "exactly". Thankfully, there is research into it, such as the CHAMPS collaboration.
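The mode-counting rule is simple enough to put into code. As a small sketch (the function name and the atom counts for the two-reagent example are my own, for illustration only), it reproduces the benzene count and shows how quickly the combined dimensionality of a reaction grows:

```python
def vibrational_modes(n_atoms: int, linear: bool = False) -> int:
    """3N-6 vibrational modes for a nonlinear molecule (3N-5 if linear):
    3 coordinates per atom, minus overall translations and rotations."""
    return 3 * n_atoms - (5 if linear else 6)

# Benzene, C6H6: 12 atoms -> 30 vibrational modes
print(vibrational_modes(12))

# A hypothetical A + B -> C reaction: the combined phase space adds up.
reagent_a_atoms, reagent_b_atoms = 20, 15  # assumed sizes, for illustration
print(vibrational_modes(reagent_a_atoms) + vibrational_modes(reagent_b_atoms))
```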
The second complication is that, while many important reactions are direct reactions like the one shown above, in the general case what really exists is a network of reactions, potentially forming a complicated, highly interconnected graph with dozens or even hundreds of intermediates and possible products (graph vertices) and as many reaction arrows connecting them (graph edges). The field of chemical reaction network theory uses graph theory to study these networks. It appears that some of the problems they grapple with are $\mathrm{NP}$-hard.

Source: https://www.mis.mpg.de/stadler/research/chemical-reaction-networks.html

Of course, this second issue compounds on the first! So given these two dizzyingly complex problems, even from a purely mathematical standpoint, how can we do chemistry at all? Well, with enough experimental parametrization (e.g. equilibrium constants, rate constants, enthalpies and entropies of formation, etc.) and approximations, you can drastically simplify the description of a system. Fortunately, even after throwing away so much detailed information, we can still make decent predictions with what is left. We really should count ourselves lucky!

• 2 $\begingroup$ Interesting, I have not seen such an abstract description of chemistry earlier! As an undergrad, how can I start working on phase space in chemistry and chemical reaction network theory? References, resources or project suggestions would be welcome :) $\endgroup$ – Yusuf Hasan Oct 20 '20 at 2:54

• $\begingroup$ @YusufHasan I honestly have no clue. These are not my topics of research by a million miles, I just happen to vaguely grasp enough mathematics to be able to say they exist! I've probably already given the most assistance I could, in the form of some keywords and the links to research groups above - look at their publication records, collaborators, etc. It'd be great if anyone else can give you some input.
$\endgroup$ – Nicolau Saker Neto Oct 20 '20 at 3:04

• 2 $\begingroup$ This is a really insightful and absolutely fantastic answer and extremely helpful to those outside of or not immersed in chemistry. I see stable manifolds in satellite trajectories and in papers about them in Chaos but never stopped to think of how general the concept was. Instead of doing what I was supposed to today I will go hide somewhere and (try to) read all of your linked sources :-) $\endgroup$ – uhoh Oct 20 '20 at 3:44

• 1 $\begingroup$ @uhoh There are a number of articles studying chemical oscillators (which can display mathematical chaos) from a reaction network theory perspective. That alone is probably quite a rabbit hole to go down. You have my blessing! $\endgroup$ – Nicolau Saker Neto Oct 20 '20 at 3:57

• $\begingroup$ @YusufHasan If you're interested in such a topic for its own sake, then go ahead. If you're interested in it because you think the additional theory will give you a leg up on your other chemistry studies, I'd advise not even trying. I put in quite a lot of time in my early Chemistry education to such matters, only to learn that it's not where the actual difficulties lie. $\endgroup$ – Ingolifs Oct 22 '20 at 3:34

Predictability is essentially determined by the level of detail you need in your model to make a reliable prediction. Models that require little detail to capture the phenomenon of interest typically can give reliable predictions, while those requiring enormous detail typically cannot. This is true for all the sciences: biology, chemistry, physics, and geology. Thus, in this fundamental way, they all have the same predictability. I.e., there is no fundamental difference in the nature of prediction among these fields. Allow me to illustrate:

1. Bending of light from a distant star by the sun's gravitational field. Predictable.
Requires very little detail to model the phenomenon accurately: just the mass of the sun, and the assumption that the distant star is a point particle at a distance much greater than the earth-sun distance.

2. The temperature of the sun's corona. Not yet predictable. This problem requires far more detail to model correctly. The system is so complex that we don't have a model to predict the temperature of the sun's corona, and thus can't explain why the corona is far hotter than the surface of the sun.

1. Osmotic pressure of a highly dilute solution. Predictable. Requires very little detail to model the phenomenon accurately: just the concentration of the solute.

2. Folding of long (1000's of nucleotides) RNAs. Not yet predictable, at least at the level of being able to predict the ensemble-average structure at the level of individual base pairs.

1. Possible blood types (O, A, B, AB) of offspring, and their odds. Predictable. Requires only the blood type of each parent.

2. Size at which cells divide. Not yet predictable. A model capable of predicting this would require enormous detail about the operation of cells, and cells are so complex that we don't have a model to predict the size at which they will divide. Thus we can't yet explain why cells divide at a certain size.

Granted, there is a practical difference among the fields, in that physics has more phenomena that can be predicted with simple models than chemistry, and chemistry more than biology, because as one goes from physics → chemistry → biology, one is typically studying successively higher levels of organization of matter. But I regard that as a practical difference rather than a fundamental one.

• 2 $\begingroup$ Reminds me of Rumsfeld's famous "known unknowns and unknown unknowns". Another classic example: the weather. In some cases it is not detail that matters (nonlinear dynamics), in fact accurate simple models can fail to provide useful predictions even when all parameters are known exactly.
$\endgroup$ – Buck Thorn Oct 20 '20 at 7:11

This is only partially true, but there are areas of all of those fields where predictive power is difficult in practice due to the complexity of the system and convolution of features. In simplified cases, yes, we can do quite well, but once the systems grow in size and complexity, we do less well.

Physics is a good example of this. The laws of mechanics are quite well-understood. But how well can you handle a chaotic 3-body system? There may be features that are predictable, but probably not the entire system. With thermodynamics, how well do we handle mesoscopic systems? Computationally, they can be quite difficult. In thermodynamics, we're able to deal with this complexity by discarding features that we don't care about to focus on bulk properties that rapidly converge in ever-larger systems, but we can't handle the entire system.

Fluid mechanics. OK. We have Navier-Stokes. Have you tried solving Navier-Stokes? Entire volumes have been written about how to deal with Navier-Stokes, and we still don't have great understanding of all of its features.

Probability. This is trickier to talk about, but I think the difficulty and complexity is building an underlying probabilistic model. When you build your machine learning model, there are generally hyper-parameters to set. What makes a good hyper-parameter and how do you pick one? Just the one that works?

The thing with chemistry is that real-life examples are already incredibly complex. Pick any reaction you want. Liquids or solids? You're already dealing with bulk properties, phase interfaces, and boundary effects. Or solutions and solution effects. Gases? Once you have non-trivial reactions, how many atoms are there? How many electrons? Now, consider the fact that your typical organic reaction involves compounds with tens or hundreds of atoms in solution. There may be multiple modes of reactivity, some productive, some not.
And in the laboratory, reactions can be quite sensitive to any number of reaction conditions, which a generalized reactivity model does not begin to account for. But in chemistry, as with the other disciplines, we aim to find simplifications that allow us to deal with complexity. We've been able to find patterns of reactivity, which are somewhat general but don't capture the full complexity of the system.

There are some great answers to this question already, but I'd like to provide a more practical boots-on-the-ground answer from my own perspective as an organic chemistry PhD who did computational chemistry on the side.

Most fields at their cutting edge are unpredictable

It has been my observation that when you come up against the frontier of what is possible, progress comes by and large only through a long grinding process of Trial and Error. When a breakthrough in understanding is made and the process all of a sudden becomes easy, rapid progress is made until things become hard again. This is true of all sorts of complex projects. The theory helps you along so far, but at some point you have to go off the beaten track and make your own way.

Chemistry gets mathematically hard, fast

Someone who's completing their undergraduate in a STEM field is likely to have a bit of a skewed impression of the first fact, because they will have already reached that point of unpredictability with chemistry but not with physics. It takes a long time to learn the differential equations associated with things like mechanics, stress-strain, heat transfer, fluid dynamics, electromagnetism and quantum fields. These topics often have solutions for idealised situations that are amenable to being written on paper in closed form. The (comparative) simplicity of these solutions along with the difficulty of learning the necessary maths along the way may give the undergrad physicist the mistaken idea that this is what all physics is like.
Hard-but-tractable differential equations that yield elegant solutions. In reality, once you get past the idealised conditions, physics becomes much more about computer simulation and experimentation.

In contrast, the equations that describe what happens in the flask (kinetics and thermodynamics) go from trivial to mind-bendingly difficult with only a little bit of added complexity. Other answerers have gone into this part in more detail, so I won't talk about it here. Suffice to say I spent many fruitless hours of my education trying to find a generalised maths approach to the problems I was facing.

In practice, at least for organic chemistry, the main objective is to synthesise compounds from other compounds, typically complex ones from simple ones. The theory sort of devolves into a broad, massive decision tree. Want to make intermediate A? Try reaction B; if that doesn't work, try reaction C. C normally works for this kind of thing, so if it's not working, check that your reagents are pure. You could try D, but that's likely to deprotect the other side of A.

Systematic studies of certain reaction patterns exist, and they can certainly be helpful. Take the substitution patterns of aromatic rings, for instance. Using a bit of orbital theory, you can predict the outcome of reactions on aromatic rings based on what's already on the ring and in what position. But again, these studies were done on simple substrates and may not necessarily apply to whatever synthetic behemoth you're working on.

Lab work is hard

Finally on to the practical aspect. Chemical reactions may fail for any number of reasons that aren't theoretical. There's basic stuff like the cleanliness of your equipment and the purity of your reagents. You can lose heaps of your material by choosing the wrong solvents to work up (extract the product from the reaction mixture) with.
Most of the material has gone into the aqueous layer without you knowing it, and you've either discarded it (rookie mistake) or it's degraded or turned into something else before you realised.

Then there's the more subtle stuff. The reaction might only work with one particular stir bar because it was impregnated by a palladium catalyst at some point. Reactions often need rigorous exclusion of oxygen and water to work, but occasionally you actually need some oxygen present to make it go, and the only way you'd ever find this out is by noticing the sloppily set up reactions always seem to perform better than the rigorous ones. You have one bottle of reagent from the sixties from a company that no longer exists, and once that's used up, the new bottle of the same reagent just doesn't work (happened to me). The surface of your glassware is slightly too acidic for your reaction, and you need to silanise it to get it to work (also happened to me). Some reactions don't work because your country is just too damn humid. The procedure you're following was written by a student desperate to impress/placate their adviser, and the yields are inflated. Your current lot of acetonitrile solvent is lower quality because China shut down their polluting acrylonitrile plants in order to improve the air quality in preparation for the Olympics.

Chemistry as a subject is very bitsy and messy. The best chemists I've known often had excellent memories. But all subjects tend to be messy and bitsy once you get past the basic theory and into the fine details.

What about structural engineering? Within that field, it's quite easy to predict the strength of a beam of known material and dimension, like a steel I-beam or dimensional lumber. But what about some new material, like a composite of toothpicks embedded in Elmer's glue? Whether the material is steel or toothpick-glue composite, couldn't one "just" predict the strength from more basic physical properties?
Well yes, but that would be very complex. But I think more importantly, that wouldn't be structural engineering anymore. It would be some more basic field of physics.

You argue chemistry is "unpredictable" because reactions are described by rules and patterns rather than being derived from first principles. I posit these rules and patterns are chemistry. Without them, you no longer have chemistry. So chemistry is "unpredictable" (in your sense) by definition. This isn't unique to chemistry, really. Most fields of study are based on the application of more pure fields, adding their own rules and patterns to enable higher-level reasoning about more complex systems.

The answer is dimensionality reduction: a reaction has billions and billions of atoms interacting with one another, but we create analogies to the interactions using just a few symbols which we manipulate using rules; a symbolic analogy of countless atoms interacting. But this process implies loss of information about reality. The simpler the analogy, the higher the loss of information and the less accurate the analogy. The results of symbol manipulation will differ from the reality of the reaction.

The average of a set of numbers is a good example: you reduce a set of n dimensions onto a single dimension. There is a loss of information. Another example: Newtonian physics did not predict what scientists saw with the famous double-slit experiment. The moment that happens, the rules and symbols you use to make predictions (like the yield of a chemical reaction) become useless. So, it's not that chemistry is unpredictable; the symbols we use to make predictions about chemistry are not good enough. The only way to make 100% accurate predictions is to simulate every single atom and subatomic particle and be certain the rules we use to define the interactions are 100% analogous to what happens in reality. We know this to be impossible due to the uncertainty principle.
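The averaging example is easy to demonstrate: many different data sets collapse onto the same single number, so the reduction throws information away and cannot be inverted. A minimal illustration:

```python
def mean(xs):
    """Collapse an n-dimensional data set onto a single dimension."""
    return sum(xs) / len(xs)

# Two very different "systems" reduce to the same one-number summary.
uniform = [5.0, 5.0, 5.0, 5.0]
extreme = [0.0, 0.0, 0.0, 20.0]

print(mean(uniform))  # 5.0
print(mean(extreme))  # 5.0 as well: the detail distinguishing them is gone
```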
Quantum chemistry has much more complex models that are a better analogy to a reaction, thus it is a better predictor, but never 100% accurate.

Any basic text in Organic Chem has a table of contents. So for a given transformation such as reduction it will list reagents (the chemicals or conditions, e.g. heat or light, that appear above the arrow connecting reactants with products). For a simple reaction such as the sodium borohydride reduction of acetone to isopropanol, I have absolute faith that if I carry this out in the laboratory it will work. If it didn't work, I would check the labels on the reagent bottles and confirm the identity/purity of the chemicals used. If these checked out and the reaction still failed, it would be in the category of dropping an apple and watching it ascend up toward the sky. It is not a matter of a failed opinion. Now if I change the substrate to a large polyfunctional molecule, the analogous reduction may not occur at all or may yield a highly rearranged product. Retroanalysis may provide a rationale, but for the bench chemist doing the reduction, it is an opinion as to whether it is worth trying the reaction in the first place.

Simply put, it is because we don't have complete or near-complete understanding of the forces that drive chemical reactions; every few atoms added to the structure of the compounds will add new forces and layers of complexity that we haven't accounted for in our simple 300 years of chemistry knowledge. You can sense this when you learn how the theories show their limitations at the point where complexity surfaces (for instance Lewis, Hückel, etc.).
Rice University

Physics 521: Quantum Mechanics I

Course Outline
Introduction: course overview, history of quantum mechanics
Mathematical foundation of quantum mechanics: quantum states and Hilbert spaces, observables and operators, commutation relations and Heisenberg's uncertainty principle, pure and mixed states, density operator
Quantum dynamics: time evolution and the Schrödinger equation, Schrödinger and Heisenberg pictures, quantization of the harmonic oscillator, propagators and Feynman path integrals, potential and gauge transformations
Theory of angular momentum: rotation and the angular momentum operator, spin and the SU(2) group, orbital angular momentum, solution of the hydrogen atom (Schrödinger equation for a central potential), addition of angular momenta and Clebsch-Gordan coefficients, tensor operators and the Wigner-Eckart theorem
Symmetry in quantum mechanics: conservation laws and degeneracies, parity (space inversion), time-reversal symmetry

Typical Organization
Lectures: T Th 1:00 - 2:15 PM
Homework (30%)
Midterm Exam (30%)
Final Exam (40%)

Main Text: J.J. Sakurai, Modern Quantum Mechanics (Addison-Wesley, 2010)
Other Texts:
R. Shankar, Principles of Quantum Mechanics, Springer, 1994 (2nd Ed.)
E. Merzbacher, Quantum Mechanics, Wiley, 1997.
A. Messiah, Quantum Mechanics, Dover, 1999.
Quantum theory of the atom
From Wikiversity

The simplest atoms to consider are hydrogenic atoms, such as H and He+, because there are no interactions between electrons to complicate the problem.

Atom

I INTRODUCTION

Water Molecule: A water molecule consists of an oxygen atom and two hydrogen atoms, which are attached at an angle of 105°. © Microsoft Corporation. All Rights Reserved.

Atom, tiny basic building block of matter. All the material on Earth is composed of various combinations of atoms. Atoms are the smallest particles of a chemical element that still exhibit all the chemical properties unique to that element. A row of 100 million atoms would be only about a centimeter long. See Chemical Element.

Understanding atoms is key to understanding the physical world. More than 100 different elements exist in nature, each with its own unique atomic makeup. The atoms of these elements react with one another and combine in different ways to form a virtually unlimited number of chemical compounds. When two or more atoms combine, they form a molecule. For example, two atoms of the element hydrogen (abbreviated H) combine with one atom of the element oxygen (O) to form a molecule of water (H2O). Since all matter, from its formation in the early universe to present-day biological systems, consists of atoms, understanding their structure and properties plays a vital role in physics, chemistry, and medicine. In fact, knowledge of atoms is essential to the modern scientific understanding of the complex systems that govern the physical and biological worlds.

Atoms and the compounds they form play a part in almost all processes that occur on Earth and in space. All organisms rely on a set of chemical compounds and chemical reactions to digest food, transport energy, and reproduce. Stars such as the Sun rely on reactions in atomic nuclei to produce energy.
Scientists duplicate these reactions in laboratories on Earth and study them to learn about processes that occur throughout the universe.

Throughout history, people have sought to explain the world in terms of its most basic parts. Ancient Greek philosophers conceived of the idea of the atom, which they defined as the smallest possible piece of a substance. The word atom comes from the Greek word meaning "not divisible." The ancient Greeks also believed this fundamental particle was indestructible. Scientists have since learned that atoms are not indivisible but made of smaller particles, and atoms of different elements contain different numbers of each type of these smaller particles.

Atoms are made of smaller particles, called electrons, protons, and neutrons. An atom consists of a cloud of electrons surrounding a small, dense nucleus of protons and neutrons. Electrons and protons have a property called electric charge, which affects the way they interact with each other and with other electrically charged particles. Electrons carry a negative electric charge, while protons have a positive electric charge. The negative charge is the opposite of the positive charge, and, like the opposite poles of a magnet, these opposite electric charges attract one another. Conversely, like charges (negative and negative, or positive and positive) repel one another. The attraction between an atom's electrons and its protons holds the atom together. Normally, an atom is electrically neutral, which means that the negative charge of its electrons is exactly equaled by the positive charge of its protons.

The nucleus contains nearly all of the mass of the atom, but it occupies only a tiny fraction of the space inside the atom. The diameter of a typical nucleus is only about 1 × 10⁻¹⁴ m (4 × 10⁻¹³ in), or about 1/100,000 of the diameter of the entire atom. The electron cloud makes up the rest of the atom's overall size.
If an atom were magnified until it was as large as a football stadium, the nucleus would be about the size of a grape.

A Electrons

Electrons are tiny, negatively charged particles that form a cloud around the nucleus of an atom. Each electron carries a single fundamental unit of negative electric charge, or –1. The electron is one of the lightest particles with a known mass. A droplet of water weighs about a billion, billion, billion times more than an electron. Physicists believe that electrons are one of the fundamental particles of physics, which means they cannot be split into anything smaller. Physicists also believe that electrons do not have any real size, but are instead true points in space; that is, an electron has a radius of zero.

Electrons act differently than everyday objects because electrons can behave as both particles and waves. Actually, all objects have this property, but the wavelike behavior of larger objects, such as sand, marbles, or even people, is too small to measure. In very small particles wave behavior is measurable and important. Electrons travel around the nucleus of an atom, but because they behave like waves, they do not follow a specific path like a planet orbiting the Sun does. Instead they form regions of negative electric charge around the nucleus. These regions are called orbitals, and they correspond to the space in which the electron is most likely to be found. As we will discuss later, orbitals have different sizes and shapes, depending on the energy of the electrons occupying them.

B Protons and Neutrons

Protons carry a positive charge of +1, exactly the opposite electric charge as electrons. The number of protons in the nucleus determines the total quantity of positive charge in the atom. In an electrically neutral atom, the number of the protons and the number of electrons are equal, so that the positive and negative charges balance out to zero.
The proton is very small, but it is fairly massive compared to the other particles that make up matter. A proton's mass is about 1,840 times the mass of an electron. Neutrons are about the same size as protons but their mass is slightly greater. Without neutrons present, the repulsion among the positively charged protons would cause the nucleus to fly apart. Consider the element helium, which has two protons in its nucleus. If the nucleus did not contain neutrons as well, it would be unstable because of the electrical repulsion between the protons. (The process by which neutrons hold the nucleus together is explained below in the Strong Force section of this article.) A helium nucleus needs either one or two neutrons to be stable. Most atoms are stable and exist for a long period of time, but some atoms are unstable and spontaneously break apart and change, or decay, into other atoms.

Unlike electrons, which are fundamental particles, protons and neutrons are made up of other, smaller particles called quarks. Physicists know of six different quarks. Neutrons and protons are made up of up quarks and down quarks, two of the six different kinds of quarks. The fanciful names of quarks have nothing to do with their properties; the names are simply labels to distinguish one quark from another.

Quarks are unique among all elementary particles in that they have electric charges that are fractions of the fundamental charge. All other particles have electric charges of zero or of whole multiples of the fundamental charge. Up quarks have electric charges of +⅔. Down quarks have charges of −⅓. A proton is made up of two up quarks and a down quark, so its electric charge is ⅔ + ⅔ − ⅓, for a total charge of +1. A neutron is made up of an up quark and two down quarks, so its electric charge is ⅔ − ⅓ − ⅓, for a net charge of zero. Physicists believe that quarks are true fundamental particles, so they have no internal structure and cannot be split into something smaller.
Atoms have several properties that help distinguish one type of atom from another and determine how atoms change under certain conditions.

A Atomic Number

Each element has a unique number of protons in its atoms. This number is called the atomic number (abbreviated Z). Because atoms are normally electrically neutral, the atomic number also specifies how many electrons an atom will have. The number of electrons, in turn, determines many of the chemical and physical properties of the atom. The lightest atom, hydrogen, has an atomic number equal to one, contains one proton, and (if electrically neutral) one electron. The most massive stable atom found in nature is bismuth (Z = 83). More massive unstable atoms also exist in nature, but they break apart and change into other atoms over time. Scientists have produced even more massive unstable elements in laboratories.

B Mass Number

The total number of protons and neutrons in the nucleus of an atom is the mass number of the atom (abbreviated A). The mass number of an atom is an approximation of the mass of the atom. The electrons contribute very little mass to the atom, so they are not included in the mass number. A stable helium atom can have a mass number equal to three (two protons plus one neutron) or equal to four (two protons plus two neutrons). Bismuth, with 83 protons, requires 126 neutrons for stability, so its mass number is 209 (83 protons plus 126 neutrons).

C Atomic Mass and Weight

Scientists usually measure the mass of an atom in terms of a unit called the atomic mass unit (abbreviated amu). They define an amu as exactly 1/12 the mass of an atom of carbon with six protons and six neutrons. On this scale, the mass of a proton is 1.00728 amu and the mass of a neutron is 1.00866 amu. The mass of an atom measured in amu is nearly equal to its mass number. Scientists can use a device called a mass spectrometer to measure atomic mass. A mass spectrometer removes one or more electrons from an atom.
The electrons are so light that removing them hardly changes the mass of the atom at all. The spectrometer then sends the atom through a magnetic field, a region of space that exerts a force on magnetic or electrically charged particles. Because of the missing electrons, the atom has more protons than electrons and hence a net positive charge. The magnetic field bends the path of the positively charged atom as it moves through the field. The amount of bending depends on the atom's mass. Lighter atoms will be affected more strongly than heavier atoms. By measuring how much the atom's path curves, a scientist can determine the atom's mass.

The atomic mass of an atom, which depends on the number of protons and neutrons present, also relates to the atomic weight of an element. Weight usually refers to the force of gravity on an object, but atomic weight is really just another way to express mass. An element's atomic weight is given in grams. It represents the mass of one mole (6.02 × 10²³ atoms) of that element. Numerically, the atomic weight and the atomic mass of an element are the same, but the first is expressed in grams and the second is in atomic mass units. So, the atomic weight of hydrogen is 1 gram and the atomic mass of hydrogen is 1 amu.

D Isotopes

Hydrogen Isotopes: The atomic number of an atom represents the number of protons in its nucleus. This number remains constant for a given element. The number of neutrons may vary, however, creating isotopes that have the same chemical behavior, but different mass. The isotopes of hydrogen are protium (no neutrons), deuterium (one neutron), and tritium (two neutrons). Hydrogen always has one proton in its nucleus. These illustrations are not to scale: the nucleus is approximately 10,000 times smaller than the average orbital radius of the electron, which defines the overall size of the atom.

Atoms of the same element that differ in mass number are called isotopes.
Since all atoms of a given element have the same number of protons in their nucleus, isotopes must have different numbers of neutrons. Helium, for example, has an atomic number of 2 because of the two protons in its nucleus. But helium has two stable isotopes—one with one neutron in the nucleus and a mass number equal to three and another with two neutrons and a mass number equal to four. Scientists attach the mass number to an element’s name to differentiate between isotopes. Under this convention, helium with a mass number of three is called helium-3, and helium with a mass number of four is called helium-4. Helium in its natural form on Earth is a mixture of these two isotopes. The percentage of each isotope found in nature is called the isotope’s isotopic abundance. The isotopic abundance of helium-3 is very small, only 0.00014 percent, while the abundance of helium-4 is 99.99986 percent. This means that only about one of every 1 million helium atoms is helium-3, and the rest are all helium-4. Bismuth has only one naturally occurring stable isotope, bismuth-209. Bismuth-209’s isotopic abundance is therefore 100 percent. The element with the largest number of stable isotopes found in nature is tin, which has ten stable isotopes. All elements also have unstable isotopes, which are more susceptible to breaking down, or decaying, than are the other isotopes of an element. When atoms decay, the number of protons in their nucleus changes. Since the number of protons in the nucleus of an atom determines what element that atom belongs to, this decay changes one element into another. Different isotopes decay at different rates. One way to measure the decay rate of an isotope is to find its half-life. An isotope’s half-life is the time that passes until half of a sample of an isotope has decayed. The various isotopes of a given element have nearly identical chemical properties and many similar physical properties. They differ, of course, in their mass. 
The mass of a helium-3 atom, for example, is 3.016 amu, while the mass of a helium-4 atom is 4.003 amu. Usually scientists do not specify the atomic weight of an element in terms of one isotope or another. Instead, they express atomic weight as an average of all of the naturally occurring isotopes of the element, taking into account the isotopic abundance of each. For example, the element copper has two naturally occurring isotopes: copper-63, with a mass of 62.930 amu and an isotopic abundance of 69.2 percent, and copper-65, with a mass of 64.928 amu and an abundance of 30.8 percent. The average mass of naturally occurring copper atoms is equal to the sum of the atomic mass for each isotope multiplied by its isotopic abundance. For copper, it would be (62.930 amu × 0.692) + (64.928 amu × 0.308) = 63.545 amu. The atomic weight of copper is therefore 63.545 g.

E Radioactivity

Figure: Alpha Decay. One of the ways in which an unstable radioactive atom can decay is by emitting an alpha particle. Alpha particles consist of two protons and two neutrons, and are identical to the nucleus of a helium atom. When an atom’s nucleus emits an alpha particle, the atom transmutes into an atom of a different element.

About 300 combinations of protons and neutrons in nuclei are stable enough to exist in nature. Scientists can produce another 3,000 nuclei in the laboratory. These nuclei tend to be extremely unstable because they have too many protons or neutrons to stay in one piece for long. Unstable nuclei, whether naturally occurring or created in the laboratory, break apart or change into stable nuclei through a variety of processes known as radioactive decays (see Radioactivity). Some nuclei with an excess of protons simply eject a proton. A similar process can occur in nuclei with an excess of neutrons. A more common process of decay is for a nucleus to simultaneously eject a cluster of 2 protons and 2 neutrons.
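Returning to the copper example above, the abundance-weighted average can be reproduced in a few lines (a minimal sketch using the isotope data quoted in the passage):

```python
def average_atomic_mass(isotopes):
    """Weighted average of isotope masses by fractional abundance.

    isotopes: list of (mass_in_amu, fractional_abundance) pairs.
    """
    return sum(mass * abundance for mass, abundance in isotopes)

# Copper's two natural isotopes, as given in the passage.
copper = [(62.930, 0.692), (64.928, 0.308)]
print(round(average_atomic_mass(copper), 3))  # 63.545
```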
This cluster is actually the nucleus of an atom of helium-4, and this decay process is called alpha decay. Before scientists identified the ejected particle as a helium-4 nucleus, they called it an alpha particle. Helium-4 nuclei are still sometimes called alpha particles. The most common way for a nucleus to get rid of excess protons or neutrons is to convert a proton into a neutron or a neutron into a proton. This process is known as beta decay. The total electric charge before and after the decay must remain the same. Because protons are electrically charged and neutrons are not, the reaction must involve other charged particles. For example, a neutron can decay into a proton, an electron, and another particle called an electron antineutrino. The neutron has no charge, so the charge at the beginning of the reaction is zero. The proton has an electric charge of +1 and the electron has an electric charge of –1. The antineutrino is a tiny particle with no electric charge. The electric charges of the proton and electron cancel each other, leaving a net charge of zero. The electron is the most easily detected product of this type of beta decay, and scientists called these products beta particles before they identified them as electrons.

Figure: Beta Decay. Beta decay can occur in two ways. As shown on the left, a neutron turns into a proton by emitting an antineutrino and a negatively charged beta particle. As shown on the right, a proton turns into a neutron by emitting a neutrino and a positively charged beta particle. Positive beta particles are called positrons and negative beta particles are called electrons. After the decay, the nucleus of the atom contains either one less or one more proton. Beta decay changes an atom of one element into an atom of a new element.

Beta decay also results when a proton changes to a neutron. The end result of this decay must have a charge of +1 to balance the charge of the initial proton.
The proton changes into a neutron, an anti-electron (also called a positron), and an electron neutrino. A positron is identical to an electron, except the positron has an electric charge of +1. The electron neutrino is a tiny, electrically neutral particle. The difference between the antineutrino in neutron-proton beta decay and the neutrino in proton-neutron beta decay is very subtle—so subtle that scientists have yet to prove that a difference actually exists. While scientists often create unstable nuclei in the laboratory, several radioactive isotopes also occur naturally. These atoms decay more slowly than most of the radioactive isotopes created in laboratories. If they decayed too rapidly, they wouldn’t stay around long enough for scientists to find them. The heavy radioactive isotopes found on Earth formed in the interiors of stars more than 5 billion years ago. They were part of the cloud of gas and dust that formed our solar system and, as such, are reminders of the origin of Earth and the other planets. In addition, the decay of radioactive material provides much of the energy that heats Earth’s core. The most common naturally occurring radioactive isotopes are potassium-40 (see Potassium), thorium-232 (see Thorium), and uranium-238 (see Uranium). Atoms of these isotopes last, on average, for billions of years before undergoing alpha or beta decay. The steady decay of these isotopes and other, more stable atoms allows scientists to determine the age of minerals in which these isotopes occur. Scientists begin by estimating the amount of isotope that was present when the mineral formed, then measure how much has decayed. Knowing the rate at which the isotope decays, they can determine how much time has passed. This process, known as radioactive dating (see Dating Methods), allows scientists to measure the age of Earth. The currently accepted value for Earth’s age is about 4.5 billion years. 
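The radioactive-dating arithmetic described above can be sketched in a few lines (a minimal example; the potassium-40 half-life of about 1.25 billion years is an assumed round figure, not quoted in the passage):

```python
import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Age of a sample given the fraction of a radioactive isotope remaining.

    N/N0 = (1/2) ** (t / half_life)  =>  t = half_life * log2(N0 / N)
    """
    return half_life_years * math.log2(1.0 / remaining_fraction)

# Assumed half-life for potassium-40: about 1.25 billion years.
K40_HALF_LIFE = 1.25e9

# A mineral retaining one quarter of its original potassium-40
# has been decaying for two half-lives.
print(age_from_fraction(0.25, K40_HALF_LIFE))  # 2.5e9 years
```

In practice geologists compare the remaining parent isotope with the accumulated decay products rather than measuring the fraction directly, but the underlying logarithmic relation is the same.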
Scientists have also examined rocks from the Moon and other objects in the solar system and have found that they have similar ages.

In physics, a force is a push or pull on an object. There are four fundamental forces, three of which—the electromagnetic force, the strong force, and the weak force—are involved in keeping stable atoms in one piece and determining how unstable atoms will decay. The electromagnetic force keeps electrons attached to their atom. The strong force holds the protons and neutrons together in the nucleus. The weak force governs how atoms decay when they have excess protons or neutrons. The fourth fundamental force, gravity, only becomes apparent with objects much larger than subatomic particles.

A Electromagnetic Force

The most familiar of the forces at work inside the atom is the electromagnetic force. This is the same force that causes people’s hair to stick to a brush or comb when they have a buildup of static electricity. The electromagnetic force causes opposite electric charges to attract each other. Because of this force, the negatively charged electrons in an atom are attracted to the positively charged protons in the atom’s nucleus. This force of attraction binds the electrons to the atom. The electromagnetic force becomes stronger as the distance between charges becomes smaller. This property usually causes oppositely charged particles to come as close to each other as possible. For many years, scientists wondered why electrons didn’t just spiral into the nucleus of an atom, getting as close as possible to the protons. Physicists eventually learned that particles as small as electrons can behave like waves, and this property keeps electrons at set distances from the atom’s nucleus. The wavelike nature of electrons is discussed below in the Quantum Atom section of this article. The electromagnetic force also causes like charges to repel each other.
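The distance dependence described above is captured by Coulomb’s law, F = k q1 q2 / r^2, a standard formula the passage does not state explicitly. A small sketch, with the Bohr-radius separation assumed for illustration:

```python
COULOMB_K = 8.988e9            # N*m^2/C^2, Coulomb constant (standard value)
ELEMENTARY_CHARGE = 1.602e-19  # C, charge of a proton (standard value)

def coulomb_force(q1, q2, r):
    """Force between two point charges: positive = repulsive, negative = attractive."""
    return COULOMB_K * q1 * q2 / (r * r)

# Electron (-e) and proton (+e) separated by a typical atomic distance
# (about 5.3e-11 m, roughly the Bohr radius; assumed for illustration).
r = 5.3e-11
f = coulomb_force(-ELEMENTARY_CHARGE, ELEMENTARY_CHARGE, r)
print(f)  # negative: the charges attract

# Halving the distance makes the force about four times stronger.
print(coulomb_force(-ELEMENTARY_CHARGE, ELEMENTARY_CHARGE, r / 2) / f)
```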
The negatively charged electrons repel one another and tend to move far apart from each other, but the positively charged nucleus exerts enough electromagnetic force to keep the electrons attached to the atom. Protons in the nucleus also repel one another, but, as described below, the strong force overcomes the electromagnetic force in the nucleus to hold the protons together.

B Strong Force

Protons and neutrons in the nuclei of atoms are held together by the strong force. This force must overcome the electromagnetic force of repulsion the protons in a nucleus exert on one another. The strong force that occurs between protons alone, however, is not enough to hold them together. Other particles that add to the strong force, but not to the electromagnetic force, must be present to make a nucleus stable. The particles that provide this additional force are neutrons. Neutrons add to the strong force of attraction but have no electric charge and so do not increase the electromagnetic repulsion.

B1 Range of the Strong Force

The strong force only operates at very short range—about 2 femtometers (abbreviated fm), or 2 × 10^-15 m (8 × 10^-14 in). Physicists also use the word fermi (also abbreviated fm) for this unit in honor of Italian-born American physicist Enrico Fermi. The short-range property of the strong force makes it very different from the electromagnetic and gravitational forces. These latter forces become weaker as distance increases, but they continue to affect objects millions of light-years away from each other. Conversely, the strong force has such limited range that not even all protons and neutrons in the same nucleus feel each other’s strong force. Because the diameter of even a small nucleus is about 5 to 6 fm, protons and neutrons on opposite sides of a nucleus only feel the strong force from their nearest neighbors. The strong force differs from electromagnetic and gravitational forces in another important way—the way it changes with distance.
Electromagnetic and gravitational forces of attraction increase as particles move closer to one another, no matter how close the particles get. This increase causes particles to move as close together as possible. The strong force, on the other hand, remains roughly constant as protons and neutrons move closer together than about 2 fm. If the particles are forced much closer together, the attractive nuclear force suddenly turns repulsive. This property causes nuclei to form with the same average spacing—about 2 fm—between the protons and neutrons, no matter how many protons and neutrons there are in the nucleus. The unique nature of the strong force determines the relative number of protons and neutrons in the nucleus. If a nucleus has too many protons, the strong force cannot overcome the electromagnetic repulsion of the protons. If the nucleus has too many neutrons, the excess strong force tries to crowd the protons and neutrons too close together. Most stable atomic nuclei fall between these extremes. Lighter nuclei, such as carbon-12 and oxygen-16, are made up of 50 percent protons and 50 percent neutrons. More massive nuclei, such as bismuth-209, contain about 40 percent protons and 60 percent neutrons.

B2 Pions

Particle physicists explain the behavior of the strong force by introducing another type of particle, called a pion. Protons and neutrons interact in the nucleus by exchanging pions. Exchanging pions pulls protons and neutrons together. The process is similar to two people having a game of catch with a heavy ball, but with each person attached to the ball by a spring. As one person throws the ball to the other, the spring pulls the thrower toward the ball. If the players exchange the ball rapidly enough, the ball and springs become just a blur to an observer, and it appears as if the two throwers are simply pulled toward one another. This is what occurs in the nuclei of atoms.
The protons and neutrons in the nucleus are the people, pions act as the ball, and the strong force acts as the springs holding everything together. Pions in the nucleus exist only for the briefest instant of time, no more than 1 × 10^-23 seconds, but even during their short existence they can provide the attraction that holds the nucleus together. Pions can also exist as independent particles outside of the nucleus of an atom. Scientists have created them by striking high-speed protons against a target. Even though the free pions also live only for a short period of time (about 1 × 10^-8 seconds), scientists have been able to study their properties.

C Weak Force

The weak force lives up to its name—it is much weaker than the electromagnetic and strong forces. Like the strong force, it only acts over a short distance, about 0.01 fm. Unlike these other forces, however, the weak force affects all the particles in an atom. The electromagnetic force only affects the electrons and protons, and the strong force only affects the protons and neutrons. When a nucleus has too many protons to hold together or so many neutrons that the strong force squeezes too tightly, the weak force actually changes one type of particle into another. When an atom undergoes one type of decay, for example, the weak force causes a neutron to change into a proton, an electron, and an electron antineutrino. The total electric charge and the total energy of the particles remain the same before and after the change.

Scientists of the early 20th century found they could not explain the behavior of atoms using their current knowledge of matter. They had to develop a new view of matter and energy to accurately describe how atoms behaved. They called this theory quantum theory, or quantum mechanics. Quantum theory describes matter as acting both as a particle and as a wave. In the visible objects encountered in everyday life, the wavelike nature of matter is too small to be apparent.
Wavelike nature becomes important, however, in microscopic particles such as electrons. As we have discussed, electrons in atoms behave like waves. They exist as a fuzzy cloud of negative charge around the nucleus, instead of as a particle located at a single point.

A Wave Behavior

In order to understand the quantum model of the atom, we must know some basic facts about waves. Waves are vibrations that repeat regularly over and over again. A familiar example of waves occurs when one end of a rope is tied to a fixed object and someone moves the other end up and down. This action creates waves that travel along the rope. The highest point that the rope reaches is called the crest of the wave. The lowest point is called the trough of the wave. Troughs and crests follow each other in a regular sequence. The distance from one trough to the next trough, or from one crest to the next crest, is called a wavelength. The number of wavelengths that pass a certain point in a given amount of time is called the wave’s frequency. In physics, the word wave usually means the entire pattern, which may consist of many individual troughs and crests. For example, when the person holding the loose end of the rope moves it up and down very fast, many troughs and crests occupy the rope at once. A physicist would use the word wave to describe the entire set of troughs and crests on the rope. When two waves meet each other, they merge in a process called interference. Interference creates a new wave pattern. If two waves with the same wavelength and frequency come together, the resulting pattern depends on the relative position of the waves’ crests. If the crests and troughs of the two waves coincide, the waves are said to be in phase. Waves in phase with each other will merge to produce higher crests and lower troughs. Physicists call this type of interference constructive interference.
Sometimes waves with the same wavelength and frequency are out of phase, meaning they meet in such a way that their respective crests and troughs do not coincide. In these cases the waves produce destructive interference. If two identical waves are exactly half a wavelength out of phase, the crests of one wave line up with the troughs of the other. These waves cancel each other out completely, and no wave will appear. If two waves meet that are not exactly in phase and not exactly one-half wavelength out of phase, they will interfere constructively in some places and destructively in others, producing a complicated new wave. See also Wave Motion.

B Electrons as Waves

Figure: Wave Aspect of Electrons. This pattern is produced when a narrow beam of electrons passes through a sample of titanium-nickel alloy. The pattern reveals that the electrons move through the sample more like waves than particles. The electrons diffract (bend) around atoms, breaking into many beams and spreading outward. The diffracted beams then interfere with one another, cancelling each other out in some places and reinforcing each other in other places. The bright spots are places where the beams interfered constructively, or reinforced each other. The dark spots are areas in which the beams interfered destructively, or cancelled each other out. Science Source/Photo Researchers, Inc.

Electrons behave as both particles and waves in atoms. This characteristic is called wave-particle duality. Wave-particle duality actually affects all particles and collections of particles, including protons, neutrons, and atoms themselves. But in terms of the structure of the atom, the wavelike nature of the electron is the most important. As waves, electrons have wavelengths and frequencies. The wavelength of an electron depends on the electron’s energy. Since the energy of electrons is kinetic (energy related to motion), an electron’s wavelength depends on how fast it is moving.
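The constructive and destructive interference just described can be demonstrated numerically by adding two identical waves (a minimal sketch; the wave parameters are arbitrary):

```python
import math

def superpose(amplitude, wavelength, phase_shift, x):
    """Sum of two identical waves, the second shifted by phase_shift radians."""
    k = 2 * math.pi / wavelength
    return (amplitude * math.sin(k * x)
            + amplitude * math.sin(k * x + phase_shift))

x = 0.3  # an arbitrary sample point

# In phase (shift 0): crests line up, and the combined wave is twice as tall.
in_phase = superpose(1.0, 2.0, 0.0, x)

# Half a wavelength out of phase (shift pi): crests meet troughs and cancel.
out_of_phase = superpose(1.0, 2.0, math.pi, x)

print(in_phase, out_of_phase)  # the second value is essentially zero
```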
The more energy an electron has, the shorter its wavelength is. Electron waves can interfere with each other, just as waves along a rope do. Because of the electron’s wave-particle duality, physicists cannot define an electron’s exact location in an atom. If the electron were just a particle, measuring its location would be relatively simple. As soon as physicists try to measure its location, however, the electron’s wavelike nature becomes apparent, and they cannot pinpoint an exact location. Instead, physicists calculate the probability that the electron is located in a certain place. Adding up all these probabilities, physicists can produce a picture of the electron that resembles a fuzzy cloud around the nucleus. The densest part of this cloud represents the place where the electron is most likely to be located.

C Electron Orbitals and Shells

Figure: Electron Configuration of Nickel. Electrons surround the nucleus of an atom in patterns of shells and subshells. In this table showing the electron configuration of a nickel atom, the large numbers (1, 2, 3, 4) indicate shells of electrons (shown as small spheres), the letters (s, p, d) indicate subshells within these shells, and the exponents indicate the number of electrons present in each subshell. Subshells may be further divided into orbitals. Each orbital can contain two electrons, and orbitals are designated in the table by horizontal bars connecting pairs of electrons. The small up and down arrows indicate the direction of each electron’s spin. Electrons that occupy the same orbital always have opposite spins. If all the electrons were stripped away from an atom of nickel (that is, the atom was totally ionized) and electrons were allowed to return one at a time, the electrons would fill up the slots indicated on the chart from left to right, top to bottom. Electrons do not always fill all the subshells of a shell before beginning to fill the next shell. The s subshell of shell 4, for example, actually fills before the d subshell of shell 3 (shown as the lowest row in this chart).

Physicists call the region of space an electron occupies in an atom the electron’s orbital. Similar orbitals constitute groups called shells. The electrons in the orbitals of a particular shell have similar levels of energy. This energy is in the form of both kinetic energy and potential energy. Lower shells are close to the nucleus and higher shells are farther from the nucleus. Electrons occupying orbitals in higher shells generally have more energy than electrons occupying orbitals in lower shells.

C1 Differences Between Orbitals

Figure: Atomic Radius Variation in the Periodic Table. The size of the atoms of an element varies in a regular way across the periodic table, increasing down the groups (columns), and decreasing along the periods (rows) from left to right. The size of an atom is largely determined by its electrons. The electrons are arranged in shells surrounding the nucleus of each atom. The top elements of every group have only one or two electron shells. Atoms of elements further down the table have more shells and are therefore larger in size. Moving across a period from left to right, the outermost electron shell fills up but no new shells are added. At the same time, the number of protons in the nucleus of each atom increases. Protons attract electrons. The greater the number of protons present, the stronger the attraction that holds the electrons closer to the nucleus, and the smaller the size of the shells.

The wavelike nature of electrons sets boundaries for their possible locations and determines what shape their orbital, or cloud of probability, will form. Orbitals differ from each other in size, angular momentum, and magnetic properties.
In general, angular momentum is a measure of an object’s rotational motion, based on how fast the object is revolving, the object’s mass, and the object’s distance from the axis around which it is revolving. The angular momentum of a whirling ball tied to a string, for example, would be greater if the ball was heavier, the string was longer, or the whirling was faster. In atoms, the angular momentum of an electron orbital depends on the size and shape of the orbital. Orbitals with the same size and shape all have the same angular momentum. Some orbitals, however, can differ in shape but still have the same angular momentum. The magnetic properties of an orbital describe how it would behave in a magnetic field. Magnetic properties also depend on the size and shape of the orbital, as well as on the orbital’s orientation in space. The orbitals in an atom must occur at certain distances from the nucleus to create a stable atom. At these distances, the orbitals allow the electron wave to complete one or more half-wavelengths (½, 1, 1½, 2, 2½, and so on) as it travels around the nucleus. The electron wave can then double back on itself and constructively interfere with itself in a way that reinforces the wave. Any other distance would cause the electron to interfere with its own wave in an unpredictable and unstable way, creating an unstable atom.

C2 Principal and Secondary Quantum Numbers

Figure: Quantum Description of Electrons. Scientists describe the properties of an electron in an atom with a set of numbers called quantum numbers. Electrons are a type of particle known as a fermion, and according to a rule of physics, no two fermions can be exactly alike. Each electron in an atom therefore has different properties and a different set of quantum numbers. Electrons that share the same principal quantum number form a shell in an atom. This chart shows the first three shells. The two electrons that share the principal quantum number 1 form the first shell. One of these electrons has the quantum numbers 1, s, 0, 1/2, and the other electron has the quantum numbers 1, s, 0, -1/2.

Physicists call the number of half-wavelengths that an orbital allows the orbital’s principal quantum number (abbreviated n). In general, this number determines the size of the orbital. Larger orbitals allow more half-wavelengths and therefore have higher principal quantum numbers. The orbital that allows one half-wavelength has a principal quantum number of one. Only one orbital allows one half-wavelength. More than one orbital can allow two or more half-wavelengths. These orbitals may have the same principal quantum number, but they differ from each other in their angular momentum and their magnetic properties. The orbitals that allow one wavelength have a principal quantum number of 2 (n = 2), the orbitals that allow one and a half wavelengths have a principal quantum number of 3 (n = 3), and so on. The set of orbitals with the same principal quantum number make up a shell.

Figure: Atomic Orbital Shapes. Atomic orbitals are mathematical descriptions of where the electrons in an atom (or molecule) are most likely to be found. These descriptions are obtained by solving an equation known as the Schrödinger equation, which expresses our knowledge of the atomic world. As the angular momentum and energy of an electron increases, it tends to reside in differently shaped orbitals. This description has been confirmed by many experiments in chemistry and physics, including an actual picture of a p-orbital made by a scanning tunneling microscope.

Physicists use a second number to describe the angular momentum of an orbital. This number is called the orbital’s secondary quantum number, or its angular momentum quantum number (abbreviated l). The largest value an orbital’s secondary quantum number can take is one less than the number of half-wavelengths the orbital allows.
This means that an orbital with a principal quantum number of n can take n possible values for its secondary quantum number, ranging from 0 to n - 1. Physicists customarily use letters to indicate orbitals with certain secondary quantum numbers. In order of increasing angular momentum, the orbitals with the six lowest secondary quantum numbers are indicated by the letters s, p, d, f, g, and h. The letter s corresponds to the secondary quantum number 0, the letter p corresponds to the secondary quantum number 1, and so on. In general, the angular momentum of an orbital depends on its shape. An s-orbital, with a secondary quantum number of 0, is spherical. A p-orbital, with a secondary quantum number of 1, resembles two hemispheres, facing one another. The possible combinations of principal and secondary quantum numbers for the first five shells are listed below.

C3 Subshells

More than one orbital can allow the same number of half-wavelengths and have the same angular momentum. Physicists call orbitals in a shell that all have the same angular momentum a subshell. They designate a subshell with the subshell’s principal and secondary quantum numbers. For example, the 1s subshell is the group of orbitals in the first shell with an angular momentum described by the letter s. The 2p subshell is the group of orbitals in the second shell with an angular momentum described by the letter p. Orbitals within a subshell differ from each other in their magnetic properties. The magnetic properties of an orbital depend on its shape and orientation in space. For example, a p-orbital can have three different orientations in space: one situated up and down, one from side to side, and a third from front to back.

C4 Magnetic Quantum Number and Spin

Physicists describe the magnetic properties of an orbital with a third quantum number called the orbital’s magnetic quantum number (abbreviated m). The magnetic quantum number determines how orbitals with the same size and angular momentum are oriented in space.
An orbital’s magnetic quantum number can only have whole number values ranging from the value of the orbital’s secondary quantum number down to the negative value of the secondary quantum number. A p-orbital, for example, has a secondary quantum number of 1 (l = 1), so the magnetic quantum number has three possible values: +1, 0, and -1. This means the p-orbital has three possible orientations in space. An s-orbital has a secondary quantum number of 0 (l = 0), so the magnetic quantum number has only one possibility: 0. This orbital is a sphere, and a sphere can only have one orientation in space. For a d-orbital, the secondary quantum number is 2 (l = 2), so the magnetic quantum number has five possible values: -2, -1, 0, +1, and +2. A d-orbital has four possible orientations in space, as well as a fifth orbital that differs in shape from the other four. Together, the principal, secondary, and magnetic quantum numbers specify a particular orbital in an atom. Electrons are a type of particle known as a fermion. Austrian-American physicist Wolfgang Pauli discovered that no two fermions in a system can have the exact same set of quantum numbers. This principle is called the Pauli exclusion principle; for atoms, it means that no two electrons can share all of their quantum numbers. Scientists know, however, that each orbital can hold two electrons. Electrons have another property, called spin, that differentiates the two electrons in each orbital. An electron’s spin has two possible values: +½ (called spin-up) or -½ (called spin-down). These two possible values mean that two electrons can occupy the same orbital, as long as their spins are different. Physicists call spin the fourth quantum number of an electron (abbreviated ms). Spin, in addition to the other three quantum numbers, uniquely describes a particular electron in an atom.

C5 Filling Orbitals

When electrons collect around an atom’s nucleus, they fill up orbitals in a definite pattern.
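The counting rules above (l running from 0 up to n - 1, m from -l to +l, and two spin values per orbital) can be checked with a short enumeration. The resulting shell capacities follow the 2n^2 pattern implied by these rules:

```python
def shell_states(n):
    """All (l, m, spin) combinations allowed in the shell with principal
    quantum number n, following the counting rules in the text."""
    states = []
    for l in range(n):                 # secondary quantum number: 0 .. n-1
        for m in range(-l, l + 1):     # magnetic quantum number: -l .. +l
            for spin in (+0.5, -0.5):  # two spin values per orbital
                states.append((l, m, spin))
    return states

for n in range(1, 5):
    print(n, len(shell_states(n)))  # capacities 2, 8, 18, 32  (= 2 * n**2)
```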
They seek the first available orbital that takes the least amount of energy to occupy. Generally, it takes more energy to occupy orbitals with higher quantum numbers. It takes the same energy to occupy all the orbitals in a subshell. The lowest energy orbital is the one closest to the nucleus. It has a principal quantum number of 1, a secondary quantum number of 0, and a magnetic quantum number of 0. The first two electrons—with opposite spins—occupy this orbital. If an atom has more than two electrons, the electrons begin filling orbitals in the next subshell with one electron each until all the orbitals in the subshell have one electron. The electrons that are left then go back and fill each orbital in the subshell with a second electron with opposite spin. They follow this order because it takes less energy to add an electron to an empty orbital than to complete a pair of electrons in an orbital. The electrons fill all the subshells in a shell, then go on to the next shell. As the subshells and shells increase, the order of energy for orbitals becomes more complicated. For example, it takes slightly less energy to occupy the s-subshell in the fourth shell than it does to occupy the d-subshell in the third shell. Electrons will therefore fill the orbitals in the 4s subshell before they fill the orbitals in the 3d subshell, even though the 3d subshell is in a lower shell.

D Atomic Properties

The atom’s electron cloud, that is, the arrangement of electrons around an atom, determines most of the atom’s physical and chemical properties. Scientists can therefore predict how atoms will interact with other atoms by studying their electron clouds. The electrons in the outermost shell largely determine the chemical properties of an atom. If this shell is full, meaning all the orbitals in the shell have two electrons, then the atom is stable, and it won’t react readily with other atoms.
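The 4s-before-3d ordering described above is often summarized by the n + l rule (the Madelung rule, a name the passage does not use): subshells fill in order of increasing n + l, with ties going to the lower n. A minimal sketch:

```python
def madelung_order(max_n):
    """Subshell filling order by the n + l rule (assumed here as a summary
    of the behavior in the text): lower n + l fills first; ties go to lower n."""
    letters = "spdfgh"
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in subshells]

print(madelung_order(4))
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

Note that 4s (n + l = 4) sorts ahead of 3d (n + l = 5), matching the filling order described in the text; the rule has known exceptions among heavier elements, so it is a guide rather than an exact law.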
If the shell is not full, the atom will chemically react with other atoms, exchanging or sharing electrons in order to fill its outer shell. Atoms bond with other atoms to fill their outer shells because it requires less energy to exist in this bonded state. Atoms always seek to exist in the lowest energy state possible.

D1 Valence Shells

Physicists call the outer shell of an atom its valence shell. The valence shell determines the atom’s chemical behavior, or how it reacts with other elements. The fullness of an atom’s valence shell affects how the atom reacts with other atoms. Atoms with valence shells that are completely full are not likely to interact with other atoms. Six gaseous elements—helium, neon, argon, krypton, xenon, and radon—have full valence shells. These six elements are often called the noble gases because they do not normally form compounds with other elements. The noble gases are chemically inert because their atoms are in a state of low energy. A full valence shell, like that of atoms of noble gases, provides the lowest and most stable energy for an atom. Atoms that do not have a full valence shell try to lower their energy by filling up their valence shell. They can do this in several ways: Two atoms can share electrons to complete the valence shell of both atoms, an atom can shed or take on electrons to create a full valence shell, or a large number of atoms can share a common pool of electrons to complete their valence shells.

D2 Covalent Bonds

Figure: Covalent Bonds. In a covalent bond, the two bonded atoms share electrons. When the atoms involved in the covalent bond are from different elements, one of the atoms will tend to attract the shared electrons more strongly, and the electrons will spend more time near that atom; this is a polar covalent bond. When the atoms connected by a covalent bond are the same, neither atom attracts the shared electrons more strongly than the other; this is a non-polar covalent bond.

When two atoms share a pair of electrons, they form a covalent bond. When atoms bond covalently, they form molecules. A molecule can be made up of two or more atoms, all joined with covalent bonds. Each atom can share its electrons with one or more other atoms. Some molecules contain chains of thousands of covalently bonded atoms. Carbon is an important example of an element that readily forms covalent bonds. Carbon has a total of six electrons. Two of the electrons fill up the first orbital, the 1s orbital, which is the only orbital in the first shell. The rest of the electrons partially fill carbon’s valence shell. Two fill up the next orbital, the 2s orbital, which forms the 2s subshell. Carbon’s valence shell still has the 2p subshell, containing three p-orbitals. The two remaining electrons each half-fill two of the orbitals in the 2p subshell. The carbon atom thus has two half-full orbitals and one empty orbital in its valence shell. A carbon atom fills its valence shell by sharing electrons with other atoms, creating covalent bonds. The carbon atom can bond with other atoms through any of the three unfilled orbitals in its valence shell. The three available orbitals in carbon’s valence shell enable carbon to bond with other atoms in many different ways. This flexibility allows carbon to form a great variety of molecules, which can have a similarly great variety of geometrical shapes. This diversity of carbon-based molecules is responsible for the importance of carbon in molecules that form the basis for living things (see Organic Chemistry).

D3 Ionic Bonds

Figure: Ionic Bonding: Salt. The bond (left) between the atoms in ordinary table salt (sodium chloride) is a typical ionic bond. In forming the bond, sodium becomes a cation (a positively charged ion) by “giving up” its valence electron to chlorine, which then becomes an anion (a negatively charged ion).
This electron exchange is reflected in the size difference between the atoms before and after bonding. Attracted by electrostatic forces (right), the ions arrange themselves in a crystalline structure in which each is strongly attracted to a set of oppositely charged “nearest neighbors” and, to a lesser extent, all the other oppositely charged ions throughout the entire crystal.© Microsoft Corporation. All Rights Reserved. Atoms can also lose or gain electrons to complete their valence shell. An atom will tend to lose electrons if it has just a few electrons in its valence shell. After losing the electrons, the next lower shell, which is full, becomes its valence shell. An atom will tend to steal electrons away from other atoms if it only needs a few more electrons to complete the shell. Losing or gaining electrons gives an atom a net electric charge because the number of electrons in the atom is no longer the same as the number of protons. Atoms with net electric charge are called ions. Scientists call atoms with a net positive electric charge cations (pronounced CAT-eye-uhns) and atoms with a net negative electric charge anions (pronounced AN-eye-uhns). The oppositely charged cations and anions are attracted to each other by electromagnetic force and form ionic bonds. When these ions come together, they form crystals. A crystal is a solid material made up of repeating patterns of atoms. Alternating positive and negative ions build up into a solid lattice, or framework. Crystals are also called ionic compounds, or salts. The element sodium is an example of an atom that has a single electron in its valence shell. It will easily lose this electron and become a cation. Chlorine atoms are just one electron away from completing their valence shell. They will tend to steal an electron away from another atom, forming an anion. When sodium and chlorine atoms come together, the sodium atoms readily give up their outer electron to the chlorine atoms. 
The oppositely charged ions bond with each other to form the crystal known as sodium chloride, or table salt. See also Chemical Reaction. D4 Metallic Bonds Atoms can complete their valence shells in a third way: by bonding together in such a way that all the atoms in the substance share each other’s outer electrons. This is the way metallic elements bond and fill their valence shells. Metals form crystal lattice structures similar to salts, but the outer electrons in their atoms do not belong to any atom in particular. Instead, the outer electrons belong to all the atoms in the crystal, and they are free to move throughout the crystal. This property makes metals good conductors of electricity. D5 The Periodic Table [Figure: Periodic Table of Elements. The periodic table of elements groups elements in columns and rows by shared chemical properties. Elements appear in sequence according to their atomic number. Atomic weights in parentheses indicate the atomic weight of the most stable isotope.] The organization of the periodic table reflects the way elements fill their orbitals with electrons. Scientists first developed this chart by grouping together elements that behave similarly in order of increasing atomic number. Scientists eventually realized that the chemical and physical behavior of elements was dependent on the electron clouds of the atoms of each element. The periodic table does not have a simple rectangular shape. Each column lists elements that share chemical properties, properties that depend on the arrangement of electrons in the orbitals of atoms. These elements have the same number of electrons in their valence shells. Different numbers of elements have similar valence shells, so the columns of the periodic table differ in height.
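The link between atomic number, electron configuration, and valence electrons described above can be sketched by filling subshells in order, using the capacities of 2, 6, 10, and 14 electrons for s-, p-, d-, and f-subshells (two electrons per orbital). This is a simplified illustration for the first few periods; function names are illustrative:

```python
# Illustrative sketch: build a ground-state electron configuration by
# filling subshells in order, then read off the number of electrons in
# the outermost (valence) shell. The subshell order is hard-coded for
# the first few periods only.

ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def configuration(atomic_number):
    """Return a list of (subshell, electron_count) pairs."""
    config, remaining = [], atomic_number
    for subshell in ORDER:
        if remaining == 0:
            break
        n = min(remaining, CAPACITY[subshell[1]])
        config.append((subshell, n))
        remaining -= n
    return config

def valence_electrons(atomic_number):
    """Count electrons in the highest-numbered shell."""
    config = configuration(atomic_number)
    outer = max(int(sub[0]) for sub, _ in config)
    return sum(n for sub, n in config if int(sub[0]) == outer)

# Sodium (Z = 11) has one valence electron, chlorine (Z = 17) has seven,
# and neon (Z = 10), a noble gas, has a full valence shell of eight.
print(configuration(11))  # [('1s', 2), ('2s', 2), ('2p', 6), ('3s', 1)]
print(valence_electrons(11), valence_electrons(17), valence_electrons(10))
```

Elements in the same column of the periodic table, such as sodium and the other alkali metals, give the same valence count under this bookkeeping, which is why they behave alike chemically.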
The noble gases are all located in the rightmost column of the periodic table, labeled column 18 in Encarta’s periodic table. The noble gases all have full valence shells and are extremely stable. The column labeled 11 holds the elements copper, silver, and gold. These elements are metals that have partially filled valence shells and conduct electricity well. E Electron Energy Levels Each electron in an atom has a particular energy. This energy depends on the electron’s speed, the presence of other electrons, the electron’s distance from the nucleus, and the positive charge of the nucleus. For atoms with more than one electron, calculating the energy of each electron becomes too complicated to be practical. However, the order and relative energies of electrons follow the order of the electron orbitals, as discussed in the Electron Orbital and Shell section of this article. Physicists call the energy an electron has in a particular orbital the energy state of the electron. For example, the 1s orbital holds the two electrons with the lowest possible energies in an atom. These electrons are in the lowest energy state of any electrons in the atom. When an atom gains or loses energy, it does so by adding energy to, or removing energy from, its electrons. This change in energy causes the electrons to move from one orbital, or allowed energy state, to another. Under ordinary conditions, all electrons in an atom are in their lowest possible energy states, given that only two electrons can occupy each orbital. Atoms gain energy by absorbing it from light or from a collision with another particle, or they gain it by entering an electric or magnetic field. When an atom absorbs energy, one or more of its electrons moves to a higher, or more energetic, orbital. Usually atoms can only hold energy for a very short amount of time—typically 1 × 10^-12 seconds or less.
When electrons drop back down to their original energy states, they release their extra energy in the form of a photon (a packet of radiation). Sometimes this radiation is in the form of visible light. The light emitted by a fluorescent lamp is an example of this process. The outer electrons in an atom are easier to move to higher orbitals than the electrons in lower orbitals. The inner electrons require more energy to move because they are closer to the nucleus and therefore experience a stronger electromagnetic pull toward the nucleus than the outer electrons. When an inner electron absorbs energy and then falls back down, the photon it emits has more energy than the photon an outer electron would emit. The emitted energy relates directly to the wavelength of the photon. Photons with more energy are made of radiation with a shorter wavelength. When inner electrons drop down, they emit high-energy radiation, in the range of an X ray. X rays have much shorter wavelengths than visible light. When outer electrons drop down, they emit light with longer wavelengths, in the range of visible light. Physicists and chemists first learned about the properties of atoms indirectly, by studying the way that atoms join together in molecules or how atoms and molecules make up solids, liquids, and gases. Modern devices such as electron microscopes, particle traps, spectroscopes, and particle accelerators allow scientists to perform experiments on small groups of atoms and even on individual atoms. Scientists use these experiments to study the properties of atoms more directly. A Electron Microscopes [Figure: Atoms Made Visible. Individual atoms of the element silicon can be seen in this image obtained through the use of a scanning transmission electron microscope. The atoms in each pair are less than a ten-millionth of a millimeter (less than a hundred-millionth of an inch) apart.]
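The energy-wavelength relation for emitted photons described above can be put into rough numbers with the relation E = hc/λ: the higher the photon’s energy, the shorter its wavelength. A sketch comparing an outer-electron (visible-light) transition with an inner-electron (X-ray) transition; the specific energies chosen are illustrative:

```python
# Photon energy-wavelength relation: wavelength = h*c / E.
# A few-eV photon (outer-electron transition) is visible light;
# a photon of thousands of eV (inner-electron transition) is an X ray.

H = 6.626e-34       # Planck's constant, J*s
C = 2.998e8         # speed of light, m/s
EV = 1.602e-19      # joules per electronvolt

def wavelength_nm(energy_ev):
    """Wavelength (in nanometers) of a photon with the given energy."""
    return H * C / (energy_ev * EV) * 1e9

print(wavelength_nm(2.0))       # about 620 nm: visible (red-orange) light
print(wavelength_nm(10_000.0))  # about 0.12 nm: an X ray
```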
(Image credit: U.S. Department of Energy, Oak Ridge National Laboratory.) One of the most direct ways to study an object is to take its photograph. Scientists take photographs of atoms by using an electron microscope. An electron microscope imitates a normal camera, but it uses electrons instead of visible light to form an image. In photography, light reflects off of an object and is recorded on film or some other kind of detector. Taking a photograph of an atom with light is difficult because atoms are so tiny. Light, like all waves, tends to diffract, or bend around objects in its path (see Diffraction). In order to take a sharp photograph of any object, the wavelength of the light that bounces off the object must be much smaller than the size of the object. If the object is about the same size as or smaller than the light’s wavelength, the light will bend around the object and produce a fuzzy image. Atoms are so small that even the shortest wavelengths of visible light will diffract around them. Therefore, capturing photographic images of atoms requires the use of waves that are shorter than those of visible light. X rays are a type of electromagnetic radiation like visible light, but they have very short wavelengths—much too short to be visible to human eyes. X-ray wavelengths are small enough to prevent the waves from diffracting around atoms. X rays, however, have so much energy that when they bounce off an atom, they knock electrons away from the atom. Scientists, therefore, cannot use X rays to take a picture of an atom without changing the atom. They must use a different method to get an accurate picture. Electron microscopes provide scientists with an alternate method. Scientists shine electrons, instead of light, on an atom. As discussed in the Electrons as Waves section of this article, electrons have wavelike properties, so they can behave like light waves.
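The reason electrons can resolve atoms where light cannot is that an electron’s de Broglie wavelength, λ = h/p, is far shorter than the wavelength of visible light. A non-relativistic sketch (adequate at modest accelerating voltages, an assumption of this example):

```python
# de Broglie wavelength of an electron accelerated through a voltage V:
# kinetic energy E = e*V, momentum p = sqrt(2*m*E), wavelength = h/p.
# Relativistic corrections are ignored here for simplicity.

import math

H = 6.626e-34         # Planck's constant, J*s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_nm(volts):
    """de Broglie wavelength (nm) of an electron accelerated through `volts`."""
    kinetic_energy = E_CHARGE * volts             # joules
    momentum = math.sqrt(2 * M_E * kinetic_energy)
    return H / momentum * 1e9

# At 10,000 V the wavelength is about 0.012 nm: thousands of times
# shorter than visible light (roughly 400-700 nm) and much smaller than
# an atom (roughly 0.1-0.5 nm across), so diffraction no longer blurs
# the image.
print(electron_wavelength_nm(10_000))
```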
The simplest type of electron microscope focuses the electrons reflected off of an object and translates the pattern formed by the reflected electrons into a visible display. Scientists have used this technique to create images of tiny insects and even individual living cells, but they have not been able to use it to make a clear image of objects smaller than about 10 nanometers (abbreviated nm), or 1 × 10^-8 m (4 × 10^-7 in). To get to the level of individual atoms, scientists must use a more powerful type of electron microscope called a scanning tunneling microscope (STM). An STM uses a tiny probe, the tip of which can be as small as a single atom, to scan an object. An STM takes advantage of another wavelike property of electrons called tunneling. Tunneling allows electrons emitted from the probe of the microscope to penetrate, or tunnel into, the surface of the object being examined. The rate at which the electrons tunnel from the probe to the surface is related to the distance between the probe and the surface. These moving electrons generate a tiny electric current that the STM measures. The STM constantly adjusts the height of the probe to keep the current constant. By tracking how the height of the probe changes as the probe moves over the surface, scientists can get a detailed map of the surface. The map can be so detailed that individual atoms on the surface are visible. B Particle Traps Studying single atoms or small samples of atoms can help scientists understand atomic structure. However, all atoms, even atoms that are part of a solid material, are constantly in motion. This constant motion makes them difficult to examine. To study single atoms, scientists must slow the atoms down and confine them to one place. Scientists can slow and trap atoms using devices called particle traps. Slowing down atoms is actually the same as cooling them. This is because an atom’s rate of motion is directly related to its temperature.
Atoms that are moving very quickly cause a substance to have a high temperature. Atoms moving more slowly create a lower temperature. Scientists therefore build traps that cool atoms down to a very low temperature. Several different types of particle traps exist. Some traps are designed to slow down ions, while others are designed to slow electrically neutral atoms. Traps for ions often use electric and magnetic fields to influence the movement of the particle, confining it in a small space or slowing it down. Traps for neutral atoms often use lasers, beams of light in which the light waves are uniform and consistent. Light has no mass, but it moves so quickly that it does have momentum. This property allows the light to affect other particles, or “bump” into them. When laser light collides with atoms, the momentum of the light forces the atoms to change speed and direction. Scientists use trapped and cooled atoms for a variety of experiments, including those that precisely measure the properties of individual atoms and those in which scientists construct extremely accurate atomic clocks. Atomic clocks keep track of time by counting waves of radiation emitted by atoms in traps inside the clock. Because the traps hold the atoms at low temperatures, the mechanisms inside the clock can exercise more control over the atom, reducing the possibility of error. Scientists can also use isolated atoms to measure the force of gravity in an area with extreme accuracy. These measurements are useful in oil exploration, among other things. A deposit of oil or other substance beneath Earth’s surface has a different density than the material surrounding it. The strength of the pull of gravity in an area depends on the density of material in the area, so these changes in density produce changes in the local strength of gravity. Advances in the manipulation of atoms have also raised the possibility of using atoms to etch electronic circuits. 
This would help make the circuits smaller and thereby allow more circuits to fit in a tinier area. In 1995 American physicists used particle traps to cool a sample of rubidium atoms to a temperature near absolute zero (-273°C, or -459°F). Absolute zero is the temperature at which all motion stops. When the scientists cooled the rubidium atoms to such a low temperature, the atoms slowed almost to a stop. The scientists knew that the momentum of the atoms, which is related to their speed, was close to zero. At this point, a special rule of quantum physics, called the uncertainty principle, greatly affected the positions of the atoms. This rule states that the momentum and position of a particle cannot both have precise values at the same time. The scientists had a fairly precise value for the atom’s momentum (nearly zero), so the positions of the atoms became very imprecise. The position of each atom could be described as a large, fuzzy cloud of probability. The atoms were very close together in the trap, so the probability clouds of many atoms overlapped one another. It was impossible for the scientists to tell where one atom ended and another began. In effect, the atoms formed one huge particle. This new state of matter is called a Bose-Einstein condensate. C Spectroscopes [Figure: Electric Discharge in Nitrogen. In this discharge tube filled with nitrogen, an electric current excites the nitrogen atoms. Almost instantaneously, these excited atoms shed their excess energy by emitting light of specific wavelengths. This phenomenon of discrete emission by excited atoms remained unexplained until the advent of quantum mechanics in the early 20th century. Credit: Yoav Levy/Phototake NYC.] Spectroscopy is the study of the radiation, or energy, that atoms, ions, molecules, and atomic nuclei emit. This emitted energy is usually in the form of electromagnetic radiation—vibrating electric and magnetic waves.
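The uncertainty-principle effect behind the rubidium experiment described above can be estimated numerically. With a thermal momentum spread of roughly sqrt(m·k·T), the minimum position spread is about ħ divided by twice that; the order-of-magnitude estimate below is a rough sketch, not a precise calculation:

```python
# Rough estimate of quantum position uncertainty for a trapped
# rubidium-87 atom: momentum spread dp ~ sqrt(m*k*T), so the minimum
# position spread dx ~ hbar / (2*dp) grows as temperature falls.

import math

HBAR = 1.055e-34   # reduced Planck's constant, J*s
K_B = 1.381e-23    # Boltzmann's constant, J/K
M_RB = 1.443e-25   # mass of a rubidium-87 atom, kg

def position_uncertainty_m(temperature_k):
    """Minimum position spread (meters) for a rubidium atom at temperature T."""
    dp = math.sqrt(M_RB * K_B * temperature_k)  # thermal momentum spread
    return HBAR / (2 * dp)

# At room temperature the spread is far smaller than an atom, but near
# 100 billionths of a kelvin it grows to roughly a tenth of a
# micrometer, so neighboring atoms' probability clouds overlap.
print(position_uncertainty_m(300))     # picometer scale
print(position_uncertainty_m(100e-9))  # roughly 1e-7 m
```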
Electromagnetic waves can have a variety of wavelengths, including those of visible light. X rays, ultraviolet radiation, and infrared radiation are also forms of electromagnetic radiation. Scientists use spectroscopes to measure this emitted radiation. C1 Characteristic Radiation of Atoms [Figure: Characteristic Spectra. Every chemical element has a characteristic spectrum, or particular distribution of electromagnetic radiation. Because of these “signature” wavelength patterns, it is possible to identify the constituents of an unknown substance by analyzing its spectrum; this technique is called spectroscopy. Emission spectra, such as the representative examples shown here, appear as several lines of specific wavelength separated by absolute darkness. The lines are indicative of atomic structure, occurring where atoms make transitions between states of definite energy.] Atoms emit radiation when their electrons lose energy and drop down to lower orbitals, or energy states, as described in the Electron Energy Levels section above. The difference in energy between the orbitals determines the wavelength of the emitted radiation. This radiation can be in the form of visible light for outer electrons, or it can be radiation of shorter wavelengths, such as X-ray radiation, for inner electrons. Because the energies of the orbitals are strictly defined and differ from element to element, atoms of a particular element can only emit certain wavelengths of radiation. By studying the wavelengths of radiation emitted by a substance, scientists can identify the element or elements comprising the substance. For example, the outer electrons in a sodium atom emit a characteristic yellow light when they return to lower orbitals. This is why street lamps that use sodium vapor have a yellowish glow (see also Sodium-Vapor Lamp). Chemists often use a procedure called a flame test to identify elements.
In a flame test, the chemist burns a sample of the element. The heat excites the outer electrons in the element’s atoms, making the electrons jump to higher energy orbitals. When the electrons drop back down to their original orbitals, they emit light characteristic of that element. This light colors the flame and allows the chemist to identify the element. [Figure: Flame Test. The flame test is a simple laboratory procedure that can identify the presence of specific elements in a chemical sample. A small amount of the substance to be tested is placed on the tip of a clean rod, and the rod is inserted in the flame of a Bunsen burner. Different elements give different colors to the flame.] The inner electrons of atoms also emit radiation that can help scientists identify elements. The energy it takes to boost an inner electron to a higher orbital is directly related to the positive charge of the nucleus and the pull this charge exerts on the electron. When the electron drops back to its original level, it emits the same amount of energy it absorbed, so the emitted energy is also related to the nucleus’s charge. The charge on the nucleus is equal to the atom’s atomic number. Scientists measure the energy of the emitted radiation by measuring the radiation’s wavelength. The radiation’s energy is directly related to its wavelength, which usually resembles that of an X ray for the inner electrons. By measuring the wavelength of the radiation that an atom’s inner electron emits, scientists can identify the atom by its atomic number. Scientists used this method in the 1910s to identify the atomic number of the elements and to place the elements in their correct order in the periodic table. The method is still used today to identify particularly heavy elements (those with atomic numbers greater than 100) that are produced a few atoms at a time in large accelerators (see Transuranium Elements).
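The 1910s method mentioned above is essentially Moseley’s law (the article does not name it; the numerical form below is an approximation I am supplying): the K-alpha X-ray energy of an element is roughly 10.2 eV × (Z − 1)², where Z is the atomic number. Inverting it estimates Z from a measured wavelength:

```python
# Estimate an element's atomic number from its K-alpha X-ray
# wavelength using the approximate Moseley relation
# E (eV) ~ 10.2 * (Z - 1)^2.

import math

HC_EV_NM = 1239.84  # h*c expressed in eV*nanometers

def atomic_number_from_k_alpha(wavelength_nm):
    """Estimate Z from a measured K-alpha X-ray wavelength (nm)."""
    energy_ev = HC_EV_NM / wavelength_nm
    return round(1 + math.sqrt(energy_ev / 10.2))

# Copper's K-alpha line lies near 0.154 nm; the estimate recovers
# Z = 29, copper's atomic number.
print(atomic_number_from_k_alpha(0.154))
```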
C2 Radiation Released by Radioactivity Atomic nuclei emit radiation when they undergo radioactive decay, as discussed in the Radioactivity section above. Nuclei usually emit radiation with very short wavelengths (and therefore high energy) when they decay. Often this radiation is in the form of gamma rays, a form of electromagnetic radiation with wavelengths even shorter than X rays. Once again, nuclei of different elements emit radiation of characteristic wavelengths. Scientists can identify nuclei by measuring this radiation. This method is especially useful in neutron activation analysis, a technique scientists use for identifying the presence of tiny amounts of elements. Scientists bombard samples that they wish to identify with neutrons. Some of the neutrons join the nuclei, making them radioactive. When the nuclei decay, they emit radiation that allows the scientists to identify the substance. Environmental scientists use neutron activation analysis in studying air and water pollution. Forensic scientists, who study evidence related to crimes, use this technique to identify gunshot residue and traces of poisons. D Particle Accelerators [Figure: Elementary Particle Tracks. These tracks were formed by elementary particles in a bubble chamber at the CERN facility located outside of Geneva, Switzerland. By examining these tracks, physicists can determine certain properties of particles that traveled through the bubble chamber. For example, a particle's charge can be determined by noting the type of path the particle followed. The bubble chamber is placed within a magnetic field, which causes a positively charged particle's track to curve in one direction, and a negatively charged particle's track to curve the opposite way; neutral particles, unaffected by the magnetic field, move in a straight line. Credit: Patrice Loiez/CERN/Science Source/Photo Researchers, Inc.]
Particle accelerators are devices that increase the speed of a beam of elementary particles such as protons and electrons. Scientists use the accelerated beam to study collisions between particles. The beam can collide with a target of stationary particles, or it can collide with another accelerated beam of particles moving in the opposite direction. If physicists use the nucleus of an atom as the target, the particles and radiation produced in the collision can help them learn about the nucleus. The faster the particles move, the higher the energy they contain. If collisions occur at very high energy, it is possible to create particles never before detected. In certain circumstances, energy can be converted to matter, resulting in heavier particles after the collision. Cyclotrons and linear accelerators are two of the most important kinds of particle accelerators. In a cyclotron, a magnetic field holds a beam of charged particles in a circular path. An electric field interacts with the particles’ electric charge to give them a boost of energy and speed each time the beam goes around. In linear accelerators, charged particles move in a straight line. They receive many small boosts of energy from electric fields as they move through the accelerator. Bombarding nuclei with beams of neutrons forces the nuclei to absorb some of the neutrons and become unstable. The unstable nuclei then decay radioactively. The way atoms decay tells scientists about the original structure of the atom. Scientists can also deduce the size and shape of nuclei from the way particles scatter from nuclei when they collide. Another use of particle accelerators is to create new and exotic isotopes, including atoms of elements with very high atomic numbers that are not found in nature. At higher energy levels, using particles moving at much higher speeds, scientists can use accelerators to look inside protons and neutrons to examine their internal structure. 
At these energy levels, accelerators can produce new types of particles. Some of these particles are similar to protons or neutrons but have larger masses and are very unstable. Others have a structure similar to the pion, the particle that is exchanged between the proton and neutron as part of the strong force that binds the nucleus together. By creating new particles and studying their properties, physicists have been able to deduce their common internal structure and to classify them using the theory of quarks. High-energy collisions between one particle and another often produce hundreds of particles. Experimenters have the challenging task of identifying and measuring all of these particles, some of which exist for only the tiniest fraction of a second. Beginning with Democritus, who lived during the late 5th and early 4th centuries BC, Greek philosophers developed a theory of matter that was not based on experimental evidence, but on their attempts to understand the universe in philosophical terms. According to this theory, all matter was composed of tiny, indivisible particles called atoms (from the Greek word atomos, meaning “indivisible”). If a sample of a pure element was divided into smaller and smaller parts, eventually a point would be reached at which no further cutting would be possible—this was the atom of that element, the smallest possible bit of that element. According to the ancient Greeks, atoms were all made of the same basic material, but atoms of different elements had different sizes and shapes. The sizes, shapes, and arrangements of a material’s atoms determined the material’s properties. For example, the atoms of a fluid were smooth so that they could easily slide over one another, while the atoms of a solid were rough and jagged so that they could attach to one another. Other than the atoms, matter was empty space. Atoms and empty space were believed to be the ultimate reality.
Although the notion of atoms as tiny bits of elemental matter is consistent with modern atomic theory, the researchers of prior eras did not understand the nature of atoms or their interactions in materials. For centuries scientists did not have the methods or technology to test their theories about the basic structure of matter, so people accepted the ancient Greek view. A The Birth of the Modern Atomic Theory The work of British chemist John Dalton at the beginning of the 19th century revealed some of the first clues about the true nature of atoms. Dalton studied how quantities of different elements, such as hydrogen and oxygen, could combine to make other substances, such as water. In his book A New System of Chemical Philosophy (1808), Dalton made two assertions about atoms: (1) atoms of each element are all identical to one another but different from the atoms of all other elements, and (2) atoms of different elements can combine to form more complex substances. Dalton’s idea that different elements had different atoms was unlike the Greek idea of atoms. The characteristics of Dalton’s atoms determined the chemical and physical properties of a substance, no matter what the substance’s form. For example, carbon atoms can form both hard diamonds and soft graphite. In the Greek theory of atoms, diamond atoms would be very different from graphite atoms. In Dalton’s theory, diamond atoms would be very similar to graphite atoms because both substances are composed of the same chemical element. While developing his theory of atoms, Dalton observed that two elements can combine in more than one way. For example, modern scientists know that carbon monoxide (CO) and carbon dioxide (CO2) are both compounds of carbon and oxygen. According to Dalton’s experiments, the quantities of an element needed to form different compounds are always whole-number multiples of one another. 
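Dalton’s whole-number-multiple observation can be checked with modern numbers: the mass of oxygen that combines with a fixed mass of carbon in carbon dioxide is exactly twice the mass that combines in carbon monoxide. A small arithmetic sketch:

```python
# Law of multiple proportions, illustrated with CO and CO2: compare the
# mass of oxygen per unit mass of carbon in each compound. The ratio
# comes out as a small whole number, as Dalton observed.

from fractions import Fraction

C, O = 12, 16  # approximate atomic masses of carbon and oxygen

# Grams of oxygen per gram of carbon in each compound.
o_per_c_in_co = Fraction(O, C)       # CO: one oxygen atom per carbon
o_per_c_in_co2 = Fraction(2 * O, C)  # CO2: two oxygen atoms per carbon

ratio = o_per_c_in_co2 / o_per_c_in_co
print(ratio)  # 2 -- a whole-number multiple
```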
For example, two times as much oxygen is needed to form a liter of CO2 as is needed to form a liter of CO. Dalton correctly concluded that compounds were created when atoms of pure elements joined together in fixed proportions to form units that scientists today call molecules. A1 States of Matter Scientists in the early 19th century struggled in another area of atomic theory. They tried to understand how atoms of a single element could exist in solid, liquid, and gaseous forms. Scientists correctly proposed that atoms in a solid attract each other with enough force to hold the solid together, but they did not understand why the atoms of liquids and gases did not attract each other as strongly. Some scientists theorized that the forces between atoms were attractive at short distances (such as when the atoms were packed very close together to form a solid) and repulsive at larger distances (such as in a gas, where the atoms are on the average relatively far apart). Scientists had difficulty solving the problem of states of matter because they did not adequately understand the nature of heat. Today scientists recognize that heat is a form of energy, and that different amounts of this energy in a substance lead to different states of matter. In the 19th century, however, people believed that heat was a material substance, called caloric, that could be transferred from one object to another. This explanation of heat was called the caloric theory. Dalton used the caloric theory to propose that each molecule of a gas is surrounded by caloric, which exerts a repulsive force on other molecules. According to Dalton’s theory, as a gas is heated, more caloric is added to the gas, which increases the repulsive force between the molecules. More caloric would also cause the gas to exert a greater pressure on the walls of its container, in accordance with scientists’ experiments.
This early explanation of heat and states of matter broke down when experiments in the middle of the 19th century showed that heat could change into energy of motion. The laws of physics state that the amount of energy in a system cannot increase, so scientists had to accept that heat must be energy, not a substance. This revelation required a new theory of how atoms in different states of matter behave. A2 Behavior of Gases In the early 19th century Italian chemist Amedeo Avogadro made an important advance in the understanding of how atoms and molecules in a gas behave. Avogadro began his work from a theory developed by Dalton. Dalton’s theory proposed that a gaseous compound, formed by combining equal numbers of atoms of two elements, should have the same number of molecules as the atoms in one of the original elements. For example, ten atoms of the element hydrogen (H) combine with ten atoms of chlorine (Cl) to form ten gaseous hydrogen chloride (HCl) molecules. In 1811 Avogadro developed a law of physics that seemed to contradict Dalton’s theory. Avogadro’s law states that equal volumes of different gases contain the same number of particles (atoms or molecules) if both gases are at the same temperature and pressure. In Dalton’s experiment, the volume of the original vessels containing the hydrogen or chlorine gases was the same as the volume of the vessel containing the hydrogen chloride gas. The pressures of the original hydrogen and chlorine gases were equal, but the pressure of the hydrogen chloride gas was twice as great as either of the original gases. According to Avogadro’s law, this doubled pressure would mean that there were twice as many hydrogen chloride gas particles as there had been chlorine particles prior to their combination. To reconcile the results of Dalton’s experiment with his new rule, Avogadro was forced to conclude that the original vessels of hydrogen or chlorine contained only half as many particles as Dalton had thought.
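Avogadro’s resolution, that hydrogen and chlorine gases are made of two-atom molecules, can be checked with simple particle bookkeeping: one volume of H2 plus one volume of Cl2 yields two volumes of HCl, with no atoms created or destroyed. A sketch with illustrative numbers:

```python
# Bookkeeping sketch of Avogadro's resolution: if hydrogen and chlorine
# are diatomic, the reaction H2 + Cl2 -> 2 HCl doubles the number of
# gas particles while conserving every atom.

from collections import Counter

def atoms_in(molecules):
    """Total atom count per element in a collection of molecules."""
    total = Counter()
    for formula, count in molecules.items():
        for element, n in formula:
            total[element] += n * count
    return total

H2 = (("H", 2),)
CL2 = (("Cl", 2),)
HCL = (("H", 1), ("Cl", 1))

before = atoms_in({H2: 100, CL2: 100})  # 200 reactant molecules in total
after = atoms_in({HCL: 200})            # 200 product molecules

# Atoms balance exactly, yet the product gas holds twice as many
# particles as either reactant gas alone, matching the doubled
# pressure in Dalton's experiment.
print(before == after)
```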
Dalton, however, knew the total weight of each gas in the vessels, as well as the weight of an individual atom of each gas, so he knew the total number of atoms of each gas that was present in the vessels. Avogadro reconciled the fact that there were twice as many atoms as there were particles in the vessels by proposing that gases such as hydrogen and chlorine are really made up of molecules of hydrogen and chlorine, with two atoms in each molecule. Today scientists write the chemical symbols for hydrogen and chlorine as H2 and Cl2, respectively, indicating that there are two atoms in each molecule. One molecule of hydrogen and one molecule of chlorine combine to form two molecules of hydrogen chloride (H2 + Cl2 → 2HCl). The sample of hydrogen chloride contains twice the number of particles as either the hydrogen or chlorine because two molecules of hydrogen chloride form when a molecule of hydrogen combines with a molecule of chlorine.

B Electrical Forces in Atoms

Evolution of the Model of the Atom: As scientists learned about the structure of the atom through experiments, they modified their models of the atom to fit their data. British physicist Joseph John Thomson understood that atoms contain positive and negative charges, while British physicist Ernest Rutherford discovered that an atom's positive charge is concentrated in a nucleus. Danish physicist Niels Bohr proposed that electrons orbit only at set distances from the nucleus, and Austrian physicist Erwin Schrödinger discovered that electrons in an atom actually behave more like waves than like particles. © Microsoft Corporation. All Rights Reserved.

The work of Dalton and Avogadro led to a consistent view of the quantities of different gases that could be combined to form compounds, but scientists still did not understand the nature of the forces that attracted the atoms to one another in compounds and molecules.
Scientists suspected that electrical forces might have something to do with that attraction, but they found it difficult to understand how electrical forces could allow two identical, neutral hydrogen atoms to attract one another to form a hydrogen molecule. In the 1830s, British physicist Michael Faraday took the first significant step toward appreciating the importance of electrical forces in compounds. Faraday placed two electrodes connected to opposite terminals of a battery into a solution of water containing a dissolved compound. As the electric current flowed through the solution, Faraday observed that one of the elements that comprised the dissolved compound became deposited on one electrode, while the other element became deposited on the other electrode. The electric current provided by the electrodes undid the coupling of atoms in the compound. Faraday also observed that the quantity of each element deposited on an electrode was directly proportional to the total quantity of electric charge that flowed through the solution—the stronger the current, the more material became deposited on the electrode. This discovery made it clear that electrical forces must be in some way responsible for the joining of atoms in compounds. Despite these significant discoveries, most scientists did not immediately accept that atoms as described by Dalton, Faraday, and Avogadro were responsible for the chemical and physical behavior of substances. Before the end of the 19th century, many scientists believed that all chemical and physical properties could be determined by the rules of heat, an understanding of atoms closer to that of the Greek philosophers. The development of the science of thermodynamics (the scientific study of heat) and the recognition that heat was a form of energy eliminated the role of caloric in atomic theory and made atomic theory more acceptable.
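Returning to Faraday's electrolysis result: his observation is captured quantitatively by what is now called Faraday's first law of electrolysis, m = (Q/F)(M/z). A sketch (the copper example and its numbers are assumptions for illustration, not from Faraday's own experiments):

```python
# Faraday's first law of electrolysis: mass deposited is proportional
# to the total charge passed through the solution.
F = 96485.0      # Faraday constant, C/mol

def mass_deposited(charge_c, molar_mass, valence):
    """Grams of element deposited by a given charge, in grams."""
    return (charge_c / F) * (molar_mass / valence)

# Assumed example: 1 A flowing for 1 hour through a Cu^2+ solution
# (molar mass of copper ~63.55 g/mol, valence 2).
q = 1.0 * 3600.0                 # charge = current * time, in coulombs
m = mass_deposited(q, 63.55, 2)
print(f"{m:.3f} g of copper")    # ~1.19 g; doubling q doubles m
```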
The new theory of heat, called the kinetic theory, said that the atoms or molecules of a substance move faster, or gain kinetic energy, as heat energy is added to the substance. Nevertheless, a small but powerful group of scientists still did not accept the existence of atoms—they regarded atoms as convenient mathematical devices that explained the chemistry of compounds, not as real entities. In 1908 French physicist Jean-Baptiste Perrin performed the final experiments that helped prove the atomic theory of matter. Perrin observed the irregular wiggling of pollen grains suspended in a liquid (a phenomenon called Brownian motion) and correctly explained that the wiggling was the result of atoms of the fluid colliding with the pollen grains. This experiment showed that the idea that materials were composed of real atoms in thermal motion was in fact correct. As scientists began to accept atomic theory, researchers turned their efforts to understanding the electrical properties of the atom. Several scientists, most notably British scientist Sir William Crookes, studied the effects of sending electric current through a gas. The scientists placed a very small amount of gas in a sealed glass tube. The tube had electrodes at either end. When an electric current was applied to the gas, a stream of electrically charged particles flowed from one of the electrodes. This electrode was called the cathode, and the particles were called cathode rays. At first scientists believed that the rays were composed of charged atoms or molecules, but experiments showed that the cathode rays could penetrate thin sheets of material, which would not be possible for a particle as large as an atom or a molecule. British physicist Sir Joseph John Thomson measured the velocity of the cathode rays and showed that they were much too fast to be atoms or molecules. No known force could accelerate a particle as heavy as an atom or a molecule to such a high speed.
Thomson also measured the ratio of the charge of a cathode ray to the mass of the cathode ray. The value he measured was about 1,000 times larger than any previous measurement associated with charged atoms or molecules, indicating that within cathode rays particularly tiny masses carried relatively large amounts of charge. Thomson studied different gases and always found the same value for the charge-to-mass ratio. He concluded that he was observing a new type of particle, which carried a negative electric charge but was about a thousand times less massive than the lightest known atom. He also concluded that these particles were constituents of all atoms. Today scientists know these particles as electrons, and Thomson is credited with their discovery.

C Rutherford's Nuclear Atom

Rutherford Experiment: Rutherford studied the structure of the atom by firing a beam of alpha particles at gold atoms. A few alpha particles bounced directly back, indicating that they had struck something massive. Rutherford proposed that most of the mass of atoms was concentrated in their centers. This concentration of mass is now known as the nucleus.

Scientists realized that if all atoms contain electrons but are electrically neutral, atoms must also contain an equal quantity of positive charge to balance the electrons' negative charge. Furthermore, if electrons are indeed much less massive than even the lightest atom, then this positive charge must account for most of the mass of the atom. Thomson proposed a model by which this phenomenon could occur: He suggested that the atom was a sphere of positive charge into which the negative electrons were embedded, like raisins in a loaf of raisin bread. In 1911 British scientist Ernest Rutherford set out to test Thomson's proposal by firing a beam of charged particles at atoms. Rutherford chose alpha particles for his beam. Alpha particles are heavy particles with twice the positive charge of a proton.
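Thomson's charge-to-mass comparison, described above, can be checked against modern constants: the electron's q/m exceeds that of the lightest ion by a factor near 1,836, consistent with his rough thousandfold estimate (a sketch using present-day values Thomson did not have):

```python
# Comparing the electron's charge-to-mass ratio with that of a
# hydrogen ion (the lightest atom), using modern constants.
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg
m_H = 1.67262192e-27    # proton (hydrogen ion) mass, kg

ratio_electron = e / m_e
ratio_hydrogen = e / m_H
print(ratio_electron / ratio_hydrogen)  # ~1836: Thomson's rough "1,000x"
```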
Alpha particles are now known to be the nuclei of helium atoms, which contain two protons and two neutrons. If Thomson's model of the atom was correct, Rutherford theorized that the electric charge and the mass of the atoms would be too spread out to significantly deflect the alpha particles. Rutherford was quite surprised to observe something very different. Most of the alpha particles did indeed change their paths by a small angle, and occasionally an alpha particle bounced back in the opposite direction. The alpha particles that bounced back must have struck something at least as heavy as themselves. This led Rutherford to propose a very different model for the atom. Instead of supposing that the positive charge and mass were spread throughout the volume of the atom, he theorized that it was concentrated in the center of the atom. Rutherford called this concentrated region of electric charge the nucleus of the atom. In the span of 100 years, from Dalton to Rutherford, the basic ideas of atomic structure evolved from very primitive concepts of how atoms combined with one another to an understanding of the constituents of atoms—a positively charged nucleus surrounded by negatively charged electrons. The interactions between the nucleus and the electrons still required study. It was natural for physicists to model the atom, in which tiny electrons orbit a much more massive nucleus, after a familiar structure such as the solar system, in which planets orbit around a much more massive Sun. Rutherford's model of the atom did indeed resemble a tiny solar system. The only difference between early models of the nuclear atom and the solar system was that atoms were held together by electromagnetic force, while gravitational force holds together the solar system.

D The Bohr Model

Danish physicist Niels Bohr used new knowledge about the radiation emitted from atoms to develop a model of the atom significantly different from Rutherford's model.
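The scale of Rutherford's nucleus can be estimated from the distance of closest approach of a head-on alpha particle, the point where all of its kinetic energy has become Coulomb potential energy. A sketch (the 5 MeV alpha energy is a typical assumed value, not taken from the text):

```python
# Distance of closest approach for a head-on alpha-gold collision,
# a standard back-of-envelope bound on the size of the nucleus.
k = 8.9875517923e9      # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19     # elementary charge, C
Z_alpha, Z_gold = 2, 79

E = 5.0e6 * e                          # assumed 5 MeV alpha, in joules
d = k * Z_alpha * Z_gold * e**2 / E    # where Coulomb PE equals kinetic E
print(f"{d:.2e} m")   # ~4.6e-14 m: thousands of times smaller than the atom
```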
Scientists of the 19th century discovered that when an electrical discharge passes through a small quantity of a gas in a glass tube, the atoms in the gas emit light. This radiation occurs only at certain discrete wavelengths, and different elements and compounds emit different wavelengths. Bohr, working in Rutherford’s laboratory, set out to understand the emission of radiation at these wavelengths based on the nuclear model of the atom. Using Rutherford’s model of the atom as a miniature solar system, Bohr developed a theory by which he could predict the same wavelengths scientists had measured radiating from atoms with a single electron. However, when conceiving this theory, Bohr was forced to make some startling conclusions. He concluded that because atoms emit light only at discrete wavelengths, electrons could only orbit at certain designated radii, and light could be emitted only when an electron jumped from one of these designated orbits to another. Both of these conclusions were in disagreement with classical physics, which imposed no strict rules on the size of orbits. To make his theory work, Bohr had to propose special rules that violated the rules of classical physics. He concluded that, on the atomic scale, certain preferred states of motion were especially stable. In these states of motion an orbiting electron (contrary to the laws of electromagnetism) would not radiate energy. At the same time that Bohr and Rutherford were developing the nuclear model of the atom, other experiments indicated similar failures of classical physics. These experiments included the emission of radiation from hot, glowing objects (called thermal radiation) and the release of electrons from metal surfaces illuminated with ultraviolet light (the photoelectric effect). Classical physics could not account for these observations, and scientists began to realize that they needed to take a new approach. 
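The discrete hydrogen wavelengths that Bohr's model accounts for are summarized by the Rydberg formula, 1/λ = R(1/n1² − 1/n2²). A modern sketch reproducing the visible (Balmer) lines, added here for illustration:

```python
# Bohr-model emission wavelengths for hydrogen via the Rydberg formula.
R = 1.0973731568e7   # Rydberg constant, 1/m

def wavelength_nm(n1, n2):
    """Wavelength of the photon emitted when an electron drops n2 -> n1."""
    inv_lambda = R * (1 / n1**2 - 1 / n2**2)
    return 1e9 / inv_lambda

for n2 in (3, 4, 5):                  # Balmer series: drops to n1 = 2
    print(f"{n2}->2: {wavelength_nm(2, n2):.1f} nm")
# 3->2: ~656 nm (red), 4->2: ~486 nm, 5->2: ~434 nm
```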
They called this new approach quantum mechanics (see Quantum Theory), and they developed a mathematical basis for it in the 1920s. The laws of classical physics work perfectly well on the scale of everyday objects, but on the tiny atomic scale, the laws of quantum mechanics apply.

E Quantum Theory of Atoms

The quantum mechanical view of atomic structure maintains some of Rutherford and Bohr's ideas. The nucleus is still at the center of the atom and provides the electrical attraction that binds the electrons to the atom. Contrary to Bohr's theory, however, the electrons do not circulate in definite planet-like orbits. The quantum-mechanical approach acknowledges the wavelike character of electrons and provides the framework for viewing the electrons as fuzzy clouds of negative charge. Electrons still have assigned states of motion, but these states of motion do not correspond to fixed orbits. Instead, they tell us something about the geometry of the electron cloud—its size and shape and whether it is spherical or bunched in lobes like a figure eight. Physicists called these states of motion orbitals. Quantum mechanics also provides the mathematical basis for understanding how atoms that join together in molecules share electrons. Nearly 100 years after Faraday's pioneering experiments, the quantum theory confirmed that it is indeed electrical forces that are responsible for the structure of molecules. Two of the rules of quantum theory that are most important to explaining the atom are the idea of wave-particle duality and the exclusion principle. French physicist Louis de Broglie first suggested that particles could be described as waves in 1924. In the same decade, Austrian physicist Erwin Schrödinger and German physicist Werner Heisenberg expanded de Broglie's ideas into formal, mathematical descriptions of quantum mechanics. The exclusion principle was developed by Austrian-born American physicist Wolfgang Pauli in 1925.
The Pauli exclusion principle states that no two electrons in an atom can have exactly the same characteristics. The combination of wave-particle duality and the Pauli exclusion principle sets up the rules for filling electron orbitals in atoms. The way electrons fill up orbitals determines the number of electrons that end up in the atom’s valence shell. This in turn determines an atom’s chemical and physical properties, such as how it reacts with other atoms and how well it conducts electricity. These rules explain why atoms with similar numbers of electrons can have very different properties, and why chemical properties reappear again and again in a regular pattern among the elements.
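The filling rules sketched above can be made concrete. The snippet below is an illustrative sketch using the Madelung (aufbau) ordering; it builds ground-state electron configurations under the Pauli limit of 2(2l+1) electrons per subshell. Real elements such as chromium and copper deviate from this simple rule:

```python
# Sketch of orbital filling under the Pauli exclusion principle.
# Subshells fill in Madelung order (increasing n + l, then n), and a
# subshell with quantum number l holds at most 2*(2l + 1) electrons.
SUBSHELL_NAMES = "spdfghi"

def electron_configuration(n_electrons):
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    config, left = [], n_electrons
    for n, l in subshells:
        if left <= 0:
            break
        capacity = 2 * (2 * l + 1)        # Pauli: 2 electrons per orbital
        take = min(capacity, left)
        config.append(f"{n}{SUBSHELL_NAMES[l]}{take}")
        left -= take
    return " ".join(config)

print(electron_configuration(11))  # sodium:   1s2 2s2 2p6 3s1
print(electron_configuration(17))  # chlorine: 1s2 2s2 2p6 3s2 3p5
```

Sodium's lone 3s electron and chlorine's nearly full 3p subshell are exactly the valence-shell patterns that account for their chemical behavior.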
A wave function describes a quantum state of particles and how they behave. The laws of quantum mechanics describe how the wave function changes over time. The common symbol for a wave function is Ψ. Although Ψ is a complex number, |Ψ|² is a real number and corresponds to the probability density of finding a particle in a given place at a given time. The SI units of the wave function depend on the system: for one particle in three dimensions the units are m^(-3/2). These units are required so that an integral of |Ψ|² over a region of three-dimensional space is a unitless probability. The wave function is central to quantum mechanics; its behavior is a fundamental postulate of the theory. The wave function is also the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics, topics that continue to be debated today.

Free Particles: Wavefunction, Momentum, and Probability
At t=0, the wavefunction of a free particle is psi(x,0) = sqrt(b/(2*pi)) * sin(bx) for |x| < 2*pi/b, and psi(x,0) = 0 for |x| >= 2*pi/b.
a) What is the probability of finding the particle in the interval 0 <= x <= pi/2b?
b) What is the momentum am

Fourier transform with complex conjugate
Show that the integral from -infinity to +infinity of psi_1(x) times psi_2*(x) dx is equal to the integral from -infinity to +infinity of phi_1(k) times phi_2*(k) dk. (* indicates the complex conjugate.)

Dimensional Harmonic Oscillator
Consider a 3-dimensional, spherically symmetric, isotropic harmonic oscillator with a potential energy of [see the attachment for the full equation]. The Hamiltonian in this case is: [attached]
a. Use the trial function [attached], find the value of the parameter a that minimizes the energy, and find that minimum energy.
b. Repeat th

Bound-state wave functions
Why can bound-state wave functions be chosen to be real?
The text states that, in one-dimensional problems, the spatial wave function for any allowed state can be chosen to be real-valued. Verify this using the following outline or some other method. Please see the attached document for the questions, listed a) through c).

One dimensional infinite square well, particle with mass
1- Consider a particle with mass m in the one-dimensional infinite square well potential V(x) with length L. At time t=0, the wave function for this particle is psi(x) = A sin(pi x/L) cos(pi x/L).
a) Find the coefficient A so that the wave function is properly normalized.
b) What is the probability of finding the particle between 0<x

A Spin 1/2 Particle
A spin 1/2 particle is in the state |Psi> = sqrt(2/3)|Up> + sqrt(1/3)|Down>. Suppose a measurement is made of the spin in the z direction and the result is m_s = -1/2. Now a second measurement is made to determine the spin in the x direction. What is the probability the spin will be in the +x direction? So I underst

The momentum wave function for the hydrogen atom
The hydrogen atom ground state may be described by the spatial wave function phi(r) = (1/(pi*a_0^3))^(1/2) * e^(-r/a_0), where a_0 is the Bohr radius. Using the following integral, please calculate the momentum wave function:
integral of e^(-a*r + i*b*r) d^3r = 8*pi*a/((a^2 + b^2)^2)
See the at

Fourier Expansion
Assume a Fourier expansion of the form [see the attachment for equation] and determine the coefficients b_n(t). The initial conditions are [see the attachment for equations]. Note: this is only half the conventional Fourier orthogonality integral interval. However, as long as only the sines are included here, the Strum-Lio

Expected value of momentum and Fourier transform
The expected value of the momentum operator in quantum mechanics can be calculated using the spatial wave function.
Using that the momentum wave function is the Fourier transform of the spatial wave function, we obtain an expression for this expected value in terms of the momentum wave function; that is, we prove Equation 15.6.

Momentum Representation, Momentum Space Wave Function
We study the relationship between the position space wave function and the momentum space wave function in quantum mechanics. We show that they are related by the Fourier transform; more specifically, we show that a momentum wave function given by the Fourier transform of the space wave function satisfies the requirements to be a

Problems Involving the Schrodinger Equation
(a) The time-independent Schrodinger equation for a free particle moving in one dimension (x) is written as (d^2 Psi(x)/dx^2) + k^2 Psi(x) = 0, where k is a constant.
(i) Show that the following function is a solution to this equation: Psi(x) = A sin(k x)
(ii) Briefly outline why constraining the particle within

Two Problems in Modern Physics
28. A wave packet describes a particle having momentum p. Starting with the relativistic relationship E^2 = p^2 c^2 + E0^2, show that the group velocity is Bc and the phase velocity is c/B (where B = v/c). How can the phase velocity physically be greater than c?
44. An electron microscope is designed to resolve objects as sma

Problem in Quantum Mechanical Tunneling
Please help me stepwise in word / pdf. Given a tunneling barrier with a thickness of 2 nm and a barrier height of 5 eV, what is the minimum kinetic energy an electron would have to have in order to have a 50% chance of passing through? (Hint: it doesn't necessarily have to be less than the barrier height.)

Eigenvalues and Hamiltonian H
I am confused on how to approach this problem on eigenfunctions and eigenvalues with the Hamiltonian H. Would you please include the work and an explanation? Please see the attachment for the full problem. Consider the particle in a box problem for a box of length ...
Verify the substitution that the solutions ...

Eigenfunction Decomposition of 1DHO Wavefunctions
A particle of mass m is subject to the one-dimensional harmonic oscillator potential. Write down the first three normalised eigenfunctions psi_n(x) and the corresponding eigenvalues. Initially the wavefunction is in a mixed state of the form ?(x)=(1/(7???))^(1?2) e^(-x^2/(2?)^2 ) ((3x)^2/(?)^2 +(x/?)-(3/2)+?2) where ?=?(??m?).

2D Quantum Mechanical Harmonic Oscillator
A particle of mass m moves in two dimensions under the influence of the potential V(x,y) = (1/2) m omega^2 ((6x)^2 - 2xy + (6y)^2). Using the rotated coordinates u = (x+y)/sqrt(2) and w = (x-y)/sqrt(2), show that the Schrodinger equation in the new coordinates (u,w) is
-(hbar^2/2m) ((d^2/du^2) + (d^2/dw^2)) psi(u,w) + V~(u,w) psi(u,w) = E psi(u,w),
where V~(u,w) sho

Tunneling Probability of an Electron in a Square Well
Given a tunneling barrier with a thickness of 2 nm and a barrier height of 5 eV, what is the minimum kinetic energy an electron would have to have in order to have a 50% chance of passing through? (Hint: it doesn't necessarily have to be less than the barrier height.) I need the step-by-step solution please. Thank you.

Density of States Problem
Identify the quantum state (i.e. list the requisite quantum numbers) of the lowest energy level in a 3-D quantum well that has a zero probability of finding the electron in the center of the structure.

Nano Wire Problem
Given a nanowire with cross-sectional dimensions of 10 nm x 10 nm, what momentum would an electron in the ground state need in order to possess the same energy as a stationary electron (zero momentum) in the n=1,2 state? Note that, when it says n=1,2 it means nx=1 and ny=2... In reality it's not that restrictive, but it wa

Particle Moving in One Dimension
A free particle moving in one dimension is known to be at the point x_1 at time t_1. Assume that the wavefunction at this time is psi(x,t_1) = delta(x - x_1). Find the wavefunction psi(x,t_2) at some later time t_2.
Calculating the values of Spin Angular Momentum
Calculate the values of <S_x>, <ΔS_x>, <S_y>, <ΔS_y>, <S_z>, <ΔS_z> for the wave function
(1/√2)[exp(iδ/2)|1/2 1/2> + exp(-iδ/2)|1/2 -1/2>]
Note the labels in the kets are |s m_s>. (Review question in attachment

Oscillation of pendulum, inertia, amplitude, wave speed
See attached file for proper format.
6. A physical pendulum consists of a uniform rod of mass M and length L. The pendulum is pivoted at a point that is a distance x from the center of the rod, so the period for oscillation of the pendulum depends on x: T(x).
(a) What value of x gives the maximum value for T? (Use any va

Triplet/singlet split due to perturbation
Two identical spin-1/2 particles are confined to an infinite one-dimensional square well of width a with infinite potential barriers at x=0 and x=a. The potential is V(x)=0 for 0 <= x <= a. Suppose that the particles interact weakly by the potential V_1(x) = K*delta(x_1 - x_2), where x_1 and x_2 are the positions of the two particl

Quantum Mechanics: Time dependent perturbation problem
A particle with mass m is in a one-dimensional infinite square-well potential of width a, so V(x)=0 for 0 <= x <= a, and there are infinite potential barriers at x=0 and x=a. Recall that the normalized solutions to the Schrodinger equation are psi_n(x) = sqrt(2/a) sin[(n pi x)/a] with energies E_n = (hbar^2 pi^2 n^2)/

Time Dependent Perturbation
An electron is in a strong, uniform, constant magnetic field with magnitude B_0 aligned in the +x direction. The electron is initially in the state |+> with x component of spin equal to hbar/2. A weak, uniform, constant magnetic field of B_1 (where B_1 << B_0) in the +z direction is turned on at t=0 and turned off at t=t_0. Let

Perturbation Theory
A particle of mass m is in the ground state in the harmonic oscillator potential V(x) = (1/2)Kx^2. A small perturbation (beta)x^6 is added to this potential.
a) What are the units of beta?
b) How small must beta be in order for perturbation theory to be valid?
c) Calculate the first-order change in the energy of the par

Energy Change of a Particle
A particle of mass m is confined to move in a narrow, straight tube of length a which is sealed at both ends, with V=0 inside the tube. Treat the tube as a one-dimensional infinite square well. The tube is placed at an angle theta relative to the surface of the earth. The particle experiences the usual gravitational potential V=
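As a worked example related to the infinite-square-well normalization problem listed above (psi(x) = A sin(pi x/L) cos(pi x/L)): since sin·cos = sin(2*pi*x/L)/2, the analytic answer is A = sqrt(8/L). A numerical sketch confirming it (my own check, not one of the posted solutions):

```python
# Numerically normalize psi(x) = A sin(pi x / L) cos(pi x / L) on [0, L]
# and check the analytic result A = sqrt(8 / L).
import math

L = 1.0
N = 200000
dx = L / N

def psi_unnorm(x):
    return math.sin(math.pi * x / L) * math.cos(math.pi * x / L)

# Midpoint-rule integral of |psi|^2 with A = 1; the exact value is L/8.
norm_sq = sum(psi_unnorm((i + 0.5) * dx) ** 2 for i in range(N)) * dx
A = 1.0 / math.sqrt(norm_sq)
print(A, math.sqrt(8.0 / L))   # both ~2.8284

# Probability of finding the particle in [0, L/2]:
p = sum((A * psi_unnorm((i + 0.5) * dx)) ** 2 for i in range(N // 2)) * dx
print(round(p, 4))             # 0.5, by symmetry of |psi|^2 about L/2
```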
Sunday, August 31, 2008
Third video sequence: Planck's quantum of action

Here are the video and videoscript for the third sequence, which deals with the elementary quantity of action: Planck's quantum of action. Hello, I'm Arjen, the Common Sense Quantum Physicist. My goal is to facilitate the understanding of the fundamentals of Quantum Physics. In the preceding sequences, we saw how a quantum particle could be represented by a spinning arrow-like object [image seq 2], which quantum physicists call a ket or state vector. Spinning arrow-like objects are filtered through regular gratings depending on the orientation and frequency of their spinning motion. So this helps to understand polarization and diffraction effects [image seq 1]. We could also easily deduce a generalized form of the Schrodinger equation [image seq 2], which simply states that the result of an arrow subtraction between two subsequent states of the arrow is always perpendicular to the arrow itself and proportional to the infinitesimal change in angle. We saw that this evolution equation is valid for any spinning arrow-like object, whether a microscopic quantum particle or a macroscopic rod or a needle or a wheel-spoke or a twirling baton or this spinning mikado stick... So this evolution equation characterizes the rate of change of the orientation of the arrow. The change of orientation of the arrow representing the quantum system is a very important concept in QM. When the orientation of the arrow varies, the arrow acts, it has 'action'. An arrow whose orientation does not change is inactive. It does not play in the game. This does not necessarily mean that it does not exist, but it simply does not act. Like this hanging mikado stick, or like a motionless extra in a movie scene waiting for an actor to poke him. So in QM, we could take the measure of the action of an object to be the measure of the variation of the orientation of the arrow representing the object.
It is therefore analogous to an angle. When this arrow rotates over an angle alpha, the action deployed by the arrow is alpha times a constant quantity. So we may express the action in units of an angle. [image]. For example, when the arrow has rotated one turn about a fixed axis, we may say that the deployed action during that turn was 2 pi in radians, or 360 in degrees. For elementary particles represented by arrows (like photons or electrons or quarks), the action is generally expressed in a unit called Planck's quantum of action h. h is the measure of the action deployed by a quantum particle after a cycle, expressed in the meter-kilogram-second system [images]. So when you hear about Planck's constant h, just think of an elementary arrow having rotated about 360°; it's analogous. The physicist Max Planck [picture] first showed the importance of this quantum of action, in the year 1900, because it showed up in a formula that characterized the thermal energy radiated by a body. So the concept of action is at the origin of QM. When a quantum particle acts, it is often more convenient to talk about the energy of a quantum particle, which is also just a quantity of action, but measured during a unitary time interval. For a unitary time interval of one second, the photons that are detected by our eyes have an energy a bit less than 10^15 times h. So this just means that the arrow representing the photon accomplishes nearly a million billion cycles during one second. The angle swept by the tip of the arrow representing the photon during one second is therefore a measure of the energy [images]. We could also measure the action of a particle when it travels over a space interval. We then speak about the momentum of the particle. Measuring the momentum is just another way to measure the action of an object. While the energy expresses the action of an arrow during a unitary time interval, the momentum expresses the action of an arrow during some space interval.
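As a rough check of these magnitudes (a sketch; the 550 nm wavelength for visible light is an assumed typical value): the arrow of a visible photon turns f = c/λ times per second and 1/λ times per meter.

```python
# Order-of-magnitude check: cycles per second (energy E = h*f) and
# cycles per meter (momentum p = h/lambda) for a visible photon.
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck's quantum of action, J s

lam = 550e-9     # assumed wavelength of green light, m
f = c / lam      # cycles per second
print(f"{f:.2e} cycles/s")       # ~5.5e14: "a bit less than 10^15"
print(f"{1/lam:.2e} cycles/m")   # ~1.8e6: "about a million times h"
print(f"E = {h * f:.2e} J")      # the photon's energy, E = h*f
```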
For example, the momentum of a photon emitted by an object is analogous to the angle swept by the tip of the arrow while the photon travels over a unitary space interval. If distance is expressed in meters, the photons that are detected by your eyes generally have a momentum about a million times h [images]. So remember, the energy and momentum are just quantities of action. It is analogous to the measure of the angle swept by the tip of the rotating arrow, if that arrow represents the quantum particle or the quantum system. It appears that there are various ways to express quantities of action in physics. Besides energy and momentum, angular momentum is also a quantity of action. It is a measure of the quantity of action deployed by a system of arrows if it is rotated over some angle about some axis. Temperature is also a quantity of action: it is a measure of the average action exchanged between arrows composing the environment. And you surely know the formula E=mc^2, which tells us that mass is also a quantity of action, but measured over a tinier interval of time than energy. So, you may think of all those familiar physical quantities as measures of angles swept by the arrow (or set of arrows) representing the object. And they all relate to Planck's quantum of action, which is analogous to the angle swept by an elementary quantum particle during one cycle. So when you analyse a physical system, it helps to see it as a set of very tiny, continuously spinning and interacting needles. That's the essence of Quantum Mechanics. The numerous mathematical formulas that characterize physical behaviour just work this idea out. Feynman cast this insight in a famous sentence: "Things are made of littler things that jiggle". Next time we'll look again at this mikado stick and at measurements you may perform on it.

--- for the next sequence ---

There is a specificity in quantum physics with respect to classical physics.
You see this hanging stick, you see the whole of the object because the ambient light has been reflected from nearly every point of it and you receive a continuous flux of information via your eyes. The object looks like a continuity of matter. Now in Quantum Physics, you never see the quantum object as a whole. No, you receive the information on the position of the quantum object bit by bit. It works as if the only way to get information about this stick is to let it interact with another stick (or set of sticks), and notice how it affected the system. So, before the interaction, the state of the mikado stick is unknown. We don't know where it is located, we don't know whether it is spinning, etc. It could be at any place depending on the conditions because you have not yet noticed an interaction with the detecting environment. When I throw the second stick (the ‘detecting’ arrow) and that second stick collides with the hanging stick, I get information about the hanging stick. For example, I get information about the location of the hanging stick, because I know that if I throw this stick along the coordinate x1, and it doesn’t show up along the same line, there was a collision. The x-coordinate of the hanging stick therefore was x1. But wait, you’ll say. The coordinate of the hanging stick was not really x1! Both arrows collided at a point away from the geometrical center of the hanging stick. The real coordinate was x2 because the center of the stick was at x2. Well, that’s classical physics. In quantum physics, things work completely differently. Remember CM uses points, QM uses arrows or sticks. And arrows are spread out over their entire length, so there is an intrinsic indeterminacy in every measurement of location even if my quantum measurement gave the result x1. When I detect an arrow at x1, in fact its geometrical centre could be located at + or minus half the length of the arrow. 
So there’s always an indeterminacy “delta x” equal to the length of the arrow, even if my quantum measurement is very accurate.

Besides measurements of location, we may also try to measure the change of orientation of an arrow. Remember, this quantity is just an angle, or a phase, analogous to the quantity of action. Just try to find ways to measure the change of orientation of the arrow. We could for example let the spinning arrow travel through a regular grating. If the arrow passes between the N points of the grating, we know that the angle swept is N times pi, with an uncertainty of plus or minus pi, which corresponds to an uncertainty in the action of plus or minus h/2.

Whatever the experimental setup, we’ll never be able to determine the action of the arrow better than with an uncertainty of h. We’ll never be able to beat this principle of quantum mechanics: "If the state of the arrow before the measurement is unknown, quantum measurements are always undetermined." This indeterminacy principle was first formulated by the famous physicist Werner Heisenberg.

Sunday, July 6, 2008

Second video sequence: Schrödinger equation

I just uploaded the second Common Sense Quantum Physics video sequence on YouTube, presenting how the Schrödinger equation may be applied to ordinary macroscopic arrows. I personally had some difficulty grasping the physical meaning of the evolution equations of QM when I studied them back in the eighties. I wish someone had taught me QM this way, so I hope it'll help some students.

Monday, June 30, 2008

First video sequence of Common Sense Quantum Physics

Here is the videoscript: So, how could we explain this? Let us simulate this polaroid filtering with ordinary objects. [1] photons are best represented by little arrows and

Wednesday, April 30, 2008

Common sense thoughts about geometry

The month of April has nearly come to an end and I haven't posted a single line to my blog!
These past two months, I used all my spare time to ponder famous geometric problems like the trisection of an angle or the doubling of the cube. These are "proven" to be impossible to solve with the classical "Euclidean tools", the compass and the unmarked straightedge. The more I dig into it, the more I feel that such impossibility proofs are just ways to reassure ourselves about our advanced scientific tools.

If an angle exists, the third of that angle also exists. A simple solution hides somewhere beyond scholarly hindrances. The same for the double of a cube: if a cube of unit volume exists, a cube with double the volume is determined. Or take the squaring of the circle: if a circle has some physical meaning, any other figure may be constructed starting from the area of the circle. Impossibility "proofs" just obstruct the road to a solution. Solutions may be found by playing, playing with real objects, following our intuition. For an intuitive approach to the squaring of the circle, have a look at the tools of dakhiometry originated by Nguyen Tan Tai.

Monday, March 31, 2008

What if the LHC won't reveal the Higgs boson?

Next Sunday there is an open day at the Large Hadron Collider. If you are in the proximity of Geneva, don't miss the chance to visit that pharaonic work! The main goal of the particle collider is to reveal the Higgs boson, the particle that's supposed to give mass to all other massive particles. Hereby an instructive video:

But what if we discover no Higgs boson? How do we proceed? What are the plans? I guess we'll find a plethora of other particles at those unexplored energies. We'll need to set up new supermodels, supertheories. That will generate decades, if not centuries, of theoretical work and speculation, which will call for Xtra LHC's, and so on.

Before heading enthusiastically towards Xtra LHC's (because an XLHC will not cost billions of dollars, but hundreds of billions of dollars) I vote for a quiet time.
Let all theorists and experimentalists take a paid sabbatical year and independently develop their own vision of quantum reality, the simpler the better. There are a lot of other mechanisms that could make particles gain inertia, especially when you think of particles as having concrete reality, like little rotating needles or hooks or any structured non-circular extension. Let us first work out all those alternative paths before taking the XLHC highway, if we'll still be there ;-) A wink at what's awaiting us according to the LHC lawsuit in Honolulu.

Tuesday, February 12, 2008

Classical Mechanics vs. Quantum Mechanics

In order to illustrate the analogy I presented in my preceding post between Newton's laws and the laws of QM, I composed the following images (you may view them better at Wikiversity).

Newtonian mechanics considers relative motions (translational motions, or rotational motions with respect to a reference point). Uniform motion takes place when no net force acts on the body. The dynamical laws of quantum mechanics concern an absolute motion: the spinning of an object represented by an arrow. Such an object is in uniform spinning motion when no force acts on it.

It is often said that quantum mechanics comes into play when the scale of the elements of the system is microscopic. This restricted view hides the fact that the fundamental difference between CM and QM is not a difference of scale but a difference in how objects and their motions are described. CM focuses on objects that are located at points and on their relative motions. QM focuses on objects whose orientations evolve absolutely. This allows us to approach QM intuitively, by reasoning about how arrow-like objects would behave in real life.

Sunday, January 27, 2008

SPQR - Simplify Physics's Quantum Rules

In my introduction post, I qualified quantum physics as being nearer to intuition than classical physics. As this is not a widespread opinion, it needs some explanation.
Understand me well: I don't say that quantum physics is better understood than classical physics. I merely infer that, because quantum physics deals with elementary particles, its principles should be easier to grasp than the classical principles. But as our reasoning has been formatted since our first physics classes into a classical mould, we are not trained to analyse the ordinary world quantum-mechanically.

The framework of classical physics did not emerge easily during the course of history. It took many efforts from men like Newton (represented by Gotlib in the image), Lagrange or Hamilton to formulate the classical principles. Newton had the exceptional capacity to put the classical laws into a few comprehensive sentences. Let us recall his three laws:

As far as I know, an analogously clear and simple formulation of quantum physics does not exist. There are some attempts by physicists like Feynman that are on the right path, for example his three general principles concerning probability amplitudes (in chapter 3 of the quantum mechanics volume of his Lectures on Physics) or his explanation of path integrals with rotating arrows (in QED). But we have not yet succeeded in expressing the quantum laws in an ordinary way, as Newton expressed the classical laws. We are very much in need of Simplifying Physics's Quantum Rules, in order to make it more accessible to populusque.

Why not take our inspiration from Newton? Let me have a try. Newton considered translational motion. Quantum evolution is about the phase change of arrows, i.e. the self-rotational (spinning) motion of arrows. So we could put it this way:

1. Every arrow-like body continues in its state of rest, or of uniform spinning motion, unless it is compelled to change that state by forces impressed upon it.
2. The change of spinning motion is proportional to the perturbative force impressed.
3. The mutual actions of two spinning arrow-like bodies upon each other are always equal, and directed to contrary parts.
That's nearly all there is to the fundamental laws of quantum physics. The first two laws may be formulated as evolution laws with a constant angular velocity and a position-dependent force potential. The complex factor i merely expresses that the vector difference between two subsequent orientations of the spinning vector AB is perpendicular to AB.

The main difference between quantum and classical mechanics then appears to be that QM addresses spinning (absolute) motion while CM addresses translational (relative) motion. If quantum physics is introduced to beginners in such a way, I guess they would gain insight into quantum behaviour faster, without being hindered by classical reasoning. More about it at the related wikiversity project.
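As an illustration of the first two laws (my own sketch, not from the post), one can represent a free "spinning arrow" as a unit complex number and check that the update rule built from the factor i, psi → psi + i·ω·Δt·psi, indeed rotates the arrow at constant angular velocity, since i·psi is perpendicular to psi:

```python
import cmath

# A sketch of laws 1-2 for a free arrow (my own illustration): represent
# the spinning arrow as a complex number psi. Each step adds a small
# vector 1j*omega*dt*psi perpendicular to psi, which for small dt is a
# rotation at constant angular velocity omega.
omega = 2.0      # angular velocity in rad/s (arbitrary value)
dt = 1e-4        # time step
steps = 10_000   # total simulated time: 1 s
psi = 1 + 0j
for _ in range(steps):
    psi += 1j * omega * dt * psi
phase = cmath.phase(psi)
print(round(phase, 3))   # → 2.0, i.e. omega * t: uniform spinning motion
```

Adding a position-dependent potential would simply modulate omega from step to step, which is the discrete analogue of the evolution law sketched above.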
Saturday, July 25, 2015

Frank Wilczek: Ugly Answer to Ugly Question

In his new book A Beautiful Question: Finding Nature's Deep Design, Frank Wilczek (Nobel Prize in Physics 2004) starts out stating the questions (or paradoxes) which motivated the development of modern physics. In the quantum world of atoms and light, Nature treats us to a show of strange and seemingly impossible feats. Two of these feats seemed, when discovered, particularly impossible:

• Light comes in lumps. This is demonstrated in the photoelectric effect, as we’ll discuss momentarily. It came as a shock to physicists. After Maxwell’s electromagnetic theory was confirmed in Hertz’s experiments (and later many others), physicists had thought they understood what light is. Namely, light is electromagnetic waves. But electromagnetic waves are continuous.

• Atoms have parts, but are perfectly rigid. Electrons were first clearly identified in 1897, by J. J. Thomson. The most basic facts about atoms were elucidated over the following fifteen years or so. In particular: atoms consist of tiny nuclei containing almost all of their mass and all of their positive electric charge, surrounded by enough negatively charged electrons to make a neutral whole. Atoms come in different sizes, depending on the chemical element, but they’re generally in the ballpark of $10^{-8}$ centimeters, a unit of length called an angstrom. Atomic nuclei, however, are a hundred thousand times smaller. The paradox: How can such a structure be stable? Why don’t the electrons simply succumb to the attractive force from the nucleus and dive in?

• These paradoxical facts led Einstein and Bohr, respectively, to propose some outrageous, half-right hypotheses that served as footholds on the steep ascent to modern quantum theory.

• After epic struggles, played out over more than a decade of effort and debate, an answer emerged. It has held up to this day, and its roots have grown so deep that it seems unlikely ever to topple.
Wilczek then proceeds to prepare us to accept the answers offered by the modern physics of quantum mechanics as the result of epic struggles:

• The framework known as quantum theory, or quantum mechanics, was mostly in place by the late 1930s.

• Quantum theory is not a specific hypothesis, but a web of closely intertwined ideas. I do not mean to suggest quantum theory is vague—it is not.

• With rare and usually temporary exceptions, when faced with any concrete physical problem, all competent practitioners of quantum mechanics will agree about what it means to address that problem using quantum theory.

• But few, if any, would be able to say precisely what assumptions they have made to get there. Coming to terms with quantum theory is a process, through which the work will teach you how to do it.

We learn that quantum mechanics is not built on specific hypotheses or assumptions, but is nevertheless not vague; rather, it is a process monitored by competent practitioners. In any case, Wilczek proceeds to give us a glimpse of the basic hypothesis:

• In quantum theory’s description of the world, the fundamental objects are ....wave functions.

• Any valid physical question about a physical system can be answered by consulting its wave function.

• But the relation between question and answer is not straightforward. Both the way that wave functions answer questions and the answers they give have surprising—not to say weird—features.

OK, so we are now enlightened by understanding that the answers that come out are weird. Wilczek continues:

• I will focus on the specific sorts of wave functions we need to describe the hydrogen atom:

• We are interested, then, in the wave function that describes a single electron bound by electric forces to a tiny, much heavier proton.

• Before discussing the electron’s wave function, we’ll do well to describe its probability cloud. The probability cloud is closely related to the wave function.
The probability cloud is easier to understand than the wave function, and its physical meaning is more obvious, but it is less fundamental. (Those oracular statements will be fleshed out momentarily.)

• Quantum mechanics does not give simple equations for probability clouds. Rather, probability clouds are calculated from wave functions.

• The wave function of a single particle, like its probability cloud, assigns an amplitude to all possible positions of the particle. In other words, it assigns a number to every point in space.

• To pose questions, we must perform specific experiments that probe the wave function in different ways.

• You get probabilities, not definite answers.

• You don’t get access to the wave function itself, but only a peek at processed versions of it.

• Answering different questions may require processing the wave function in different ways.

• Each of those three points raises big issues.

Wilczek then tackles these issues by posing new questions, or, lacking answers, by retreating to an admirable attitude of humility and a lesson in wisdom:

• The first raises the issue of determinism. Is calculating probabilities really the best we can do?

• The second raises the issue of many worlds. What does the full wavefunction describe, when we’re not peeking? Does it represent a gigantic expansion of reality, or is it just a mind tool, no more real than a dream?

• The third raises the issue of complementarity....It is a lesson in humility that quantum theory forces to our attention. To probe is to interact, and to interact is potentially to disturb.

• Complementarity is both a feature of physical reality and a lesson in wisdom.

We see that Wilczek sells the usual broth of strange and seemingly impossible feats, weird features, and outrageous half-right hypotheses, all raising big issues. Wilczek sums up with the following quote of Walt Whitman, under the headline COMPLEMENTARITY AS WISDOM:

Do I contradict myself?
Very well, then, I contradict myself,
I am large, I contain multitudes.

But physics is not poetry, and contradictory poetry does not justify contradictory physics. Contradictory mathematical physics cannot be true real physics, not even meaningful poetry. To get big by contradiction is a trade of politics, which is ugly and not beautiful.

Nevertheless, Wilczek started his Nobel lecture as follows:

• In theoretical physics, paradoxes are good. That’s paradoxical, since a paradox appears to be a contradiction, and contradictions imply serious error. But Nature cannot realize contradictions. When our physical theories lead to paradox we must find a way out. Paradoxes focus our attention, and we think harder.

We understand that to Wilczek and modern physicists, contradictions are good rather than catastrophic, and the more paradox the better, since it makes physicists focus their attention and think harder. Beautiful.

For more excuses, see What Is Quantum Theory. Wilczek here retells the story of the Father (or Dictator) of Quantum Mechanics, Niels Bohr: the paradox presented itself in 1925, but what happened to the hope of progress? Is paradoxical physics the physics of our time? Does light come in lumps? Why are atoms stable? Despite paradoxes, no real progress for 90 years!!??

PS1 Here is the question killing the probability interpretation of the wave function: since the wave function for the ground state of hydrogen is non-zero even far away from the nucleus, does it mean that there is a non-zero chance of experimentally detecting a hydrogen ground-state electron far away from the nucleus it is associated with? Or, the other way around, since the wave function is maximal at zero distance from the nucleus, does it mean that one will mostly find the electron hiding inside the nucleus?

PS2 Beauty is an expression of order and deep design, not of disorder and lack of design.
An atomistic world ruled by chance can be beautiful only to a professional statistician obsessed with computing mean values.

PS3 Not Even Wrong presents the book as follows: Frank Wilczek’s new book, A Beautiful Question, is now out and if you’re at all interested in issues about beauty and the deep structure of reality, you should find a copy and spend some time with it. As he explains at the very beginning:

• This book is a long meditation on a single question:

• Does the world embody beautiful ideas?

PS4 Wilczek expresses a tendency shared by many modern physicists of pretending to know all of chemistry "in principle", simply by writing down a Schrödinger equation on a piece of paper, yet without actually being able to predict anything specific, because solutions of the equation cannot be computed:

• Wave functions that fully describe the physical state of several electrons occupy spaces of very high dimension. The wave function for two electrons lives in a six-dimensional space, the wave function for three electrons lives in a nine-dimensional space, and so forth. The equations for these wave functions rapidly become quite challenging to solve, even approximately, and even using the most powerful computers. This is why chemistry remains a thriving experimental enterprise, even though in principle we know the equations that govern it, and that should enable us to calculate the results of experiments in chemistry without having to perform them.

In this illusion game, the uncomputability of Schrödinger's many-dimensional equation relieves the physicist of the real task of explaining the actual physics of chemistry, while the physicist can still safely take the role of being in charge of the principal theoretical chemistry underlying a "thriving experimental enterprise", which "in principle" is superfluous. Beautiful?
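For what it is worth, the question in PS1 can be put to numbers within the textbook interpretation itself (my own sketch, using the standard ground-state density; the nuclear radius is an assumed round value). The probability density |ψ|² ∝ e^(−2r/a0) is indeed maximal at r = 0, but the radial probability 4πr²|ψ|² vanishes there, so the probability of finding the electron inside the nucleus comes out minuscule:

```python
import math

# Sketch: probability of detecting a ground-state hydrogen electron
# "inside the kernel" (r < nuclear radius), using the textbook density
# |psi|^2 = exp(-2r/a0) / (pi * a0^3).
a0 = 5.29e-11        # Bohr radius in meters
R_nuc = 1.0e-15      # rough nuclear radius in meters (assumed value)
n = 1000
dr = R_nuc / n
P_inside = sum(
    4 * math.pi * (i * dr) ** 2
    * math.exp(-2 * i * dr / a0) / (math.pi * a0 ** 3) * dr
    for i in range(n)
)
print(f"{P_inside:.1e}")   # ≈ 9e-15: essentially never inside the nucleus
```

On this account the two halves of the PS1 question pull in opposite directions: the density peaks at the nucleus, but the volume there is so small that detections effectively never occur inside it.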
Monday, July 13, 2015

Johan Rockström: CO2 Global Warming May Prevent New Ice Age

Johan Rockström, Executive Director of the Stockholm Resilience Centre and leading Swedish CO2 global warming alarmist, admits that emission of CO2 may prevent a new ice age (1:24 into the news program):

• Paradoxically this appears to be a positive effect of global warming.

This adds another paradox to the already long list of paradoxes of CO2 global warming.

Saturday, July 4, 2015

Collapse of Modern Physics: Mainau Declaration 2015 on Climate Change

The Mainau Declaration 2015 on Climate Change, made at the 65th Lindau Nobel Laureate Meeting on Mainau Island at Lake Constance and signed by the following physicists, among 35 other Laureates: Steven Chu, Peter Doherty, David Gross, Brian Schmidt and George Smoot, states that (with my numbering and comments added):

1. We believe that our world today faces another threat (global warming) of comparable magnitude to that of nuclear weapons. (Comparable in what sense?)

2. Successive generations of scientists have helped create a more and more prosperous world. (Physicists are helping mankind to prosperity)

3. This prosperity has come at the cost of a rapid rise in the consumption of the world’s resources. (Poor people are consuming more and more)

4. If left unchecked, our ever-increasing demand for food, water, and energy will eventually overwhelm the Earth’s ability to satisfy humanity’s needs, and will lead to wholesale human tragedy. (Ultimate doomsday scenario. Purpose?)

5. Already, scientists who study Earth’s climate are observing the impact of human activity. (What impact?)

6. In response to the possibility of human-induced climate change, the United Nations established the Intergovernmental Panel on Climate Change (IPCC) to provide the world’s leaders a summary of the current state of relevant scientific knowledge. (Scientists will tell what to do)

7.
While by no means perfect, we believe that the efforts that have led to the current IPCC Fifth Assessment Report represent the best source of information regarding the present state of knowledge on climate change. (Best source compared to what?)

8. We say this not as experts in the field of climate change, but rather as a diverse group of scientists who have a deep respect for and understanding of the integrity of the scientific process. (Physicists know nothing about climate)

9. Although there remains uncertainty as to the precise extent of climate change, the conclusions of the scientific community contained in the latest IPCC report are alarming, especially in the context of the identified risks of maintaining human prosperity in the face of greater than a 2°C rise in average global temperature. (Uncertainty as to precise extent? But alarming! Identified risks? Human prosperity to whom?)

10. The report concludes that anthropogenic emissions of greenhouse gases are the likely cause of the current global warming of the Earth. Predictions from the range of climate models indicate that this warming will very likely increase the Earth’s temperature over the coming century by more than 2°C above its pre-industrial level unless dramatic reductions are made in anthropogenic emissions of greenhouse gases over the coming decades. (Effect of dramatic reduction? On climate? On people?)

11. Based on the IPCC assessment, the world must make rapid progress towards lowering current and future greenhouse gas emissions to minimize the substantial risks of climate change. (Rapid progress? Minimize substantial risks?)

12. We believe that the nations of the world must take the opportunity at the United Nations Climate Change Conference in Paris in December 2015 to take decisive action to limit future global emissions. (Decisive actions by whom? Limit future global emissions, for whom?)

13.
This endeavor will require the cooperation of all nations, whether developed or developing, and must be sustained into the future in accord with updated scientific assessments. (Physicists will tell the world what to do)

14. Failure to act will subject future generations of humanity to unconscionable and unacceptable risk. (Failure to do what? What is unconscionable and unacceptable risk?)

The fact that Physics Nobel Laureates sign a political document like this can be seen as a logical consequence of the collapse in modern physics of the rationality of classical physics, a collapse into stupidity which will subject future generations of humanity to unconscionable and unacceptable risks.
Which algebra vectors satisfy this (Trying to derive Schrödinger)

1. Mar 27, 2010 #1

I was wondering why the wave-function in quantum mechanics is complex. There are a lot of threads in the physics section and I've downloaded a lot of papers, but they seem quite technical. So I'd like to examine the following idea (sorry if I use sloppy terms ;) ):

I have an orthonormal basis of vectors/functions which can be labeled with two indices [itex]f_{E,k}[/itex] and which are "two-dimensional". It's not just a column vector, but rather [itex]f_{E,k}(x,t)[/itex]. The vector algebra is undetermined (could be any linear algebra). Now I have two conditions
[tex]\forall a: f_{E,k}(x+Ea,t+ka)=f_{E,k}(x,t)[/tex]
(btw, the second is in a way equivalent to [itex]E=mc^2[/itex])

Is it now possible to prove that the only orthonormal solution for this is a complex algebra with

Please add definitions (for scalar product and so on) as appropriate! Or which other definitions/conditions do I need to get that complex solution uniquely?

2. Mar 27, 2010 #2

I notice the above factors k^2 and E have to be replaced by operators. Anyway, the task is to find some axioms similar to the above which yield the complex algebra as the only solution.

3. Mar 29, 2010 #3

What is V(x,t)? Because I have a feeling you mean something specific, not just a placeholder for an arbitrary function.

4. Mar 29, 2010 #4

Basically the above problem comes from the Schrödinger equation. Sometimes I might be missing concepts, but maybe you can add them. The function V(x,t) is supposed to be an arbitrary real-valued function. k^2 and E are real linear operators. So now I'm wondering which other conditions I need to add to make the vector basis of the algebraic solution isomorphic to the above complex solution. Please formulate this more mathematically correctly whoever can. The aim is to show the vector basis must be complex numbers.
Apr 10, 2010 #5

What are you talking about? This barely makes sense. Is English your first language?

Apr 10, 2010 #6

Ice, if you have trouble with both language and maths, please devote your time to complaining in other forums.

Apr 13, 2010 #7

It might just be me, but you haven't specified anything about your "k^2" being a second derivative with respect to position. Also, what you are sort of writing is the time-independent Schrödinger equation, so having a time-dependent potential function V is not correct.
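A quick numerical aside (my addition, not part of the thread): the standard complex plane wave f(x, t) = e^(i(kx − Et)) does satisfy the poster's invariance condition f(x + Ea, t + ka) = f(x, t) for every a, since the extra phase k·(Ea) − E·(ka) cancels identically. The parameter values below are arbitrary:

```python
import cmath
import random

# Sketch: check the invariance condition for a complex plane wave.
k, E = 1.7, 0.9   # arbitrary real parameters (hypothetical values)

def f(x, t):
    return cmath.exp(1j * (k * x - E * t))

random.seed(0)
devs = [
    abs(f(x + E * a, t + k * a) - f(x, t))
    for x, t, a in (
        (random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(-5, 5))
        for _ in range(100)
    )
]
print(max(devs) < 1e-9)   # → True: the condition holds for any shift a
```

This only shows that the complex plane wave is *a* solution; the thread's open question of whether a complex algebra is the *only* one is not settled by such a check.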
Sunday, August 9, 2015

A very brief introduction to the electron correlation energy

RHF is often not accurate enough for predicting the change in energies due to a chemical reaction, no matter how big a basis set we use.  The reason is the error due to the molecular orbital approximation; the energy difference due to this approximation is known as the correlation energy.  Just like we improve the LCAO approximation by including more terms in an expansion, we can improve the orbital approximation by an expansion in terms of Slater determinants $$\Psi ({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N}) \approx \sum\limits_{i = 1}^L {{C_i}{\Phi _i}({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N})} $$ The “basis set” of Slater determinants $\{\Phi_i \}$ is generated by first computing an RHF wave function $\Phi_0$ as usual, which also generates a lot of virtual orbitals, and then generating other determinants with these orbitals.  For example, for an atom or molecule with two electrons the RHF wave function is $\left| {{\phi _1}{{\bar \phi }_1}} \right\rangle $ and we have $K-1$ virtual orbitals (${\phi _2}, \ldots ,{\phi _K}$, where $K$ is the number of basis functions), which can be used to make other Slater determinants like $\Phi _1^2 = \left| {{\phi _1}{{\bar \phi }_2}} \right\rangle $ and $\Phi _{11}^{22} = \left| {{\phi _2}{{\bar \phi }_2}} \right\rangle $  (Figure 1).

Figure 1. Schematic representation of the electronic structure of some of the determinants used in the expansion above.

Conceptually (in analogy to spectroscopy), an electron is excited from an occupied to a virtual orbital: $\left| {{\phi _1}{{\bar \phi }_2}} \right\rangle$ represents a single excitation and $\left| {{\phi _2}{{\bar \phi }_2}} \right\rangle $ a double excitation.  For systems with more than two electrons, higher excitations (like triple and quadruple excitations) are also possible.
In general $$\Psi  \approx {C_0}{\Phi _0} + \sum\limits_a {\sum\limits_r {C_a^r\Phi _a^r} }  + \sum\limits_a {\sum\limits_b {\sum\limits_r {\sum\limits_s {C_{ab}^{rs}\Phi _{ab}^{rs}} } } }  +  \ldots $$ The expansion coefficients can be found using the variational principle $$\frac{{\partial E}}{{\partial {C_i}}} = 0 \ \textrm{for all} \ i$$ and this approach is called configuration interaction (CI).  The more excitations we include (i.e. the larger $L$ in the expansion above), the more accurate the expansion and the resulting energy become.  If the expansion includes all possible excitations (known as full CI, FCI), then we have a numerically exact wave function for the particular basis set, and if we use a basis set where the HF limit is reached then we have a numerically exact solution to the electronic Schrödinger equation!  That’s the good news …

The bad news is that the FCI “basis set of determinants” is much, much larger than the LCAO basis set (i.e. $L >> K$), $$L = \frac{{K!}}{{N!(K - N)!}}$$ where $N$ is the number of electrons.  Thus, an RHF/6-31G(d,p) calculation on water involves 24 basis functions and roughly $\tfrac{1}{8}K^4$ = 42,000 2-electron integrals, but a corresponding FCI/6-31G(d,p) calculation involves nearly 2,000,000 Slater determinants.

Just like finding the LCAO coefficients involves the diagonalization of the Fock matrix, finding the CI coefficients ($C_i$) and the lowest energy also involves a matrix diagonalization, $$\bf{E} = {{\bf{C}}^t}{\bf{HC}}$$ where $\bf{E}$ is a diagonal matrix whose smallest value ($E_0$) corresponds to the variational energy minimum.  While the Fock matrix is a $K \times K$ matrix, the CI Hamiltonian ($\bf{H}$) is an $L \times L$ matrix.  Just holding the 2 million by 2 million matrix for the water molecule using the 6-31G(d,p) basis set requires tens of thousands of gigabytes of memory!
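These counts are easy to verify (a small sketch; the $\tfrac{1}{8}K^4$ integral estimate and $L = K!/(N!(K-N)!)$ are taken directly from the text):

```python
from math import comb

# Counting work for FCI on water, using the formulas quoted above.
K = 24   # basis functions for H2O with 6-31G(d,p)
N = 10   # electrons in H2O

n_integrals = K ** 4 // 8   # rough number of 2-electron integrals
L = comb(K, N)              # Slater determinants in the FCI expansion
print(n_integrals)          # → 41472, i.e. roughly 42,000
print(L)                    # → 1961256, i.e. nearly 2,000,000
# memory needed to hold the L x L CI Hamiltonian in double precision:
print(round(L ** 2 * 8 / 1e12, 1), "TB")   # → 30.8 TB
```

The last line makes concrete why holding the full CI Hamiltonian in memory is out of the question for routine work.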
Clever programming and large computers actually make an FCI/6-31G(d,p) calculation on $\ce{H2O}$ possible, but FCI is clearly not a routine molecular modeling tool.  Using, for example, only single excitations (called CI singles, CIS) $${\Psi ^{CIS}} = {C_0}{\Phi _0} + \sum\limits_a {\sum\limits_r {C_a^r\Phi _a^r} } $$ is feasible; however, it doesn’t result in any improvement.  The CIS Hamiltonian has three kinds of contributions $$\langle \Phi^{CIS} | \hat H | \Phi^{CIS} \rangle \rightarrow \left\{ \begin{array}{l} \langle \Phi _0 | \hat H | \Phi_0 \rangle = E_{RHF}\\ \langle \Phi _0 | \hat H | \Phi_a^r \rangle = F_{ar} = 0 \\ \langle \Phi _a^r | \hat H | \Phi_a^r \rangle \end{array} \right.$$ which means that when this matrix is diagonalized $E_0=E_{RHF}$.  Thus CIS does not give us any correlation energy.  However, CIS is not completely useless: the second lowest value of $\bf{E}$, $E_1$, represents the energy of the first excited state, at roughly RHF quality.

Thus, we need at least single and double excitations (CISD) to get any correlation energy.  However, in general, including doubles already results in an $\bf{H}$ matrix that is impractically large for a matrix diagonalization.  CI, i.e. finding the $C_i$ coefficients using the variational principle, is therefore rarely used to compute the correlation energy.

Perhaps the most popular means of finding the $C_i$’s is perturbation theory, a standard mathematical technique in physics for computing corrections to a reference state (in this case RHF).  Perturbation theory using this reference is called Møller-Plesset perturbation theory, and there are several successively more accurate and more expensive variants: MP2 (which includes some double excitations), MP3 (more double excitations than MP2), and MP4 (single, double, triple, and some quadruple excitations).
Another approach is called coupled cluster, which has a similar hierarchy of methods, such as CCSD (singles and doubles) and CCSD(T) (CCSD plus an estimate of the triples contributions).  In terms of accuracy vs. expense, MP2 is the best choice of a cheap correlation method, followed by CCSD and CCSD(T).  For example, MP4 is not too much cheaper than CCSD(T), but the latter is much more accurate.  In fact, for many practical purposes it is rarely necessary to go beyond CCSD(T) in terms of accuracy, provided a triple-zeta or higher basis set is used.  However, CCSD(T) is usually too computationally demanding for molecules with more than 10 non-hydrogen atoms.  In general, the computational expense of these correlated methods scales much worse than RHF with respect to basis set size: MP2 ($K^5$), CCSD ($K^6$), and CCSD(T) ($K^7$).  These methods also require a significant amount of computer memory compared to RHF, which is often the practical limitation of these post-HF methods.  Finally, it should be noted that all these calculations also imply an RHF calculation as the first step.

In conclusion, we now have ways of systematically improving the wave function, and hence the energy, by increasing the number of basis functions ($K$) and the number of excitations ($L$), as shown in Figure 2.

Figure 2. Schematic representation of the increase in accuracy due to using better correlation methods and larger basis sets.

The most important implication of this is that in principle it is possible to check the accuracy of a given level of theory without comparison to experiment!  If going to a better correlation method or a bigger basis set does not change the answer appreciably, then we have a genuine prediction with only the charges and masses of the particles involved as empirical input.  These kinds of calculations are therefore known as ab initio or first-principles calculations.
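The formal scaling exponents quoted above translate into steep cost growth. A small sketch (the $K^4$ figure for RHF comes from the 2-electron integral count mentioned earlier):

```python
# How much more expensive each method becomes when the basis set size K
# is doubled, using the formal scaling exponents quoted in the text.
scalings = {"RHF": 4, "MP2": 5, "CCSD": 6, "CCSD(T)": 7}

factors = {method: 2 ** p for method, p in scalings.items()}
for method, f in factors.items():
    print(f"{method}: doubling K multiplies the cost by {f}x")
```

So going from a double-zeta to a roughly twice-as-large basis set multiplies a CCSD(T) calculation's cost by about 128, which is why these methods hit a wall at modest molecular sizes.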
In practice, different properties converge at different rates, so it is better to monitor the convergence of the property you are actually interested in rather than the total energy.  For example, energy differences (e.g. between two conformers) converge sooner than the molecular energies themselves.  Furthermore, the molecular structure (bond lengths and angles) tends to converge faster than the energy difference.  So it is common to optimize the geometry at a low level of theory [e.g. RHF/6-31G(d)] followed by an energy computation (a single point energy) at a higher level of theory [e.g. MP2/6-311+G(2d,p)].  This combined level of theory is denoted MP2/6-311+G(2d,p)//RHF/6-31G(d).

Finally, the correlation energy is not just a fine-tuning of the RHF result but introduces an important intermolecular force called the dispersion energy.  The dispersion energy (also known as the induced dipole-induced dipole interaction) is a result of the simultaneous excitation of at least two electrons and is not accounted for in the RHF energy.  For example, the stacked orientation of base pairs in DNA is largely a result of dispersion interactions and cannot be predicted using RHF.

Monday, August 3, 2015

Computational Chemistry Highlights: July issue

The July issue of Computational Chemistry Highlights is out.

Simulations of Chemical Reactions with the Frozen Domain Formulation of the Fragment Molecular Orbital Method
In quantum mechanics, the uncertainty principle, also known as Heisenberg's uncertainty principle or Heisenberg's indeterminacy principle, is any of a variety of mathematical inequalities[1] asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables, such as position x and momentum p, can be known. Introduced first in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.[2] The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard[3] later that year and by Hermann Weyl[4] in 1928: $$\sigma_x \, \sigma_p \geq \frac{\hbar}{2}$$ (ħ is the reduced Planck constant, h/(2π)). Historically, the uncertainty principle has been confused[5][6] with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in a system. Heisenberg utilized such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[7] It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[8] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. 
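The Kennard inequality can be checked numerically. The following sketch (in Python with NumPy, using natural units where ħ = 1) evaluates σx and σp for a Gaussian wave packet, obtaining the momentum-space density via a discrete Fourier transform:

```python
import numpy as np

hbar = 1.0  # natural units
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Unit-width Gaussian wave packet, normalized on the grid
psi = np.exp(-x**2 / 2.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position statistics from |psi(x)|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum-space density via FFT; p = hbar * k
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p)           # discrete normalization
p = hbar * k
mean_p = np.sum(p * prob_p)
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

product = sigma_x * sigma_p
print(product)  # ~0.5 = hbar/2: a Gaussian saturates the Kennard bound
```

A Gaussian is the minimum-uncertainty state, so the computed product sits right at ħ/2; replacing `psi` with any non-Gaussian normalizable profile gives a strictly larger product.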
Thus, the uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology.[9] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[10][note 1] Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting[12] or quantum optics[13] systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.[14] The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience.[15] So it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. 
However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: if so, then it does not have a unique associated measurement value, as the system is not in an eigenstate of that observable.[16]

Wave mechanics interpretation (Ref. [10])

Propagation of de Broglie waves in 1d: real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the colour opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature reverses sign, so the amplitude begins to decrease again, and vice versa; the result is an alternating amplitude: a wave.

According to the de Broglie hypothesis, every object in the universe is associated with a wave, and it is this wave character that gives rise to the uncertainty phenomenon. The position of the particle is described by a wave function $\Psi(x,t)$. The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is $$\psi(x) \propto e^{ik_0 x} = e^{ip_0 x/\hbar}.$$ The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is $$\operatorname{P}[a \leq X \leq b] = \int_a^b |\psi(x)|^2 \,\mathrm{d}x.$$

Matrix mechanics interpretation (Ref. [10])

In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators $\hat A$ and $\hat B$, one defines their commutator as $$[\hat A, \hat B] = \hat A \hat B - \hat B \hat A.$$ In the case of position and momentum, the commutator is the canonical commutation relation $$[\hat x, \hat p] = i\hbar\,\hat I,$$ where $\hat I$ is the identity operator. Suppose, for the sake of proof by contradiction, that a position eigenstate $\psi$ (with eigenvalue $x_0$) is also a right eigenstate of momentum, with constant eigenvalue p0. 
If this were true, then one could write $$\hat x \hat p \,\psi = \hat x\,(p_0\,\psi) = x_0 p_0\,\psi = p_0 x_0\,\psi = \hat p\,(x_0\,\psi) = \hat p \hat x\,\psi,$$ so that $[\hat x,\hat p]\,\psi = 0$. On the other hand, the above canonical commutation relation requires that $$[\hat x,\hat p]\,\psi = i\hbar\,\psi \neq 0,$$ so no state can simultaneously be an eigenstate of both position and momentum.

Robertson–Schrödinger uncertainty relations

The most common general form of the uncertainty principle is the Robertson uncertainty relation.[17] For an arbitrary Hermitian operator $\hat A$ we can associate a standard deviation $$\sigma_A = \sqrt{\langle \hat A^2 \rangle - \langle \hat A \rangle^2},$$ where the brackets $\langle\,\cdot\,\rangle$ indicate an expectation value. For a pair of operators $\hat A$ and $\hat B$, we may define their commutator as $$[\hat A,\hat B] = \hat A \hat B - \hat B \hat A.$$ In this notation, the Robertson uncertainty relation is given by $$\sigma_A \, \sigma_B \geq \left|\frac{1}{2i}\langle[\hat A,\hat B]\rangle\right| = \frac{1}{2}\left|\langle[\hat A,\hat B]\rangle\right|.$$ The Robertson uncertainty relation immediately follows from a slightly stronger inequality, the Schrödinger uncertainty relation,[18] $$\sigma_A^2 \, \sigma_B^2 \geq \left|\frac{1}{2}\langle\{\hat A,\hat B\}\rangle - \langle\hat A\rangle\langle\hat B\rangle\right|^2 + \left|\frac{1}{2i}\langle[\hat A,\hat B]\rangle\right|^2,$$ where we have introduced the anticommutator, $\{\hat A,\hat B\} = \hat A \hat B + \hat B \hat A$.

• For position and linear momentum, the canonical commutation relation $[\hat x,\hat p] = i\hbar$ implies the Kennard inequality from above: $\sigma_x \, \sigma_p \geq \hbar/2$.

• In non-relativistic mechanics, time is privileged as an independent variable. Nevertheless, in 1945, L. I. Mandelshtam and I. E. Tamm derived a non-relativistic time–energy uncertainty relation, as follows.[26][27] For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator $\hat B$, the following formula holds: $$\sigma_E \, \frac{\sigma_B}{\left|\frac{\mathrm{d}\langle\hat B\rangle}{\mathrm{d}t}\right|} \geq \frac{\hbar}{2},$$ where σE is the standard deviation of the energy operator (Hamiltonian) in the state ψ, and σB stands for the standard deviation of B. Although the second factor on the left-hand side has dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B: $$\tau_B = \frac{\sigma_B}{\left|\frac{\mathrm{d}\langle\hat B\rangle}{\mathrm{d}t}\right|}.$$ In other words, this is the time interval (Δt) after which the expectation value $\langle\hat B\rangle$ changes appreciably.

A counterexample

Suppose we consider a quantum particle on a ring, where the wave function depends on an angular variable $\theta$, which we may take to lie in the interval $[0, 2\pi]$. Define "position" and "momentum" operators $\hat\theta$ and $\hat p$ by $$\hat\theta\,\psi(\theta) = \theta\,\psi(\theta), \qquad \hat p\,\psi(\theta) = -i\hbar\,\frac{\mathrm{d}\psi}{\mathrm{d}\theta},$$ where we impose periodic boundary conditions on $\hat p$. Note that the definition of $\hat\theta$ depends on our choice to have $\theta$ range from 0 to $2\pi$. 
These operators satisfy the usual commutation relations for position and momentum operators, $[\hat\theta,\hat p] = i\hbar$.[31] Now let $\psi_n$ be any of the eigenstates of $\hat p$, which are given by $\psi_n(\theta) = e^{in\theta}/\sqrt{2\pi}$. Note that these states are normalizable, unlike the eigenstates of the momentum operator on the line. Note also that the operator $\hat\theta$ is bounded, since $\theta$ ranges over a bounded interval. Thus, in the state $\psi_n$, the uncertainty of $p$ is zero and the uncertainty of $\theta$ is finite, so that $$\sigma_\theta\,\sigma_p = 0 < \frac{\hbar}{2}.$$ Although this result appears to violate the Robertson uncertainty principle, the paradox is resolved when we note that $\psi_n$ is not in the domain of the operator $\hat p\hat\theta$, since multiplication by $\theta$ disrupts the periodic boundary conditions imposed on $\hat p$.[32] Thus, the derivation of the Robertson relation, which requires $\hat\theta\hat p\,\psi$ and $\hat p\hat\theta\,\psi$ to be defined, does not apply. (These operators also furnish an example of operators satisfying the canonical commutation relations but not the Weyl relations.[33]) For the usual position and momentum operators $\hat x$ and $\hat p$ on the real line, no such counterexamples can occur. As long as $\sigma_x$ and $\sigma_p$ are defined in the state $\psi$, the Heisenberg uncertainty principle holds, even if $\psi$ fails to be in the domain of $\hat x\hat p$ or of $\hat p\hat x$.[34] (Refs [10][19])

Quantum harmonic oscillator stationary states

For the stationary states of the quantum harmonic oscillator, the variances may be computed directly: $$\sigma_x^2 = \frac{\hbar}{m\omega}\left(n + \frac{1}{2}\right), \qquad \sigma_p^2 = m\hbar\omega\left(n + \frac{1}{2}\right).$$ The product of these standard deviations is then $$\sigma_x\,\sigma_p = \hbar\left(n + \frac{1}{2}\right).$$ In particular, the above Kennard bound[3] is saturated for the ground state n=0, for which the probability density is just the normal distribution.

Quantum harmonic oscillator with Gaussian initial condition

Position (blue) and momentum (red) probability densities for an initially Gaussian distribution. From top to bottom, the animations show the cases Ω=ω, Ω=2ω, and Ω=ω/2. Note the tradeoff between the widths of the distributions.

Consider an initial Gaussian state $$\psi(x,0) = \left(\frac{m\Omega}{\pi\hbar}\right)^{1/4} e^{-m\Omega x^2/(2\hbar)},$$ where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the full time-dependent solution. 
After many cancelations, the probability densities reduce to Gaussians whose widths oscillate in time, and from the resulting relations we can conclude that $$\sigma_x(t)\,\sigma_p(t) \geq \frac{\hbar}{2},$$ with equality holding only when Ω=ω.

Coherent states

A coherent state is a right eigenstate of the annihilation operator, which may be represented in terms of Fock states as $$|\alpha\rangle = e^{-\frac{|\alpha|^2}{2}} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\,|n\rangle.$$ Every coherent state saturates the Kennard bound, $$\sigma_x\,\sigma_p = \frac{\hbar}{2},$$ with position and momentum each contributing an amount $\sqrt{\hbar/2}$ in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound, although the individual contributions of position and momentum need not be balanced in general.

Particle in a box

For a particle in the nth energy eigenstate of a one-dimensional box, the product of the standard deviations is $$\sigma_x\,\sigma_p = \frac{\hbar}{2}\sqrt{\frac{n^2\pi^2}{3} - 2}.$$

Constant momentum

For a free Gaussian wave packet of constant mean momentum, σp is conserved while the position spread grows, such that the uncertainty product can only increase with time as $$\sigma_x(t)\,\sigma_p = \sigma_p\,\sqrt{\sigma_x(0)^2 + \left(\frac{\sigma_p\,t}{m}\right)^2}.$$

Additional uncertainty relations

Mixed states

The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.[35]

The Maccone–Pati uncertainty relations

The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Maccone and Pati give non-trivial bounds on the sum of the variances for two incompatible observables.[36] For two non-commuting observables $A$ and $B$ the first stronger uncertainty relation is given by $$\sigma_A^2 + \sigma_B^2 \geq \pm i\,\langle\psi|[A,B]|\psi\rangle + \left|\langle\psi|(A \pm iB)|\bar\psi\rangle\right|^2,$$ where $\sigma_A^2 = \langle\psi|A^2|\psi\rangle - \langle\psi|A|\psi\rangle^2$, $\sigma_B^2 = \langle\psi|B^2|\psi\rangle - \langle\psi|B|\psi\rangle^2$, $|\bar\psi\rangle$ is a normalized vector that is orthogonal to the state of the system $|\psi\rangle$, and one should choose the sign of $\pm i\langle\psi|[A,B]|\psi\rangle$ to make this real quantity a positive number. The second stronger uncertainty relation is given by $$\sigma_A^2 + \sigma_B^2 \geq \frac{1}{2}\left|\langle\bar\psi_{A+B}|(A+B)|\psi\rangle\right|^2,$$ where $|\bar\psi_{A+B}\rangle$ is a state orthogonal to $|\psi\rangle$. The form of $|\bar\psi_{A+B}\rangle$ implies that the right-hand side of the new uncertainty relation is nonzero unless $|\psi\rangle$ is an eigenstate of $(A+B)$. One may note that $|\psi\rangle$ can be an eigenstate of $(A+B)$ without being an eigenstate of either $A$ or $B$. However, when $|\psi\rangle$ is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. 
But the lower bound in the new relation is nonzero unless $|\psi\rangle$ is an eigenstate of both.

Phase space

In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on real star-square functions integrated against the Wigner function. Choosing a suitable test function, we arrive at the positivity condition, and, explicitly, after algebraic manipulation, the Robertson–Schrödinger bound.

Systematic and statistical errors

The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation $\sigma$. Heisenberg's original version, however, was dealing with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. If we let $\varepsilon_A$ represent the error (i.e., inaccuracy) of a measurement of an observable A and $\eta_B$ the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Ozawa,[6] encompassing both systematic and statistical errors, holds: $$\varepsilon_A\,\eta_B + \varepsilon_A\,\sigma_B + \sigma_A\,\eta_B \geq \frac{\hbar}{2}.$$ Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as $$\varepsilon_A\,\eta_B \geq \frac{\hbar}{2}.$$ The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.[38][39] Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors $\sigma_A$ and $\sigma_B$. There is increasing experimental evidence[8][40][41][42] that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality. Using the same formalism,[1] it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time). The two simultaneous measurements on A and B are necessarily[43] unsharp or weak. 
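Returning to the statistical (Robertson) bound, it is easy to verify it on a finite-dimensional example. The sketch below checks $\sigma_A \sigma_B \geq \frac{1}{2}|\langle[A,B]\rangle|$ for the Pauli observables σx and σy in the spin-up state, a state chosen here purely for illustration:

```python
import numpy as np

# Pauli matrices and the spin-up state |0>
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)

def expval(op, state):
    return np.vdot(state, op @ state).real

def sigma(op, state):
    return np.sqrt(expval(op @ op, state) - expval(op, state) ** 2)

lhs = sigma(sx, psi) * sigma(sy, psi)       # product of standard deviations
comm = sx @ sy - sy @ sx                     # commutator, equal to 2i*sigma_z
rhs = 0.5 * abs(np.vdot(psi, comm @ psi))    # Robertson lower bound
print(lhs, rhs)  # both equal 1.0: the bound is saturated for this state
```

For this state the bound is saturated; choosing another state (e.g. an eigenstate of σx) makes the right-hand side vanish, which is exactly the triviality that the Maccone–Pati sum relations above are designed to avoid.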
It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson[1] and Ozawa relations we obtain $$\varepsilon_A\,\eta_B + \varepsilon_A\,\sigma_B + \sigma_A\,\eta_B + \sigma_A\,\sigma_B \geq \hbar.$$ The four terms can be factored as $$(\varepsilon_A + \sigma_A)\,(\eta_B + \sigma_B) \geq \hbar.$$ Defining $\bar\varepsilon_A = \varepsilon_A + \sigma_A$ as the inaccuracy in the measured values of the variable A and $\bar\eta_B = \eta_B + \sigma_B$ as the resulting fluctuation in the conjugate variable B, Fujikawa[44] established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors: $$\bar\varepsilon_A\,\bar\eta_B \geq \hbar.$$

Quantum entropic uncertainty principle

For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period.[24][45][46][47] Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic uncertainty.[48] This conjecture, also studied by Hirschman[49] and proven in 1975 by Beckner[50] and by Iwo Bialynicki-Birula and Jerzy Mycielski,[51] is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b), the Shannon information entropies $$H_a = -\int |f(a)|^2 \log |f(a)|^2 \,\mathrm{d}a, \qquad H_b = -\int |g(b)|^2 \log |g(b)|^2 \,\mathrm{d}b$$ are subject to the following constraint: $$H_a + H_b \geq \log\frac{e}{2},$$ where the logarithms may be in any base. 
The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(p) have dimensions of inverse length and inverse momentum respectively, but the entropies may be rendered dimensionless by $$H_x = -\int |\psi(x)|^2 \log\!\left(x_0\,|\psi(x)|^2\right)\mathrm{d}x, \qquad H_p = -\int |\varphi(p)|^2 \log\!\left(p_0\,|\varphi(p)|^2\right)\mathrm{d}p,$$ where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wave function φ(p), the above constraint can be written for the corresponding entropies as $$H_x + H_p \geq \log\!\left(\frac{e\,h}{2\,x_0\,p_0}\right),$$ where h is Planck's constant. Depending on one's choice of the x0 p0 product, the expression may be written in many ways. If x0 p0 is chosen to be h, then $$H_x + H_p \geq \log\frac{e}{2}.$$ If, instead, x0 p0 is chosen to be ħ, then $$H_x + H_p \geq \log(e\,\pi).$$ If x0 and p0 are chosen to be unity in whatever system of units are being used, then $$H_x + H_p \geq \log\!\left(\frac{e\,h}{2}\right),$$ where h is interpreted as a dimensionless number equal to the value of Planck's constant in the chosen system of units. The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities[52] $$H_x \leq \frac{1}{2}\log\!\left(\frac{2\pi e\,\sigma_x^2}{x_0^2}\right), \qquad H_p \leq \frac{1}{2}\log\!\left(\frac{2\pi e\,\sigma_p^2}{p_0^2}\right),$$ it follows that $$\sigma_x\,\sigma_p \geq \frac{\hbar}{2}.$$ In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof). 
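The saturation of the entropic bound by the normal distribution can be verified numerically. The sketch below works in natural units (x0 = p0 = ħ = 1, so the bound is log(eπ)) and integrates the differential entropies of the position and momentum densities of a minimum-uncertainty Gaussian, for which both densities are normal with variance 1/2:

```python
import numpy as np

# Minimum-uncertainty Gaussian in natural units: sigma_x = sigma_p = 1/sqrt(2),
# so |psi(x)|^2 and |phi(p)|^2 are both N(0, 1/2) densities.
x = np.linspace(-15, 15, 20001)
dx = x[1] - x[0]
rho_x = np.exp(-x**2) / np.sqrt(np.pi)   # |psi(x)|^2
rho_p = np.exp(-x**2) / np.sqrt(np.pi)   # |phi(p)|^2, same shape by Fourier duality

def entropy(rho, d):
    """Differential Shannon entropy of a sampled density (natural log)."""
    mask = rho > 1e-300                  # avoid log(0) in the tails
    return -np.sum(rho[mask] * np.log(rho[mask])) * d

H = entropy(rho_x, dx) + entropy(rho_p, dx)
print(H, np.log(np.pi * np.e))  # equal: the Gaussian saturates the entropic bound
```

Each entropy separately equals ½ log(πe), so the sum lands exactly on the log(eπ) bound; any non-Gaussian density of the same variance would have strictly smaller entropy and a strictly larger variance product.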
A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is then $$\operatorname{P}[x_j] = \int_{(j-1/2)\,\delta x - c}^{(j+1/2)\,\delta x - c} |\psi(x)|^2 \,\mathrm{d}x.$$ To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as $$H_x = -\sum_{j=-\infty}^{\infty} \operatorname{P}[x_j] \log \operatorname{P}[x_j].$$ Under the above definition, the entropic uncertainty relation is $$H_x + H_p \geq \log\!\left(\frac{e}{2}\right) - \log\!\left(\frac{\delta x\,\delta p}{h}\right).$$ Here we note that δx δp/h is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.

Harmonic analysis

Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform ƒ̂.[53][54][55]

Signal processing

Stated alternatively, "One cannot simultaneously sharply localize a signal (function f) in both the time domain and frequency domain (ƒ̂, its Fourier transform)".

Benedicks's theorem

Amrein–Berthier[56] and Benedicks's theorem[57] intuitively say that the set of points where f is non-zero and the set of points where ƒ̂ is nonzero cannot both be small. Specifically, it is impossible for a function f in L2(R) and its Fourier transform ƒ̂ to both be supported on sets of finite Lebesgue measure. A more quantitative version, due to Nazarov, is[58][59] $$\|f\|_{L^2(\mathbf{R}^d)} \leq C e^{C|S||\Sigma|} \left( \|f\|_{L^2(S^c)} + \|\hat f\|_{L^2(\Sigma^c)} \right),$$ where S and Σ are the sets of finite measure in space and frequency. One expects that the factor CeC|S||Σ| may be replaced by CeC(|S||Σ|)1/d, which is only known if either S or Σ is convex.

Hardy's uncertainty principle

The mathematician G. H. Hardy formulated the following uncertainty principle:[60] it is not possible for f and ƒ̂ to both be "very rapidly decreasing." 
Specifically, if f in L2(R) is such that $$|f(x)| \leq C\,(1+|x|)^N e^{-a\pi x^2} \quad \text{and} \quad |\hat f(\xi)| \leq C\,(1+|\xi|)^N e^{-b\pi \xi^2}$$ ($C > 0$, $N$ an integer), then, if ab > 1, f = 0, while if ab = 1, then there is a polynomial P of degree at most N such that $f(x) = P(x)\,e^{-a\pi x^2}$. This was generalized by Beurling to d dimensions: under an analogous integrability condition, $f(x) = P(x)\,e^{-\pi\langle Ax,x\rangle}$, where P is a polynomial of degree (N−d)/2 and A is a real d×d positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander[61] (the case $d=1$, $N=0$) and Bonami, Demange, and Jaming[62] for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.[63] A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref.[64]

Theorem. If a tempered distribution $f$ satisfies Gaussian decay conditions analogous to the above, then $$f(x) = P(x)\,e^{-\pi\langle Ax,x\rangle}$$ for some convenient polynomial P and real positive definite matrix A of type d × d.

Werner Heisenberg and Niels Bohr

Kennard[3] in 1927 first proved the modern inequality $$\sigma_x\,\sigma_p \geq \frac{\hbar}{2}.$$

Terminology and translation

Heisenberg's microscope

The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by utilizing the observer effect of an imaginary microscope as a measuring device.[68] He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.[70]:49–50 If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum, and hence the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around. 
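The microscope argument is easy to make quantitative at the order-of-magnitude level. Taking Δx ≈ λ/(2 sin θ) for the diffraction-limited resolution and Δp ≈ 2h sin θ/λ for the range of transverse momentum kicks (the wavelength and aperture angle below are arbitrary illustrative values), the product is of order h regardless of the wavelength chosen:

```python
import math

# Order-of-magnitude estimate for Heisenberg's microscope (illustrative numbers)
h = 6.626e-34          # Planck constant, J*s
wavelength = 500e-9    # photon wavelength, m (assumed visible light)
sin_theta = 0.5        # sine of the half-angle aperture (assumed)

dx = wavelength / (2 * sin_theta)      # diffraction-limited position resolution
dp = 2 * h * sin_theta / wavelength    # spread of photon momentum kicks

print(dx * dp / h)  # ~1: the product is of order h, independent of lambda
```

Shortening the wavelength to sharpen Δx increases the photon momentum h/λ, and hence Δp, by exactly the compensating factor, which is the whole content of the thought experiment.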
Critical reactions

Einstein's slit

A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.[72]

Einstein's box

EPR paradox for entangled particles

In 1964, John Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out Einstein's basic assumption that led him to the suggestion of his hidden variables. These hidden variables may be "hidden" because of an illusion that occurs during observations of objects that are too large or too small. This illusion can be likened to rotating fan blades that seem to pop in and out of existence at different locations and sometimes seem to be in the same place at the same time when observed. This same illusion manifests itself in the observation of subatomic particles. Both the fan blades and the subatomic particles are moving so fast that the illusion is seen by the observer. Therefore, it is possible that there would be predictability of the subatomic particles' behavior and characteristics to a recording device capable of very high speed tracking. ... Ironically, this fact is one of the best pieces of evidence supporting Karl Popper's philosophy of invalidation of a theory by falsification experiments. That is to say, here Einstein's "basic assumption" became falsified by experiments based on Bell's inequalities. For the objections of Karl Popper to the Heisenberg inequality itself, see below. 
Popper's criticism

Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist.[79] He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".[79][80] In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables. In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften,[81] and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum Theory and the Schism in Physics.

Many-worlds uncertainty

Free will

Some scientists, including Arthur Compton[84] and Martin Heisenberg,[85] have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature.[86] The standard view, however, is that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.[86]

See also

1. ^ a b c Sen, D. (2014). "The uncertainty relations in quantum mechanics" (PDF). Current Science. 107 (2): 203–218.  2. ^ a b c Heisenberg, W. 
(1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Zeitschrift für Physik (in German), 43 (3–4): 172–198, Bibcode:1927ZPhy...43..172H, doi:10.1007/BF01397280. . Annotated pre-publication proof sheet of Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, March 21, 1927. 3. ^ a b c Kennard, E. H. (1927), "Zur Quantenmechanik einfacher Bewegungstypen", Zeitschrift für Physik (in German), 44 (4–5): 326–352, Bibcode:1927ZPhy...44..326K, doi:10.1007/BF01391200.  4. ^ Weyl, H. (1928), Gruppentheorie und Quantenmechanik, Leipzig: Hirzel  5. ^ Furuta, Aya (2012), "One Thing Is Certain: Heisenberg's Uncertainty Principle Is Not Dead", Scientific American  6. ^ a b Ozawa, Masanao (2003), "Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement", Physical Review A, 67 (4): 42105, Bibcode:2003PhRvA..67d2105O, arXiv:quant-ph/0207121 , doi:10.1103/PhysRevA.67.042105  7. ^ Werner Heisenberg, The Physical Principles of the Quantum Theory, p. 20 8. ^ a b Rozema, L. A.; Darabi, A.; Mahler, D. H.; Hayat, A.; Soudagar, Y.; Steinberg, A. M. (2012). "Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements". Physical Review Letters. 109 (10): 100404. Bibcode:2012PhRvL.109j0404R. PMID 23005268. arXiv:1208.0034v2 . doi:10.1103/PhysRevLett.109.100404.  9. ^ Indian Institute of Technology Madras, Professor V. Balakrishnan, Lecture 1 – Introduction to Quantum Physics; Heisenberg's uncertainty principle, National Programme of Technology Enhanced Learning on YouTube 10. ^ a b c d L.D. Landau, E.M. Lifshitz (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1.  Online copy. 12. ^ Elion, W. J.; M. Matters, U. Geigenmüller & J. E. Mooij; Geigenmüller, U.; Mooij, J. E. 
(1994), "Direct demonstration of Heisenberg's uncertainty principle in a superconductor", Nature, 371 (6498): 594–595, Bibcode:1994Natur.371..594E, doi:10.1038/371594a0  13. ^ Smithey, D. T.; M. Beck, J. Cooper, M. G. Raymer; Cooper, J.; Raymer, M. G. (1993), "Measurement of number–phase uncertainty relations of optical fields", Phys. Rev. A, 48 (4): 3159–3167, Bibcode:1993PhRvA..48.3159S, PMID 9909968, doi:10.1103/PhysRevA.48.3159  14. ^ Caves, Carlton (1981), "Quantum-mechanical noise in an interferometer", Phys. Rev. D, 23 (8): 1693–1708, Bibcode:1981PhRvD..23.1693C, doi:10.1103/PhysRevD.23.1693  16. ^ Claude Cohen-Tannoudji; Bernard Diu; Franck Laloë (1996), Quantum mechanics, Wiley-Interscience: Wiley, pp. 231–233, ISBN 978-0-471-56952-7  17. ^ a b Robertson, H. P. (1929), "The Uncertainty Principle", Phys. Rev., 34: 163–64, Bibcode:1929PhRv...34..163R, doi:10.1103/PhysRev.34.163  18. ^ a b Schrödinger, E. (1930), "Zum Heisenbergschen Unschärfeprinzip", Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse, 14: 296–303  19. ^ a b Griffiths, David (2005), Quantum Mechanics, New Jersey: Pearson  20. ^ Riley, K. F.; M. P. Hobson and S. J. Bence (2006), Mathematical Methods for Physics and Engineering, Cambridge, p. 246  21. ^ Davidson, E. R. (1965), "On Derivations of the Uncertainty Principle", J. Chem. Phys., 42 (4): 1461, Bibcode:1965JChPh..42.1461D, doi:10.1063/1.1696139  22. ^ a b Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer, p. 245  23. ^ Jackiw, Roman (1968), "Minimum Uncertainty Product, Number‐Phase Uncertainty Product, and Coherent States", J. Math. Phys., 9 (3): 339, Bibcode:1968JMP.....9..339J, doi:10.1063/1.1664585  24. ^ a b Carruthers, P.; Nieto, M. M. (1968), "Phase and Angle Variables in Quantum Mechanics", Rev. Mod. Phys., 40 (2): 411–440, Bibcode:1968RvMP...40..411C, doi:10.1103/RevModPhys.40.411  25. ^ Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer  27. 
Semiclassical Theory of Electronically Non-Adiabatic Dynamics

William Miller, University of California, Berkeley (UC Berkeley)

The focus of my research over the last decade has been on developing semiclassical (SC) theory into a practical way of adding quantum effects to classical molecular dynamics (MD) simulations of large, complex molecular systems. A particularly interesting and important aspect of this is the ability to describe electronically non-adiabatic processes in a fashion that treats nuclear and electronic degrees of freedom (DOF) in an equivalent dynamical framework. This is accomplished by using a model developed by Meyer and Miller (MM) [J. Chem. Phys. 70, 3214 (1979)] for replacing a finite set of electronic states of a molecular system (i.e., the various potential energy surfaces and their couplings) by a classical Hamiltonian involving the nuclear and (collective) electronic DOF. Much later, Stock and Thoss (ST) [Phys. Rev. Lett. 78, 578 (1997)] showed that the MM model is actually not a 'model', but rather a 'representation' of the nuclear-electronic system; i.e., were the MM nuclear-electronic Hamiltonian taken as a Hamiltonian operator and used in the Schrödinger equation, the exact (quantum) nuclear-electronic dynamics would be obtained. In recent years various initial value representations (IVRs) of SC theory have been used with the MM Hamiltonian to describe electronically non-adiabatic processes. Of special interest is the fact that although the classical trajectories generated by the MM Hamiltonian (which are the 'input' for an SC-IVR treatment) are 'Ehrenfest trajectories', when they are used within the SC-IVR framework the nuclear motion emerges from regions of non-adiabaticity on one potential energy surface (PES) or another, and not on an average PES as in the traditional Ehrenfest model.
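For orientation, the MM classical Hamiltonian referred to above is usually written in the following form. This equation is not part of the original abstract; it is a sketch using one common convention (hbar = 1, and the zero-point term -1, i.e. gamma = 1/2, per electronic mode):

```latex
H_{\mathrm{MM}}(\mathbf{P},\mathbf{R},\mathbf{p},\mathbf{x})
  = \frac{\mathbf{P}^{2}}{2M}
  + \sum_{n} \tfrac{1}{2}\bigl(p_{n}^{2} + x_{n}^{2} - 1\bigr)\, H_{nn}(\mathbf{R})
  + \sum_{n<m} \bigl(p_{n} p_{m} + x_{n} x_{m}\bigr)\, H_{nm}(\mathbf{R})
```

Here (R, P) are the nuclear coordinates and momenta, each electronic state n is represented by a classical harmonic degree of freedom (x_n, p_n), and H_nm(R) are the diabatic potential energy surfaces (n = m) and their couplings (n ≠ m).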
Very recently an even more ambitious SC description of electronic DOF—one which replaces the fermionic creation and annihilation operators in the general second-quantized many-electron Hamiltonian by functions of classical action-angle variables—has been seen to provide an excellent description of transmission of electrons through a molecular junction. This opens up the possibility of being able to use classical MD simulations (of electronic and nuclear DOF) to model the many aspects of current interest in 'molecular electronics'.
The Laplacian is the following linear differential operator: d2/dx2 + d2/dy2 + d2/dz2, so it is the same thing as the divergence of the gradient. It can be generalized to dimensions other than 3 in the obvious way.

The Laplacian occurs in many different situations in physics. For example, the following partial differential equation (Poisson's equation) describes the electric potential generated by a distribution of electric charge:

(d2/dx2 + d2/dy2 + d2/dz2) V = -q/e0

where V is the potential, e0 is a constant (the permittivity of free space) and q is the charge density.

Another example where the Laplacian makes an appearance is the wave equation:

(d2/dt2 - v2 (d2/dx2 + d2/dy2 + d2/dz2)) p = 0

where p(x,y,z,t) is for example the air pressure in a certain place at a certain time and v is the velocity of the waves in question. This partial differential equation can, with appropriate boundary conditions, be used to model the propagation of sound or radio waves, for example.

Here is the heat equation, which describes the propagation of heat:

(d/dt - k (d2/dx2 + d2/dy2 + d2/dz2)) T = 0

where k is a constant describing heat conduction and T is the temperature.

Here is the Schrödinger equation, which describes the quantum-mechanical behavior of a free particle if relativity is neglected:

(i d/dt + hbar/2m (d2/dx2 + d2/dy2 + d2/dz2)) psi = 0

where psi is the wave function, i is the imaginary unit, hbar is Planck's constant divided by 2 pi, and m is the mass of the particle. As you see, the Schrödinger equation looks almost like the heat equation, except for the imaginary unit in front of the time derivative, which makes it behave quite differently!
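The divergence-of-the-gradient description above translates directly into the standard 7-point finite-difference stencil. Here is a minimal numerical sketch in Python; the function name and test grid are illustrative, not from the original writeup:

```python
# Approximate the 3-D Laplacian with the 7-point central-difference stencil:
# (f(x+h) + f(x-h) + ... - 6 f(x)) / h^2, second-order accurate in h.
import numpy as np

def laplacian(f, h):
    """Discrete Laplacian of a 3-D array f sampled on a grid of spacing h.

    Boundary entries are left at zero; only interior points are computed.
    """
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1, 1:-1] = (
        f[2:, 1:-1, 1:-1] + f[:-2, 1:-1, 1:-1]    # d2/dx2
        + f[1:-1, 2:, 1:-1] + f[1:-1, :-2, 1:-1]  # d2/dy2
        + f[1:-1, 1:-1, 2:] + f[1:-1, 1:-1, :-2]  # d2/dz2
        - 6.0 * f[1:-1, 1:-1, 1:-1]
    ) / h**2
    return lap

# Sanity check on f = x^2 + y^2 + z^2, whose Laplacian is exactly 2+2+2 = 6.
h = 0.1
x = np.arange(11) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
f = X**2 + Y**2 + Z**2
lap = laplacian(f, h)  # interior values come out as 6 (up to rounding)
```

Because the second central difference of a quadratic is exact, this check recovers the continuum value 6 at every interior grid point, independent of h.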
Wednesday, June 30, 2010

Déjà vu

Last week, at the workshop in Bonn, I was in for a nasty surprise. Sitting there, listening to one talk after the other about black holes, I saw pictures reappear that I had made. Four different pictures of mine, in four different talks. All without picture credits. When I told the speakers later that they'd been using a picture that in some cases took me hours to make, without even putting my name below it, they apologized. One shrugged his shoulders and said "It came up in Google." I checked that; it did come up when doing a Google image search for "Black Hole Evaporation," the source being my home page. I'm not surprised by this, my homepage has always been well indexed by Google. Apparently I was expecting too much when thinking people could at least look at the front page and find my name. I will admit that I am very dismayed by this. Yes, I too sometimes use other people's figures and plots in my talks, but I usually add a source, if one can be found. It's more complicated with photos, which will typically appear in so many copies on some dozen websites that it's next to impossible to find out who originally took the photo. In any case, some of the pictures I saw reappearing in those talks I don't even hold the copyright on. They were published in one of my papers, and with that the copyright went to the publisher. I don't mind at all if people use my pictures, otherwise I wouldn't upload them to my website. I receive the occasional email from somebody asking if they can use one or the other for a talk or a paper and I always say yes. (I once was asked for a picture to be reprinted in a popular science book, but when the publisher of my picture was asked for the reprint permission they said no, for reasons I still don't understand.) But of course I do expect that people add at least my name below it.
It has previously happened that I saw pictures of mine reappear; this one showing an evaporating black hole seems to be the favorite. But that workshop convinced me to add my name in a corner of all these pictures. Sure, one can cut it out, but it takes a deliberate effort. This also reminds me that I once received a paper for peer review. It was written in dramatically bad English; then all of a sudden there were two paragraphs that weren't only readable but sounded eerily familiar. A quick check confirmed my suspicion that it was an introduction from one of my own papers. They had cited my paper somewhere, but it was by no means clear they had copied half a page from it. Again, my paper was published, the copyright was with the publisher. The paper I reviewed wasn't only badly written but also wrong, so it didn't get published. However, I later wrote to the authors making it very clear that this is not an appropriate way to cite: either they mark it as a quotation, or they rewrite it. They apologized and then rearranged a few words here and there. I know other people who have had exactly the same experience with one of their papers. I find it very worrisome that more and more people so unashamedly make use of others' work without even thinking about it. My mother is a high school teacher and as a standard procedure she has to check every essay for whether it's been copied from elsewhere. Evidently, there are still kids stupid enough to try nevertheless. I know these checks are being done in many other places too; there's even software for it, so you don't have to Google every sentence manually. An extreme case that I know of was a PhD candidate who had copied together half of his thesis from other people's review articles, including equations, references and footnotes. He did cite the papers he used, but certainly didn't mark the "borrowed" pieces as quotations.
It is clear that when thousands of people write introductions to the same topic, many of them will sound quite similar. I also understand that when you find a nice picture for your talk online, it seems superfluous to spend time on it yourself when Google hands it to you on a silver platter. Certainly you have better things to do than making a picture for your talk, right? But what you're doing is simply using someone else's effort and selling it as your own. So next time, spend the three seconds and check whose homepage you've been downloading your pictures from. And here's a recent copyright story that I found hilarious: "Greek man sues Swedish firm over Turkish yoghurt pic"

"A Greek man has sued a dairy firm in southern Sweden after his picture ended up on a Turkish yoghurt product. The man whose picture adorns the Turkish yoghurt product, manufactured by Lindahls dairy in Jönköping, argues that the company does not have permission to use his image [...] The man, who lives in Greece, was made aware of the use of his picture on the popular Swedish product when an acquaintance living in Stockholm recognized his bearded friend [...] In his writ the man has underlined that he is not Turkish, he is Greek, and lives in Greece, and the use of his picture is thus misleading both for those who know him and for buyers of the product. Lindahls dairy has expressed surprise at the writ and argues that the image was bought from a picture agency [...]"

Monday, June 28, 2010

The left-handed Piano

As a left-hander, I have an early hands-on experience with the concept of chirality, or handedness: It can be quite difficult to cut a piece of paper with the left hand using standard scissors; the blades usually do not close precisely, resulting in a frayed cut. And of course, scissors with modern, "ergonomically formed" handles cannot be used with the left hand in the first place.
There is a small niche market for all kinds of chiral partners of standard right-handed everyday products and tools: left-handed scissors, left-handed can-openers, left-handed pencil sharpeners. However, I do not use any of them, and use standard instruments with the right hand instead. Today, I heard on the radio about something really amazing in the market for left-handed products: There are left-handed pianos! Invented by Geza Loso, musician, piano teacher, left-hander and father of three left-handed kids, they are exact mirror images of ordinary pianos, with the pitch rising from right to left. As Geza Loso explains on his website: For the first time left-handed people receive a real chance to learn how to play the piano on an adequate instrument. Left-handed people would basically use their right hand to accompany and the skilled hand to handle the main functions of a piano-play, to play the melody. This is very decisive for every artistic interpretation. The left-handed piano will be distributed by the Leipzig piano-manufacturing company Blüthner. Chief executive Christian Blüthner doesn't expect a big commercial success, but thinks that the left-handed piano demonstrates his company's inventiveness. And I am wondering if my career with the piano might have been longer than a couple of lessons if the instrument had been left-handed.

Saturday, June 26, 2010

Hello from Bonn

Stefan and I are currently in Bonn for a workshop on "Black Holes in a Violent Universe." Bonn is the former German capital and a quite charming city, though not what you'd expect from a capital. So probably a good thing Berlin has taken over the burden. Germany is collectively in a good mood these days since the Germans won Friday's soccer game, and everybody is looking forward to Sunday's game. We're staying in a small hotel near the river Rhine. Needless to say, our room is on the 4th floor without an elevator. On the other hand, we have a small roof patio.
And here's what we found looking out of the window on the side opposite the patio: A small staircase leading to a platform (the top of the downstairs windows) with a railing. That little walkway then ends, leaving you with the only option of a four-floor jump down onto the paved street. I was thinking it might be the emergency exit, but the evacuation plan on our door points in another direction. So I'm not sure what this is. An invitation for suicide? A diving platform in case the river floods? My talk about the black hole information loss problem went very well (slides here). I wish you all a great weekend.

Thursday, June 24, 2010

Guestpost: Marcelo Gleiser

[A month ago, I was at a workshop at Perimeter Institute and I reported on a talk by Marcelo Gleiser. Marcelo's talk was very interesting and thought-provoking. It touched upon many different topics, from the process of knowledge discovery to the question of whether we should be searching for a fundamental theory of everything. In my post I expressed my opinion that believing in a theory of everything, if you take the name literally, is of course religion, not science, because if we had one we could never know whether one day we might discover something that the theory does not explain. But the whole question of whether it exists is somewhat beside the point; the actual question (for me, the pragmatist) is what is a promising approach to take that will lead to progress. Marcelo has now written a reply to some of the points that came up in my post and the comments, and to some other reactions that he got. This reply can also be found at his blog 13.7.]

To Unify Or Not To Unify: That Is (Not) The Question

My latest book, A Tear at the Edge of Creation, came out in the US in early April. In it, I present a critique of some deeply ingrained ideas in physics.
In particular, I examine the question of unification and the search for a theory of everything, arriving at conclusions that—judging from some of the reactions I’ve been getting in lectures and in various blogs around the world—are shocking to many people. Of course, I welcome criticism and skepticism. We are used to this in scientific debates. What’s surprising to me, and perhaps alarming, is the speed with which superficial commentary in the blogosphere quickly escalates into complete misunderstanding of what it is that I am saying and why. So, I think the time is ripe for sketching a reply, even though the space here won’t do justice to the details of the argument. I do hope, however, that this will at least inspire critics and skeptics to actually read the book and judge for themselves and not through a few lines on a blog post. Among other things, in the book I suggest that the notion of a final theory, that is, a theory that encompasses complete knowledge of how matter particles interact with one another, is impossible. First, note that “final theory” here deals only with fundamental particle physics. Any claim that physical theories could be complete in the sense of describing (and predicting) all natural phenomena, including why you’re reading this, shouldn’t be taken seriously. First, we must consider if a complete theory of matter does exist. Second, assuming it does, if we can ever get to it. The first question is quite nebulous. We have no way of knowing if such a complete theory exists. We don’t even know what a “complete” theory is. You may believe it does and spend your life searching for it. That’s a personal choice. Or, like most physicists, you may believe this is nonsense, more metaphysics than physics. The second question, though, is tangible. Can humans achieve complete knowledge of the subatomic world? To answer this question, we must look at how science actually works. 
In a post at her blog Back Reaction, physicist Sabine Hossenfelder expressed her surprise at my statement that it took me 15 years to figure out that the notion of a final theory is faulty. Sorry Sabine, I guess old habits are hard to break. At least, I did see the light in the end. Happily, she agreed with my basic argument, that since what we know of the world depends on our measurements of the world, we can never be sure that we arrived at a final explanation: as tools advance, there is always room for new discoveries. Knowledge is limited by discovery. I go on to describe how the unifications that we have achieved so far, beautiful and enlightening as they are, are approximations and not “perfect” in any sense. The electroweak theory, a unification of the electromagnetic and the weak nuclear forces, is not a true unification but a mixing of the two interactions. Even electromagnetism, the paradigm of unification, only works flawlessly in the absence of sources. To be a truly perfect unification, objects called magnetic monopoles would have to exist. And even though they could still be found, their properties are clearly very different from the ubiquitous electric monopoles, e.g. point-like particles like electrons. We have partial unifications and we should keep on looking for more of them. This is the job of theoretical physicists. The mistake is made when symmetry, a very useful tool in physics, is taken as dogma. I don’t agree with Sabine when she says that it doesn’t matter what you believe in as long as the search “helps you in your research.” I think beliefs are very important, and to a large extent drive what it is that we are searching and the cultural context in which research is undertaken. Wrong beliefs can have very negative consequences. And can keep us blind for a long time. 
So, one of the points I make is that science is a construction that evolves in time to expand our body of knowledge through a combination of intuition and experimental consensus. There is no end point to it, no final truth to arrive at. Now, here are some of the things that have been said about my arguments: "Marcelo is disillusioned with unification; he has closed his mind to string theory; he couldn't find a Theory of Everything and now thinks no one else can find one either; he's just frustrated; he doesn't understand the role of symmetry in physics (!); his timing is bad because the LHC will be revealing new physics." George Musser, in a Scientific American blog post, wrote "My own reaction was that although it's useful to caution against clinging to preconceived ideas about a final theory, Gleiser was too insistent on seeing the glass of physics as half-empty." Musser goes on to say how much we do know about Nature and how much of that is due to the fact that simple laws govern natural phenomena. It's true that Musser (and Sabine) were basing their comments on a lecture I gave recently at the Perimeter Institute and not on my book (you can watch the video here). Even so, as I tried to make clear in my text, I would never put down the remarkable achievements of science, much less be foolish enough to say that there are no patterns and symmetries in Nature! After all, that is how science works, by searching for simplifying explanations of natural phenomena. Having the LHC turned on and able to probe physics at energies higher than ever before is a very exciting prospect. The same general defensive zeitgeist was echoed by Neil Turok, the current director of the Perimeter Institute. We recently participated in a televised debate hosted by TV Ontario on Stephen Hawking's ideas. We were a group of six physicists, hosted by Steve Paikin, and had a great time.
But at the end, when I made my arguments about final unification and the limits of knowledge, Turok accused me of pessimism! If anything, my book is a celebration of the human mind and all that we have achieved in such a short time. The fact that I point out that science has limitations doesn't detract from all of its achievements. Or from all that lies ahead. I'm not disillusioned for not having found a TOE or for believing it doesn't exist. I'm actually relieved! The reactions that I have encountered only reinforce my point, that there is great confusion these days about the cultural role of science and scientists. Science is not a new form of religion, scientists are not holy men and women, and we don't have, nor can we have, all the answers. As I wrote in Tear at the Edge of Creation, "Human understanding of the world is forever a work in progress. That we have learned so much, speaks well of our creativity. That we want to know more, speaks well of our drive. That we think we can know all, speaks only of our folly." Hopefully, this acceptance of our perennial ignorance won't be interpreted as an opening to religion and supernatural explanations. Let me make my position clear: behind our ignorance there is only the science we still don't know.

Monday, June 21, 2010

Friday, June 18, 2010

The summer solstice is near and the days here in Stockholm are getting longer and longer. The other day I woke up early and, looking out of the window, saw that it was dawning already. Or so I thought. The clock revealed that it wasn't the dawn I was seeing, but that the sun hadn't even set. My biorhythm seems to be a little confused these days. Along with midsummer, the long-awaited wedding of Sweden's Crown Princess Victoria is also coming closer. Tomorrow Victoria will exchange I-do's in Stockholm Cathedral with her former personal trainer Daniel Westling.
It's a giant marketing event: The Swedes have declared Stockholm's airport Arlanda the "Official Love Airport 2010," and for the two weeks before the wedding we had to endure "LOVE Stockholm 2010," a "two-week festival of love, right in the centre of Stockholm." You can buy postcards and posters of the happy couple in every supermarket here, together with loads of blue-yellow decorations. Busy city workers have planted yellow and blue flowers all over the place. Only the weather isn't really playing along; today it's rainy at 17°C. My Swedish isn't good enough to actually understand the traffic report on the radio, but I understand enough to catch a long list of streets separated by stängt stängt stängt stängt (closed). I will certainly stay as far away as possible from the city center tomorrow. If your national TV station doesn't broadcast the event, you can follow the wedding ceremonies live tomorrow via SVT. I think it's great the two get married tomorrow, because that way I was able to grab a slot for the laundry room on Saturday morning. Next week, I'll be on a short trip to Bonn for a workshop on quantum black holes, where I'll give a talk about my paper with Lee on the black hole information loss. I wish you all a lovely weekend :-)

Thursday, June 17, 2010

Science Metrics

Nature has a very interesting News Feature on metrics for scientific achievement, titled "Metrics: Do metrics matter?" The use of scientific metrics is a recurring theme on this blog. I wrote about it most recently in my post Against Measure. The main point of my criticism of science metrics is that they divert researchers' interests. It is what I refer to as a deviation from primary goals to secondary criteria. Here, the primary goal is good research. The secondary criteria are some measures that for whatever reason are thought to be relevant quantifiers for the primary goal.
The problem is that, even if the secondary criteria initially had some relevance, their implementation inevitably affects researchers' own assessment of what success means and leads them to strive for the secondary criteria rather than the primary goal. With that, the secondary criteria become less and less useful, since they are being pursued as ends in themselves. A typical example: the number of publications. In principle this is not a completely useless criterion to assess a researcher's productivity. But it becomes increasingly less useful the more tricks scientists pull to increase their number of publications instead of focusing on the quality of their research. Note that for a deviation of interests to happen, it is not necessary that the measures are actually used! It is only relevant that researchers believe they are used. It's a sociological effect. You can create such beliefs simply by talking a lot about science metrics. The better known a measure is, the more likely people are to believe it has some relevance. It is a well-known fact about human psychology that people pay attention to what they hear repeatedly. Now Nature did a little poll asking readers how much they believe science metrics are used at their institution for various purposes. 150 readers responded; the results are available here. They then contacted scientists in administrative positions at nearly 30 research institutions around the world and asked them what metrics are being used, and how heavily they are relied on. In a nutshell, the administrators claim that metrics are being used much less than scientists believe they are. "The results suggest that there may be a disconnect between the way researchers and administrators see the value of metrics." While this is an interesting suggestion, it is not much more than a suggestion. It is entirely unclear whether the sample of people who replied to the poll had a good overlap with the sample of administrators being asked.
With such a small sample size, the distribution of people in both groups over countries matters significantly. It remained unclear to me from the article whether, in contacting the institutes, they made sure that the representation of countries is the same as that of the poll's participants, and also whether the distribution of research fields is the same. If not, the mismatch between the administration and the researchers might simply show national differences or differences between fields of research. Also, it is conceivable that people who filled out the questionnaire had some concerns about the topic to begin with, while this would not have been the case for the people contacted. It did not become clear to me how the poll was publicized. In any case, given what I said earlier, we should of course appreciate the suggestion of these results. Please do not believe that science metrics matter for your career!

Tuesday, June 15, 2010

Why do people get tattooed?

Last night, I had a weird dream. A white-haired man with a long beard insisted on tattooing my shoulder. I couldn't get him to drop his plans, so he started punching. I asked him what the image would be. "I'm doing a circle," he said. He continued his circle, but when he finished it didn't close. "Now I have to walk around with a stupid non-closing circle!" I complained, and he poured his ink over me. Then I woke up. You're welcome to analyze this dream, but not allowed to use the words "string" and "loop." If you read science blogs frequently you'll probably have come across one or the other posting of a science-related tattoo. (See e.g. here for a nice compilation.) It always leaves me wondering what drives people to do that. It's one of these emerging social and cultural trends that are so complex even the people doing it don't know why they're doing it. It is, from an evolutionary perspective, very interesting what weird behaviors intelligent creatures can develop in large groups.
My attempt to understand humans recently brought me across the paper "Modifying the body: Motivations for getting tattooed and pierced" (Wohlrab, Stahl and Kappeler, Body Image 4 (2007) 87). They start with an interesting historical summary (please see the paper for references):

“[Tattooing and body piercing] have a long history and are well known from various cultures in Asia, Africa, America, and Oceania. There is also evidence for the prevalence of tattoos in Europe, dating back over 5000 years. Although the appearance of tattoos and body piercings varied geographically, they always possessed a very specific meaning for the particular culture. Piercings were often used in initiation rites, assigning their bearer to a certain social or age group, whereas tattoos were utilized to signal religious affiliations, strength or social status. In Europe, the practice of tattooing was predominant among sailors and other working class members from the beginning of the 20th century onwards. Later on, tattoos assigned affiliations to certain groups, such as bikers or inmates. In the 1980s the punk and the gay movement picked up invasive body modification, mainly as a protest against the conservative middle class norms of society. Until the 1990s, body modifications remained a provocative part of various subcultures. In the last decade tattoos and piercings have increased tremendously in popularity, rising not only in numbers but also involving a broader range of social classes.”

Thus, historically, tattoos seem to have predominantly been used to signal affiliation with or sympathy for a group. The paper is basically a literature survey, and the authors identify ten motivations for getting tattooed that have been studied.
These are:

1) Beauty, art and fashion
2) Individuality
3) Personal narrative
4) Physical endurance
5) Group affiliation and commitment
6) Resistance
7) Spirituality and cultural tradition
8) Addiction (to obtaining the tattoo)
9) Sexual motivation (in the case of tattoos: expressing affection or emphasizing one's own sexuality)
10) No specific reason (e.g. under the influence of drugs)

As far as science tattoos are concerned, I think we can forget about the last category. It seems quite unlikely to me that the average guy on the street will get drunk and wake up the next morning with the Wheeler-DeWitt equation on his shoulder. Point 4), I think, we can leave aside as well. I don't think the physical endurance is any higher for scientific motifs. Unless maybe there's a mistake in the equation.

As far as sexual motivations are concerned, it is interesting in this context to draw upon a recent survey conducted in Germany (sample size approximately 2500, as reported in "Machen Tattoos sexy?" forschung SPEZIAL. Das Magazin der Deutschen Forschungsgemeinschaft, 2/07, 22-25). More than 10% of men and more than 8% of women were tattooed. The age range that currently dominates the wedding market (18-36 years) has the largest fraction of tattooed people. Men are more likely to be tattooed on arms and legs, whereas women prefer places that can easily be covered by clothes: back, belly, bottom. Not so surprisingly, men prefer designs with skulls, weapons and such, whereas women prefer flowers and animals. Maybe the most interesting fact though is that while only 8% of women had a tattoo, 56% of the participants with a tattoo had a partner who was also tattooed. So there's clearly some matching going on there. Another study, in which participants were shown images of tattooed people, revealed that both women and men judged people with tattoos to be more "aggressive" and "dominant." Maybe for some, that is a desired effect?
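The "clearly some matching" claim can be made quantitative with a back-of-envelope calculation. The tattoo rates (about 10% of men, 8% of women) and the 56% figure are from the survey quoted above; the random-pairing model is my own simplifying assumption, just to show what the partner statistic would look like if tattoos played no role in partner choice.

```python
# Sketch: expected partner-tattoo rate under tattoo-blind pairing,
# using the survey numbers quoted in the text. The pairing model
# (random heterosexual matching, equal numbers of men and women)
# is an illustrative assumption, not part of the survey.

p_men, p_women = 0.10, 0.08   # fraction tattooed (from the survey)
observed = 0.56               # tattooed respondents whose partner is tattooed

# A tattooed man's partner is tattooed with probability p_women,
# a tattooed woman's with probability p_men. Weight each case by the
# share of men and women among all tattooed people:
expected = (p_men * p_women + p_women * p_men) / (p_men + p_women)

print(f"expected under random matching: {expected:.1%}")  # about 8.9%
print(f"observed in the survey:         {observed:.1%}")
```

So under random matching one would expect roughly 9%, against the observed 56% - a factor of six, which is why the assortative-matching reading seems hard to avoid.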
Needless to say, all that reading didn't really explain why people want to have an equation on their arm. I can relate to the beauty/fashion motivation to some extent, but I suspect that if your fashion statement is Maxwell's equations, you'll get more confused than admiring looks. I suppose the most likely motives are thus personal narrative and showing group affiliation and commitment. Or maybe we're seeing an attempt at resistance to anti-intellectualism? Not to mention that you can upload the photo to your blog and collect cheers. As for myself, I've fleetingly considered getting tattooed once or twice, but my tastes are at best metastable, and whatever the design, I'd probably get fed up with it after a few months; so tattoos are not for me. Anyway, it is sometimes very refreshing to read an article in a journal I had never heard of before, like Body Image. The most amusing part was this sentence from the abstract, right out of the ivory tower: "[A] profound understanding of the underlying motivations behind obtaining tattoos and body piercings nowadays is required." Sure, I mean, unstable financial systems are ruining the lives of millions of people, climate change is about to erode the basis of many economies, posing a threat to global political and social stability, each year about 5 million people still die because they don't have enough to eat, but what's really required is a profound understanding of why people punch needles through their nipples. If you replace "motivations behind" with "structure of" and "obtaining tattoos and body piercings" with your favourite physics term, I'm sure you'll find the same sentence in a significant fraction of arxiv papers...

Saturday, June 12, 2010

Book review: From Eternity to Here by Sean Carroll

By Sean Carroll
Dutton Adult (January 7, 2010)

Most of you will know Sean Carroll, who blogs at Cosmic Variance.
Sean is a Senior Research Associate at Caltech, and his research focuses on cosmology, general relativity and the standard model, as well as extensions thereof. He has written a textbook on General Relativity, and the lecture notes that gave rise to the book are available online. I've met Sean a few times; he's an interesting person and gives great talks. Sean has a special interest in the arrow of time, and that is also the topic of his book "From Eternity to Here." The arrow of time is, in a nutshell, the question why the past is different from the future. I bought the book for three reasons. One is that for many years I've been using the PDF version of his lecture notes as a handy quick reference when on travel, and had a bad conscience for never buying the book. The second is that from reading Sean's blog I know he writes well. The third is that adding a second book to the order made delivery free.

"From Eternity to Here" is a very well written book that communicates a lot of science, both textbook science and contemporary science, while at the same time being amazingly accurate. The biggest part of the book - all but the last chapter - is dedicated to accurately framing the question. Why is it interesting to ask why the past was what it was? What exactly is it that we don't understand? How do we get a grip on the problem? For this, Sean covers first of all the second law of thermodynamics, then special relativity, general relativity, cosmology, quantum mechanics, black hole physics, and finally inflation and the multiverse. In the last chapter, he then discusses possible solutions to the question he has posed and puts forward his own solution as the most plausible one. Along the way he touches on topics like the vacuum energy, structure formation, the AdS/CFT duality and magnetic monopoles. Sean is very careful to distinguish between established science and unconfirmed speculations.
The only glitch is the section on the holographic principle, where he fails to point out that there is no experimental evidence that such a feature of Nature is true in all generality. I am somewhat sick of being misinterpreted on this point, so let me be very clear here. All I am saying is that, absent experimental evidence, scientists should be very careful with what they put forward as a true description of Nature. Theoretical evidence can very easily be biased, simply because a topic that attracts attention may mount one-sided "evidence." This can never replace actual tests of a hypothesis. The holographic principle certainly does not rest on the same basis as ΛCDM or the Schrödinger equation, and I wish its status had been framed more clearly. Anyway, Sean needs the holographic counting of degrees of freedom for the rest of his argument. I was very pleased that Sean's explanations of physical concepts are not as superficial and vague as one frequently finds in popular science books. He does not shy away from phase space, logarithms, or the amplitude of the wave function. The chapter on quantum mechanics, however, somewhat suffers from the overuse of cats and dogs. The book has plenty of footnotes with additional explanations, and offers many references, so that interested readers will easily be able to find the relevant keywords and dig deeper, should they wish to. On several occasions I took a note that Sean had forgotten to point out a specific assumption that entered his argument, or had left out some exceptions. In every single case, these points were later addressed, so I am left with nothing to complain about. I personally don't have a large interest in the topic and don't care very much about the whole discussion. I think the question is ill-posed, and when we have a better understanding of quantum gravity we'll see why. Sean's book didn't succeed in increasing my interest. Nevertheless, it was a pleasure to read.
Sean has a good sense of humor, but doesn't overdo it. The story he tells is also well embedded into its scientific history, and I learned a thing or two here that I hadn't known before. Both the historical and the philosophical aspects however play a secondary role and don't take over the scientific discussion. Altogether, the book is very well balanced and a recommendable read. It has something to offer for anybody who has an interest in modern cosmology and/or the arrow of time. I'd give this book 5 out of 5 stars. From January through April, Sean offered a book club at his blog, each week discussing another chapter. You might find this a useful addition to the book itself.

Wednesday, June 09, 2010

Perimeter Institute is looking for a Scientific IT specialist

Two years ago, I organized a conference on Science in the 21st Century, focusing on topics at the intersection of science, society and information technology. (I wrote about the conference here, a summary is here, and a brief write-up of my own talk is here.) There are three aspects to the changes that the use of information technology is bringing to science. One is improved communication with the public - this blog is an example of such a change. The second is that advances in hard- and software allow us to better understand the process of knowledge discovery and the dynamics of the scientific communities themselves - the Maps of Science are an example of this. The third aspect, and probably the one most interesting for the scientist at work, is the development of new tools that support research and researchers in their everyday work. As I learned the other day, Perimeter Institute is now looking for a person who works at exactly this intersection. The job description reads as follows:

The Perimeter Institute for Theoretical Physics (PI) is looking for a Scientific IT specialist -- a creative individual with experience in both scientific research and information technology (IT).
This is a new, hybrid, research/IT position within the Institute, dedicated to helping PI’s scientific staff make effective use of IT resources. It has two clear missions. First, to directly assist researchers in using known, available IT tools to do their research. Second, to uncover or develop cutting-edge IT resources, introduce and test them with PI researchers, and then share the things we create and discover with the worldwide scientific community. By "tools", we mean almost anything. Coding techniques are an obvious example. Collaboration and communication technologies are another: tools for peer-to-peer interactions (such as skype), virtual whiteboards, video conferencing tools, platforms for running virtual conferences (that can do justice to talks in the mathematical sciences), and novel ways of presenting research results such as archives for recorded seminars, blogs, and wikis. Further examples include tools for helping researchers organize information (e.g., specialized search engines and filtering schemes), and end-user software that facilitates bread-and-butter scientific activities like writing papers collaboratively, preparing presentations, and organizing references. We are seeking a person who brings an independent and ambitious vision that will help define this vision. The job is as yet quite malleable in its scope and duties! We're looking for someone who is inspired by the possibility that new IT tools can improve or perhaps even revolutionize the way that physics research is done, and someone who can take full advantage of a mandate to create and implement that vision. Some Duties and Responsibilities: - Act as a knowledge broker among Researchers. That is, find and test new programs and practices, advertise them, and be prepared to train others in their use. 
- Participate in the creation of a high quality "standard" Researcher IT environment (desktop hardware, software set-up), built from a mix of open source software and popular commercial packages.
- Help with High Performance Computing demands.
- Maintain expert level knowledge in the use of the main packages used by Researchers, including Mathematica, Maple, LaTeX, etc.

For the official job ad, go here. [Via Rob Spekkens]. The deadline for applications is Friday, July 2, 2010. The Albert Einstein Institute in Potsdam meanwhile offers an almost identically sounding position. I've been told PI was first, but their posting is not dated. I very much like this development. My requirements for IT staff these days are however very modest. I am happy when the printer spits out my paper without chewing up some pages or leaving them blank. My biggest wish would be not a virtual whiteboard but an actual whiteboard with a plugin to my computer, so I could use the board for equations and figures during a Skype call. The equations are usually cumbersome but still doable, in the worst case by typing them in LaTeX into the chat interface. But diagrams are a disaster. Drawing with a mouse yields no sensible results, and the drawing pads that I've tried weren't too convincing either, even neglecting the problem of how to incorporate them into the call. On occasion I've thus drawn on a piece of paper and held it up to the camera. This however only works for figures with few details and necessitates plenty of additional explanations. What is the software or hardware you dream of for your research life?

Saturday, June 05, 2010

Diamonds in Earth Science

To clarify the situation, experiments would need to push above 120 Gigapascal and 2500 Kelvin. I [...] started laboratory experiments using diamond-anvil cell, in which samples of mantle-like materials are squeezed to high pressure between a couple of gem-quality natural diamonds (about two tenths of a carat in size) and then heated with a laser.
Above 80 Gigapascal, even diamond—the hardest known material—starts to deform dramatically. To push pressure even higher, one needs to optimize the shape of the diamond anvil's tip so that the diamond will not break. My colleagues and I suffered numerous diamond failures, which cost not only research funds but sometimes our enthusiasm as well. (From The Earth's Missing Ingredient)

But in the end, Kei Hirose and his group succeeded in subjecting a small sample of magnesium silicate to the pressure and temperature that prevail in Earth's lower mantle, about 2700 kilometers below our feet. Planet Earth has an onion-like structure, as has been revealed by the analysis of seismological data: there is a central core consisting mostly of iron, solid in the inner part, molten and liquid in the outer part. On top of this follows the mantle, which is made up of silicates, compounds of silicon oxides with magnesium and other metals. The solid crust on which we live is just a thin outer skin. The lower part of the mantle down to the iron core was long thought to consist of MgSiO3 in a crystal structure called perovskite. However, seismological data also revealed that the part of the mantle just above the CMB (in earth science, that's the core-mantle boundary, not the cosmic microwave background...) is somehow different from the rest of the mantle. This lower-mantle layer was dubbed D″ (D-double-prime, shown in the light shade in the figure), and it was unclear whether the difference was one of chemical composition or of crystal structure. As Kei Hirose describes in the June 2010 issue of Scientific American, his group started a series of experiments to study the properties of magnesium silicate at pressures up to 130 Gigapascal (for comparison, the water pressure at an ocean depth of 1 kilometer is 0.01 GPa) and temperatures exceeding 2500 Kelvin ‒ the conditions expected for the D″ layer of the lower mantle.
To achieve such extreme conditions, one squeezes a tiny piece of magnesium silicate between the tips of two diamonds and heats the sample with a laser. The press used in such experiments is called a "laser-heated diamond anvil cell". The figure shows the core of a diamond anvil cell: the sample to be probed is fixed by a gasket between the tips of two diamonds. The diameter of the tips is about 0.1 millimeter, so applying a moderate force results in huge pressure. Diamonds are used because of their hardness, but they have the additional bonus of being transparent. Hence, the sample can be observed, irradiated by a laser for heating, or x-rayed for structure determination. The diamonds are fixed in cylindrical steel mounts, but creating huge pressure does not require huge equipment: the whole device fits in a hand! (Photo from a SPring-8 press release about Kei Hirose's research.) Actually, the force on the diamond tips is applied in such a device by tightening screws by hand. In the experiment, the cell was mounted in a brilliant, thin beam of x-rays created by the SPring-8 synchrotron facility in Japan. This makes it possible to monitor the crystal structure of the sample by observing the pattern of diffraction rings. It was found that under the conditions of the D″ layer of the lower mantle, magnesium silicate forms a crystal structure previously unknown for silicates, which was called "post-perovskite". The formation of post-perovskite in the lower mantle is a structural phase transition of the magnesium silicate, and this transition can explain the existence of a separate D″ layer and many of its peculiar features. It also facilitates heat exchange between core and mantle, which seems to have quite important implications for earth science.
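The "moderate force, huge pressure" point is easy to check with the numbers given above. The tip diameter (about 0.1 mm) and the target pressure (130 GPa) are from the text; treating the tip as a flat circular face and ignoring the gasket is my own simplification.

```python
import math

# Sketch: force needed on a diamond-anvil tip to reach lower-mantle
# pressure. Tip diameter and target pressure are from the text; the
# flat-circular-tip geometry is an illustrative simplification.

tip_diameter = 0.1e-3                      # m (about 0.1 millimeter)
area = math.pi * (tip_diameter / 2) ** 2   # ~7.9e-9 m^2

target_pressure = 130e9                    # Pa, conditions of the D'' layer
force = target_pressure * area             # N, since pressure = force / area

print(f"force on the tips: {force:.0f} N "
      f"(the weight of roughly {force / 9.81:.0f} kg)")

# For comparison, the ocean-depth figure quoted earlier:
rho, g, depth = 1000.0, 9.81, 1000.0       # water density, gravity, 1 km
print(f"ocean at 1 km depth: {rho * g * depth / 1e9:.3f} GPa")
```

About a kilonewton on a tenth-of-a-millimeter tip: that is how hand-tightened screws reach pressures some ten thousand times those at the bottom of a kilometer of ocean.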
And here is the heart of the experiment (from the "High pressure and high temperature experiments" site of the Maruyama & Hirose Laboratory at the Department of Earth and Planetary Sciences, Tokyo Institute of Technology) ‒ a diamond used in a diamond anvil pressure cell. High-quality diamonds of this size cost about US $500 each.

Thursday, June 03, 2010

Impressions from the PI workshop on the Laws of Nature

As you know, two weeks ago I was at Perimeter Institute for the workshop on the Laws of Nature: Their Nature and Knowability. It was a very interesting event, bringing together physicists with philosophers, a mix that isn't always easy to handle.

People (them)

On the list of participants, you'll find some well known names. Besides the usual suspects Julian Barbour and Lee Smolin, Paul Davies was there (though only for the first day), as was Anthony Aguirre (the event was sponsored by FQXi), and of course several people from PI and the University of Waterloo. In my previous post, I already wrote about Marcelo Gleiser's talk. Marcelo is from Brazil, and he is apparently well known there for his popular science books (which was confirmed by Christine in an earlier post). I had frankly never heard of him before. I talked to him later over dinner, and he told me he writes for a group blog called 13.7 together with, among others, Stuart Kauffman, who is also well known for his popular science books. (13.7 is the estimated age of the universe in billions of years. What will they do if that number gets updated?) Another interesting name on the list of participants is Roberto Unger, a well-known Brazilian politician who is also a professor at Harvard Law School and the author of multiple books on social and political theory. He apparently has an interest not only in the laws of societies, but also in the laws of Nature*. And finally let me mention that George Musser was also at the workshop.
George writes for Scientific American and is the author of The Complete Idiot's Guide to String Theory. He turned out to be a very nice guy with the journalist's theme "I want to know more about that."

Talks (theirs)

Now let me say a word about the talks. First, and most important, all the talks were recorded and are available on PIRSA here. The talks on the first day were heavily philosophical. I will admit that I often have problems making sense of that. Not because I don't have an interest in philosophy, but because one frequently ends up arguing about the meaning of words, which is, at the bottom of things, a consequence of lacking definitions and thus a waste of time. Yes, my apologies, I'm, duh, a theoretical physicist with some semesters of maths on my CV. If I don't see a definition and an equation, I get lost easily. In some cases it seems the philosophers imply some specific meaning that they just never bother to explain. But in other cases they'll start arguing about it themselves, and that's when I usually zone out, wondering what's the point in arguing if they don't know what they're arguing about anyway. The most interesting event on the first day was arguably Lee Smolin's and Roberto Unger's shared talk "Laws and Time in Cosmology". Let me add that I've heard Smolin talk about the "reality of time" several times and I still can't make sense of it. The problem I have is simply that I don't know what he's talking about. This recent talk didn't change anything about my confusion, but if you haven't heard it before, you might find it inspiring. Unger's talk is very impressive on the rhetorical side. Unfortunately, it made even less sense to me than Lee's talk. For all I can see, there's no tension between a block-universe and a notion of simultaneity, nor between a block-universe and causality, as I think I heard Unger claim (thus my question in the end). Point is, I don't understand the problem they're attempting to address to begin with.
I see no problem. As Barbra Streisand already told us, "Life is a moment in space" and "In love there is no measure of time." Consequently, a universe where time is real must be loveless. I don't like that idea. On that note, let me recommend Julian Barbour's talk "A case for geometry". Julian is a charming British guy and he has his own theory of a lovely, timeless universe. I don't buy a word of what he says, but his talk is very accessible and fun to listen to. It makes your head spin what he's saying; just try it out, it's very intriguing. I am curious to see how these ideas will develop; it seems to me they might be on the brink of actually making predictions. (A somewhat more detailed explanation of his ideas is here, audio becomes audible at 3:30 min.)

On the second day, we had several talks discussing concrete proposals for how one could think of the laws of Nature off the trodden path. You probably won't be surprised to hear that one of the suggestions is that of "Law without Law: Entropic Dynamics" by Ariel Caticha. It is not directly related to Erik Verlinde's entropic gravity, but it certainly plays in the same corner of the room: exploiting the possibility that fundamentally all our dynamics is simply a consequence of the increase of entropy. Ariel's talk however isn't really recommendable; it sits on a funny edge between too many and too few details. Another approach is Kevin Knuth's, who put forward in his talk "The Role of Order in Natural Law" the idea that at the bottom of it all there's order - in a well-defined mathematical sense. I can't avoid the impression though that even if this worked out to reproduce the standard model, it would merely be a reformulation. Kevin's talk was basically a summary of this recent paper. And Philip Goyal gave a very nice talk on "The common symmetries underlying quantum theory, probability theory, and number systems."
I have a lot of sympathy for the attempt to reconstruct quantum theory; it's just that I don't understand why literally all the quantum foundations guys get hung up on the measurement process in quantum mechanics. As far as I'm concerned, quantum field theory is the thing, and I'm still waiting for somebody to reconstruct the non-commutativity of annihilation and creation operators. Finally, let me mention Kevin Kelly's talk "How does simplicity help science find true laws?" Kelly is a philosopher from Carnegie Mellon, and in his talk he explored whether it is possible to put Ockham's Razor on a rational basis. Unfortunately, while the theme could in principle have been very interesting, his talk was not particularly accessible. He assumed way too much knowledge from the audience. At least, I get very easily frustrated when technical terms are dropped and procedures are mentioned without being explained, since it's not a field I work in. In any case, I'll spare you the time of watching the full thing and just mention an interesting remark that came up in the discussion. Apparently there have been efforts to create computer software that could simulate a "scientist," in this case for the example of trying to extract a theory from data on the motion of the planets. At least so far, such attempts have failed (if anybody knows a reference, it would be highly appreciated). So it seems that, for the time being, scientists will not be replaced by computers.

At the end of the last day we had a discussion session, moderated by Steven Weinstein, wrapping up some of the topics that came up during the previous days, and some others. One of them was the question about the power of mathematics, and whether there are limits to what humans can grasp (a theme we have previously discussed here).
For a fun anecdote making the point well, watch Steven at 1:13:50 min ("I remember distinctively being in a graduate quantum mechanics class by Bob Wald...") Of course Tegmark's mathematical universe made an appearance as well, another topic we have previously discussed on this blog. As far as I'm concerned, declaring that all is mathematics may be some sort of unification of the laws of Nature, alright, but it's ultimately a completely useless unification. And that brings me to...

Thoughts (mine)

On several occasions at the workshop, I felt like the stereotypical physicist among philosophers, and it took me a while to figure out what I found lacking at this workshop. You could say I'm a very pragmatic person. There's even an ism for that! If you talk about reality and truth, I don't know what you mean, and I actually don't care. These are just words. I'll start caring if you tell me what they're good for. If you want to reformulate the laws of physics, fine, go ahead. But if you want me to spend time on it, you'll have to tell me what the advantage is. If there are two theories and they make the same predictions, that doesn't cause me headaches. As far as I'm concerned, if they make the same predictions, they're the same theory. What matters in the end about a law or a theory or a model is not whether it's philosophically appealing, and not even whether there's a rational process by which it's been selected (and by the way, what does "rational" mean anyway?), but simply whether it's useful. And usefulness is ultimately a notion deeply connected to human societies and values. For that reason I think that to understand the scientific method and its success, one inevitably needs to take into account the dynamics of the communities and the embedding of scientific knowledge into our societies. (It should be clear that by usefulness I don't necessarily mean technical applications, as I have recently expressed in this post.)
Leaving aside that I found this aspect entirely missing from the discussions about the process of science itself and its possible limitations, the workshop has given me a lot to think about. Having said that the pragmatist in me searches for the use in all that enters my ears, I nevertheless have enough imagination to see that some of the themes discussed at the workshop may become central to shaping our thinking about the laws of Nature in the future, and thus eventually prove their usefulness. It was a very stimulating meeting, and the approaches that were presented are all as bold as they are courageous. It will be interesting to follow the progress of these thoughts.

*I once made an attempt to read one of Unger's books, What should the left propose? I had to look up every second word in a dictionary, and even that didn't always help. When I had, after an hour or so, roughly deciphered the meaning of a page, it seemed to me one could have said the same in one simple sentence, avoiding words of three or more syllables. I gave up on page 20. Sorry for being so incredibly unintellectual, but to me language is first and foremost a means of communication. If you want to be heard, you better use a code that the receiver can decipher. Friedrich Engels, for example, was an excellent writer...

Tuesday, June 01, 2010

Update on the ESQG 2010

• What to sacrifice?
• The Future of Particle Physics.
• Experiments and Thought Experiments
Multielectron effects in strong field processes in molecules

Title: Multielectron effects in strong field processes in molecules
Publication Type: Thesis
Year of Publication: 2016
Authors: Xia, Y
Academic Department: Department of Physics
Number of Pages: 175
Date Published: 04-2016
University: University of Colorado

Laser technology has experienced a rapid evolution in available intensities, frequencies, and pulse durations over the last three decades. Many new laser-induced phenomena in atoms have been discovered, such as multiphoton ionization, above-threshold ionization, high-order harmonic generation, etc. For the interaction with atoms, usually only one electron in the outermost shell is assumed to be active (called the single-active-electron approximation), while all other electrons are considered to remain frozen in their initial states. Due to the extra degrees of freedom (vibration and rotation) and the more complex structures, the interaction of molecules with intense laser pulses reveals many new features. Recent experiments have indicated that electrons from inner valence orbitals of molecules can make significant contributions to ionization and high harmonic generation. Theoretical analysis of these processes in molecules faces the challenge of extending previous theories developed for the atomic case by including the multielectron character of the molecular target. In this thesis we systematically investigate multielectron effects in the interaction of molecules with intense laser light. To this end, we apply time-dependent density-functional theory to solve the multielectron Schrödinger equation and analyze highly nonlinear processes such as high harmonic generation, laser-induced ionization and nonadiabatic electron localization. Based on the results of our numerical simulations, we predict a new feature in the harmonic spectra of molecules, namely the occurrence of fractional harmonics in the form of Mollow sidebands.
Such additional peaks in the spectra appear due to a field-induced resonant coupling of an inner valence orbital with the outermost orbital in a molecule. Furthermore, we show that the theoretical explanation of recent experimental data for the ellipticity of high harmonics in N2 and CO2 requires the systematic consideration of all inner valence shells as well as the proper alignment distribution in the experiment. We also show that the coupling of molecular orbitals in the field can lead to an enhancement of (inner-shell) ionization, potentially leading to a population inversion in the ion, as well as to nonadiabatic electron dynamics, where the electron can be trapped at one side of the molecule over several field cycles. Finally, we present the development of a new intense-field theory based on the Floquet theorem with complex Gaussian basis sets and show results of first applications to ionization of simple systems.
The Biotic Logic of Quantum Processes and Quantum Computation
Hector Sabelli (Chicago Center for Creative Development, USA) and Louis H. Kauffman (University of Illinois at Chicago, USA)
Copyright © 2014 | Pages: 69
DOI: 10.4018/978-1-4666-5125-8.ch035
This chapter explores how the logic of physical and biological processes may be employed in the design and programming of computers. Quantum processes do not follow Boolean logic; the development of quantum computers requires the formulation of an appropriate logic. Whereas in Boolean logic entities are static, opposites exclude each other, and change is not creative, natural processes involve action, opposition, and creativity. Creativity is detected by changes in pattern, diversification, and novelty. Causally generated creative patterns (Bios) are found in numerous processes at all levels of organization: recordings of presumed gravitational waves, the distribution of galaxies and quasars, population dynamics, cardiac rhythms, economic data, and music. Quantum processes show biotic patterns. Bios is generated by mathematical equations that involve action, bipolar opposition, and continuous transformation. These features are present in physical and human processes. They are abstracted by lattices, algebras, and topology, the three mother structures of mathematics, which may then be considered as a dynamic logic. Quantum processes as described by the Schrödinger equation involve action, coexisting and interacting opposites, and the causal creation of novelty, diversity, complexity, and low entropy. In addition to 'economic' (non-entropy-producing) reversible gates (the current goal in the design of quantum gates), irreversible, entropy-generating gates may contribute to quantum computation, because quantum measurements, as well as creation and decay, are irreversible processes.
Chapter Preview
Quantum gates and circuits may provide an opportunity to incorporate the pattern of quantum processes into the logical structure of the computer, and thereby employ rules for reasoning that take into account the pattern of quantum processes as well as that of many other natural processes. This chapter explores the possibility of a logical design of computers that matches the logic inherent in natural processes. There are natural creative processes that are evident in biological organisms and also found in physical processes, including quantum ones. These biotic processes could become the basis for the working of a machine, and we speculate on the possibility of harnessing them for computation. Such a logic would be natural and empirically based. In most mathematical systems, the axioms we adopt are carefully abstracted from a certain aspect of experience. Quantum processes do not follow Boolean logic (Birkhoff and von Neumann, 1936), and therefore the development of quantum computation involves the development of quantum logic. By adapting our humanly conceived computer hardware and software to the actual logic of nature, one could hope to model mathematical, natural, and mental processes more directly and accurately. Of course we do not know what the "actual logic of nature" is, but we have a proposal based on physical and biological considerations. The founders of science and philosophy regarded biological processes as useful models for the cosmos; Aristotle noted that the heavens are high and far off, and of celestial things the knowledge that our senses give is scanty and dim, while living creatures are at our door, and we may gain ample and certain knowledge of each and all (quoted by Prigogine, 1980). Cybernetics, Chaos and Complexity were largely inspired by biology, and Bios was found in human physiology before it was demonstrated in physics.
The objective of this chapter is to sketch the general principles of a system of logic that incorporates the biotic complexity of quantum processes, in the hope that it will be useful in quantum computation. We focus on quantum processes because they are particularly relevant to the development of quantum computers. This will allow the full use of quantum processes for computation, and employing quantum processes directly could be extremely helpful to understanding them. Processing, transmitting, and storing information encoded in systems with quantum properties will provide practical advantages, as illustrated by the factoring of large numbers (Shor, 1997). A number of physical systems for quantum computation hardware (Ladd et al., 2010) and software (Nielsen & Chuang, 2000) are being developed. Here we address a complementary task: to formulate the logic of quantum processes. A key new element that we bring into physics, logic, and computation is the existence of causal creativity at all levels of organization, including fundamental physical and biological processes (Sabelli, 2005). As a result, quantum processes as well as cosmological ones display life-like (biotic) patterns (Sabelli and Kovacevic, 2003, 2006; Thomas et al., 2006; Sabelli et al., forthcoming), as illustrated in Figure 1 for the wave function of an electron confined to a well. The same pattern is observed for purported gravitational waves presumably originating in the Big Bang, and in processes at multiple levels of organization (see below). The Schrödinger equation was originally intended to portray the movement of electrons; it was later interpreted by Born as giving the probability of finding the electron in a given place, but this 'probability' changes in time, hence the Schrödinger equation portrays a process: the temporal change of the distribution of the electron.
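The chapter's claim that simple causal recursions combining action and bipolar opposition generate biotic series can be illustrated with the process equation discussed in the bios literature, A(t+1) = A(t) + g·sin(A(t)). This is a sketch under assumptions: the gain g and the run length below are illustrative choices, not values from the chapter.

```python
import numpy as np

# Minimal sketch of a bios-generating recursion of the kind discussed
# in the bios literature: A(t+1) = A(t) + g*sin(A(t)).
# The gain g and the number of steps are illustrative choices.
g, steps = 4.8, 50_000
A = np.empty(steps)
A[0] = 0.5
for t in range(steps - 1):
    A[t + 1] = A[t] + g * np.sin(A[t])

# Unlike a chaotic attractor, which stays bounded, the biotic series
# diversifies: its running range keeps expanding.
early_range = A[:5_000].max() - A[:5_000].min()
late_range = A.max() - A.min()
print(f"range after 5k steps: {early_range:.1f}, after 50k steps: {late_range:.1f}")
```

The diagnostic here (an expanding range rather than a bounded attractor) is one of the novelty/diversification measures the authors describe for detecting Bios.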
Interaction picture
From Wikipedia, the free encyclopedia
[Figure: Paul Dirac, together with Werner Heisenberg and Erwin Schrödinger. The Dirac picture may be thought of as in between the Schrödinger picture and the Heisenberg picture.]
In quantum mechanics, the interaction picture (also known as the Dirac picture after Paul Dirac) is an intermediate representation between the Schrödinger picture and the Heisenberg picture. Whereas in the other two pictures either the state vector or the operators carry time dependence, in the interaction picture both carry part of the time dependence of observables.[1] The interaction picture is useful in dealing with changes to the wave functions and observables due to interactions. Most field-theoretical calculations[2] use the interaction representation because they construct the solution to the many-body Schrödinger equation as the solution to the free-particle problem plus some unknown interaction parts. Equations that include operators acting at different times, which hold in the interaction picture, don't necessarily hold in the Schrödinger or the Heisenberg picture. This is because time-dependent unitary transformations relate operators in one picture to the analogous operators in the others. Operators and state vectors in the interaction picture are related by a change of basis (unitary transformation) to those same operators and state vectors in the Schrödinger picture. To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts:
HS = H0,S + H1,S.
Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that H0,S is well understood and exactly solvable, while H1,S contains some harder-to-analyze perturbation to this system.
If the Hamiltonian has explicit time-dependence (for example, if the quantum system interacts with an applied external electric field that varies in time), it will usually be advantageous to include the explicitly time-dependent terms with H1,S, leaving H0,S time-independent. We proceed assuming that this is the case. If there is a context in which it makes sense to have H0,S be time-dependent, then one can proceed by replacing the exponentials e^(±iH0,S t/ħ) by the corresponding time-evolution operator U0(t) in the definitions below.
State vectors
A state vector in the interaction picture is defined as[3]
|ψI(t)〉 = e^(iH0,S t/ħ) |ψS(t)〉,
where |ψS(t)〉 is the state vector in the Schrödinger picture. An operator in the interaction picture is defined as
AI(t) = e^(iH0,S t/ħ) AS(t) e^(−iH0,S t/ħ).
Note that AS(t) will typically not depend on t and can be rewritten as just AS. It only depends on t if the operator has "explicit time dependence", for example, due to its dependence on an applied external time-varying electric field.
Hamiltonian operator
For the operator H0 itself, the interaction picture and Schrödinger picture coincide:
H0,I(t) = e^(iH0,S t/ħ) H0,S e^(−iH0,S t/ħ) = H0,S.
This is easily seen through the fact that operators commute with differentiable functions of themselves. This particular operator then can be called H0 without ambiguity. For the perturbation Hamiltonian H1,I, however,
H1,I(t) = e^(iH0,S t/ħ) H1,S e^(−iH0,S t/ħ),
where the interaction-picture perturbation Hamiltonian becomes a time-dependent Hamiltonian, unless [H1,S, H0,S] = 0. It is possible to obtain the interaction picture for a time-dependent Hamiltonian H0,S(t) as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by H0,S(t), or more explicitly with a time-ordered exponential integral.
Density matrix
The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let ρI and ρS be the density matrices in the interaction picture and the Schrödinger picture respectively.
If there is probability pn to be in the physical state |ψn〉, then
ρI(t) = Σn pn |ψn,I(t)〉〈ψn,I(t)| = e^(iH0,S t/ħ) ρS(t) e^(−iH0,S t/ħ).
Time-evolution equations in the interaction picture
Time-evolution of states
Transforming the Schrödinger equation into the interaction picture gives
iħ (d/dt)|ψI(t)〉 = H1,I(t) |ψI(t)〉,
which states that in the interaction picture, a quantum state is evolved by the interaction part of the Hamiltonian as expressed in the interaction picture.[4]
Time-evolution of operators
If the operator AS is time-independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for AI(t) is given by
iħ (d/dt)AI(t) = [AI(t), H0].
In the interaction picture the operators evolve in time like the operators in the Heisenberg picture with the Hamiltonian H' = H0.
Time-evolution of the density matrix
The evolution of the density matrix in the interaction picture is
iħ (d/dt)ρI(t) = [H1,I(t), ρI(t)],
consistent with the Schrödinger equation in the interaction picture.
Expectation values
For a general operator A, the expectation value in the interaction picture is given by
〈A(t)〉 = 〈ψI(t)| AI(t) |ψI(t)〉 = 〈ψS(t)| AS |ψS(t)〉.
Using the density-matrix expression for the expectation value, we get
〈A(t)〉 = Tr( ρI(t) AI(t) ).
Use of interaction picture
The purpose of the interaction picture is to shunt all the time dependence due to H0 onto the operators, thus allowing them to evolve freely, and leaving only H1,I to control the time-evolution of the state vectors. The interaction picture is convenient when considering the effect of a small interaction term, H1,S, being added to the Hamiltonian of a solved system, H0,S.
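The picture-independence of expectation values can be checked numerically. The sketch below uses a hypothetical two-level splitting H = H0 + H1 (all matrices and the time are illustrative choices, with ħ = 1) and verifies that the Schrödinger-picture and interaction-picture expectation values coincide.

```python
import numpy as np

def U(H, t):
    """exp(-i H t) for a Hermitian matrix H, via its eigendecomposition (hbar = 1)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Hypothetical splitting H = H0 + H1 of a two-level Hamiltonian
# (all numbers are illustrative, not from the article).
H0 = np.diag([0.0, 1.0])
H1 = 0.2 * np.array([[0.0, 1.0], [1.0, 0.0]])
A_S = np.array([[1.0, 0.5], [0.5, -1.0]])      # some Hermitian observable
psi0 = np.array([1.0, 0.0], dtype=complex)
t = 2.7

# Schrödinger picture: the state evolves under the full H.
psi_S = U(H0 + H1, t) @ psi0
exp_S = np.real(psi_S.conj() @ A_S @ psi_S)

# Interaction picture: |psi_I> = exp(+i H0 t) |psi_S>,
# A_I(t) = exp(+i H0 t) A_S exp(-i H0 t).
psi_I = U(H0, -t) @ psi_S
A_I = U(H0, -t) @ A_S @ U(H0, t)
exp_I = np.real(psi_I.conj() @ A_I @ psi_I)

print(exp_S, exp_I)  # the two pictures give the same expectation value
```

The agreement is exact up to floating-point roundoff, since the two pictures differ only by the unitary change of basis e^(iH0 t).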
By utilizing the interaction picture, one can use time-dependent perturbation theory to find the effect of H1,I,[5]:355ff e.g., in the derivation of Fermi's golden rule,[5]:359–363 or the Dyson series[5]:355–357 in quantum field theory: in 1947, Tomonaga and Schwinger appreciated that covariant perturbation theory could be formulated elegantly in the interaction picture, since field operators can evolve in time as free fields, even in the presence of interactions, now treated perturbatively in such a Dyson series.
Summary comparison of evolution in all pictures
Evolution of:    Heisenberg picture   Interaction picture   Schrödinger picture
Ket state        constant             evolves under H1,I    evolves under H
Observable       evolves under H      evolves under H0      constant
Density matrix   constant             evolves under H1,I    evolves under H
1. ^ Albert Messiah (1966). Quantum Mechanics, North Holland, John Wiley & Sons. ISBN 0486409244; J. J. Sakurai (1994). Modern Quantum Mechanics, Addison-Wesley. ISBN 9780201539295.
2. ^ J. W. Negele, H. Orland (1988). Quantum Many-particle Systems. ISBN 0738200522.
3. ^ The Interaction Picture, lecture notes from New York University.
4. ^ Quantum Field Theory for the Gifted Amateur, Chapter 18 - for those who saw this being called the Schwinger-Tomonaga equation: this is not the Schwinger-Tomonaga equation. That is a generalization of the Schrödinger equation to arbitrary time-like foliations of spacetime.
5. ^ a b c Sakurai, J. J.; Napolitano, Jim (2010). Modern Quantum Mechanics (2nd ed.), Addison-Wesley. ISBN 978-0805382914
• Townsend, John S. (2000). A Modern Approach to Quantum Mechanics, 2nd ed. Sausalito, California: University Science Books. ISBN 1-891389-13-0.
Mathematical Physics 1701 Submissions
[9] viXra:1701.0679 [pdf] submitted on 2017-01-30 21:21:09
Metamorphic Topological Schemes
Authors: Miguel A. Sanchez-Rey
Comments: 2 Pages.
Establishes topological schemes in metamorphic space as the A-scheme and B-scheme.
Category: Mathematical Physics
[8] viXra:1701.0653 [pdf] submitted on 2017-01-28 10:44:20
Advance on Electron Deep Orbits of the Hydrogen Atom
Authors: J.L. Paillet, A. Meulenberg
Comments: 12 Pages.
In previous works, we discussed arguments for and against the deep orbits, as exemplified in published solutions. We considered the works of Maly and Va'vra on the topic, the most complete solution available and one showing an infinite family of EDO solutions. In particular, we analyzed in depth the second of these papers, in which they consider a finite nucleus and look for solutions with a Coulomb potential modified inside the nucleus. In the present paper, we quickly recall our analysis, verification, and extension of their results. Moreover, we answer a recent criticism that the EDOs would represent negative-energy states and therefore would not qualify as an answer to the questions posed by Cold Fusion results. We can prove, by means of a simple algebraic argument based on the solution process, that at the transition region the energies of the EDOs are positive. Next, we examine more deeply the essential role of Special Relativity as the source of the EDOs, which we discussed in previous papers. The central topic of our present study, however, is an initial analysis of the magnetic interactions near the nucleus, with the aim of answering important physical questions: do the EDOs satisfy the Heisenberg Uncertainty Relation (HUR)? Are the orbits stable? To this end, we examine some works related to the Vigier-Barut Model, with potentials including magnetic coupling. We also carried out approximate computations to evaluate the strength of these interactions and the possibility of their answering some of our questions.
As a first result, we can expect the HUR to be respected by EDOs, due to the high energies of the magnetic interactions near the nucleus. Present computations for stability do not yet give a definite result; we need further studies and tools based on QED to handle the complexity of the near-nuclear region. For the creation of EDOs, we outline a possibility based on magnetic coupling.
Category: Mathematical Physics
[7] viXra:1701.0651 [pdf] submitted on 2017-01-28 08:06:57
Double Conformal Space-Time Algebra (ICNPAA 2016)
Authors: Robert B. Easter, Eckhard Hitzer
Comments: 10 pages. In proceedings: S. Sivasundaram (ed.), International Conference in Nonlinear Problems in Aviation and Aerospace ICNPAA 2016, AIP Conf. Proc., Vol. 1798, 020066 (2017); doi: 10.1063/1.4972658. 4 color figures.
The Double Conformal Space-Time Algebra (DCSTA) is a high-dimensional 12D Geometric Algebra G(4,8) that extends the concepts introduced with the Double Conformal / Darboux Cyclide Geometric Algebra (DCGA) G(8,2) with entities for Darboux cyclides (incl. parabolic and Dupin cyclides, general quadrics, and ring torus) in spacetime with a new boost operator. The base algebra in which spacetime geometry is modeled is the Space-Time Algebra (STA) G(1,3). Two Conformal Space-Time subalgebras (CSTA) G(2,4) provide spacetime entities for points, flats (incl. worldlines), and hyperbolics, and a complete set of versors for their spacetime transformations that includes rotation, translation, isotropic dilation, hyperbolic rotation (boost), planar reflection, and (pseudo)spherical inversion in rounds or hyperbolics. The DCSTA G(4,8) is a doubling product of two G(2,4) CSTA subalgebras that inherits doubled CSTA entities and versors from CSTA and adds new bivector entities for (pseudo)quadrics and Darboux (pseudo)cyclides in spacetime that are also transformed by the doubled versors.
The "pseudo" surface entities are spacetime hyperbolics or other surface entities using the time axis as a pseudospatial dimension. The (pseudo)cyclides are the inversions of (pseudo)quadrics in rounds or hyperbolics. An operation for the directed non-uniform scaling (anisotropic dilation) of the bivector general quadric entities is defined using the boost operator and a spatial projection. DCSTA allows general quadric surfaces to be transformed in spacetime by the same complete set of doubled CSTA versor (i.e., DCSTA versor) operations that are also valid on the doubled CSTA point entity (i.e., DCSTA point) and the other doubled CSTA entities. The new DCSTA bivector entities are formed by extracting values from the DCSTA point entity using specifically defined inner product extraction operators. Quadric surface entities can be boosted into moving surfaces with constant velocities that display the length contraction effect of special relativity. DCSTA is an algebra for computing with quadrics and their cyclide inversions in spacetime. For applications or testing, DCSTA G(4,8) can be computed using various software packages, such as Gaalop, the Clifford Multivector Toolbox (for MATLAB), or the symbolic computer algebra system SymPy with the GAlgebra module.
Category: Mathematical Physics
[6] viXra:1701.0540 [pdf] submitted on 2017-01-18 19:08:15
Solving Partial Differential Equations by Self-Generated Stochasticity
Authors: Michail Zak
Comments: 7 Pages.
A new physical principle for the simulation of PDEs is introduced. It is based upon replacing the PDE to be solved by a system of ODEs for which the PDE represents the corresponding Liouville equation. The proposed approach has a polynomial (rather than exponential) algorithmic complexity, and it is applicable to nonlinear parabolic, hyperbolic, and elliptic PDEs.
Category: Mathematical Physics
[5] viXra:1701.0533 [pdf] submitted on 2017-01-18 05:02:17
A Note on the Gravitational Equations Analogous to Maxwell's Electromagnetic Equations
Authors: J. Dunning-Davies, J. P. Dunning-Davies
Comments: 7 Pages.
Ever since Oliver Heaviside's suggestion of the possible existence of a set of equations, analogous to Maxwell's equations for the electromagnetic field, to describe the gravitational field, others have considered and built on the original notion. However, if such equations do exist and really are analogous to Maxwell's electromagnetic equations, new problems could arise related to presently accepted notions concerning special relativity. This note, as well as offering a translation of a highly relevant paper by Carstoiu, addresses these concerns in the same manner as similar concerns regarding Maxwell's equations were addressed.
Category: Mathematical Physics
[4] viXra:1701.0523 [pdf] replaced on 2017-04-26 09:09:52
Draft Introduction to Abstract Kinematics
Authors: Grushka Ya.I.
Comments: 208 Pages. Mathematics Subject Classification: 03E75; 70A05; 83A05; 47B99. DOI: 10.13140/RG.2.2.24968.62720
This work lays the foundations of the theory of kinematic changeable sets ("abstract kinematics"). The theory of kinematic changeable sets is based on the theory of changeable sets. From an intuitive point of view, changeable sets are sets of objects which, unlike elements of ordinary (static) sets, may be in the process of continuous transformation, and which may change properties depending on the point of view on them (that is, depending on the reference frame). From a philosophical and imaginative point of view, changeable sets may look like "worlds" in which evolution obeys arbitrary laws. Kinematic changeable sets are mathematical objects consisting of changeable sets equipped with different geometrical or topological structures (namely metric, topological, linear, Banach, Hilbert, and other spaces).
In the author's opinion, the theories of changeable and kinematic changeable sets may, as they develop and improve, become tools for solving the sixth Hilbert problem, at least for the physics of the macrocosm. Investigations in this direction may be of interest for astrophysics, because of the hypothesis that, on the large scale of the Universe, physical laws (in particular, the laws of kinematics) may differ from those acting in the neighborhood of our Solar System. These investigations may also be applied to the construction of mathematical foundations of tachyon kinematics. We believe that the theories of changeable and kinematic changeable sets may be of interest not only for theoretical physics but also for other fields of science, as a new mathematical apparatus for describing the evolution of complex systems.
Category: Mathematical Physics
[3] viXra:1701.0309 [pdf] replaced on 2017-02-04 13:13:24
Inversions And Invariants Of Space And Time
Authors: Hans Detlef Hüttenbach
Comments: 5 Pages.
This paper is on the mathematical structure of space, time, and gravity. It is shown that electrodynamics is neither charge-inversion invariant nor time-inversion invariant.
Category: Mathematical Physics
[2] viXra:1701.0299 [pdf] replaced on 2017-01-15 11:21:14
A Child's Guide to Spinors
Authors: William O. Straub
Comments: 13 Pages. Finalized, with typos fixed in Equations (6.2.2) and (6.3.2)
A very elementary overview of the spinor concept, intended as a guide for undergraduates.
Category: Mathematical Physics
[1] viXra:1701.0166 [pdf] submitted on 2017-01-03 13:20:03
Prolate Spheroidal Wave Function as Exact Solution of the Schrödinger Equation
Authors: J. Akande, D. K. K. Adjaï, L. H. Koudahoun, Y. J. F. Kpomahou, M. D. Monsia
Comments: 6 pages
In quantum mechanics, the wave function and energy are required for the complete characterization of fundamental properties of a physical system subject to a potential energy.
This work proves the existence of a Schrödinger equation with position-dependent mass having the prolate spheroidal wave function as an exact solution, resulting from a classical quadratic Liénard-type oscillator equation. This fact may allow the extension of the current one-dimensional model to three dimensions and increase the understanding of the analytical features of quantum systems.
Category: Mathematical Physics
Wednesday, 2 September 2015
Finite Element Quantum Mechanics 5: 1d Model in Spherical Symmetry
The new Schrödinger equation I am studying in this sequence of posts takes the following form, in spherical coordinates with radial coordinate $r\ge 0$ in the case of spherical symmetry, for an atom with kernel of charge $Z$ at $r=0$ with $N\le Z$ electrons of unit charge distributed in a sequence of non-overlapping spherical shells $S_1,...,S_M$ separated by spherical surfaces of radii $0=r_0<r_1<r_2<...<r_M=\infty$, with $N_j>0$ electrons in shell $S_j$ corresponding to the interval $(r_{j-1},r_j)$ for $j=1,...,M,$ and $\sum_j N_j = N$: Find a complex-valued differentiable function $\psi (r,t)$ depending on $r\ge 0$ and time $t$, satisfying for $r>0$ and all $t$,
• $i\dot\psi (r,t) + H(r,t)\psi (r,t) = 0$              (1)
where $\dot\psi = \frac{\partial\psi}{\partial t}$ and $H(r,t)$ is the Hamiltonian defined by
• $H(r,t) = -\frac{1}{2r^2}\frac{\partial}{\partial r}(r^2\frac{\partial }{\partial r})-\frac{Z}{r}+ V(r,t)$,
• $V(r,t)= 2\pi\int\vert\psi (s,t)\vert^2\min(\frac{1}{r},\frac{1}{s})R(r,s,t)s^2\,ds$,
• $R(r,s,t) = (N_j -1)/N_j$ for $r,s\in S_j$ and $R(r,s,t)=1$ else,
• $4\pi\int_{S_j}\vert\psi (s,t)\vert^2s^2\, ds = N_j$ for $j=1,...,M$.                  (2)
Here $-\frac{Z}{r}$ is the kernel-electron attractive potential and $V(r,t)$ is the electron-electron repulsive potential, computed using the fact that the potential $W(s)$ of a spherical uniform surface charge distribution of radius $r$ centered at $0$ of total charge $Q$ is given by $W(s)=Q\min(\frac{1}{r},\frac{1}{s})$, with a reduction for the lack of self-repulsion within each shell given by the factor $(N_j -1)/N_j$. The $N_j$ electrons in shell $S_j$ are thus homogenised into a spherically symmetric charge distribution of total charge $N_j$.
This is a free boundary problem readily computable on a laptop, with the $r_j$ representing the free boundary separating shells of spherically symmetric charge distribution of intensity $\vert\psi (r,t)\vert^2$, and a free boundary condition requiring continuity and differentiability of $\psi (r,t)$. Separating $\psi =\Psi +i\Phi$ into real part $\Psi$ and imaginary part $\Phi$, (1) can be solved by explicit time stepping with (sufficiently small) time step $k>0$ and given initial condition (e.g. as ground state):
• $\Psi^{n+1}=\Psi^n-kH\Phi^n$,
• $\Phi^{n+1}=\Phi^n+kH\Psi^n$,
for $n=0,1,2,...,$ where $\Psi^n(r)=\Psi (r,nk)$ and $\Phi^n(r)=\Phi (r,nk)$, while stationary ground states can be solved by the iteration
• $\Psi^{n+1}=\Psi^n-kH\Psi^n$,
• $\Phi^{n+1}=\Phi^n-kH\Phi^n$,
while maintaining (2). A remarkable fact is that this model appears to give ground state energies as minimal eigenvalues of the Hamiltonian for both ions and atoms, for any $Z$ and $N$, within a percent or so, or alternatively ground state frequencies from direct solution in time-dependent form. Next I will compute excited states and transitions between excited states under exterior forcing. Specifically, what I hope to demonstrate is that the model can explain the periods of the periodic table corresponding to the following sequence of numbers of electrons in shells of increasing radii: 2, (2, 8), (2, 8, 8), (2, 8, 18, 8), (2, 8, 18, 18, 8)... which, in truth, lacks a convincing explanation in standard quantum mechanics (according to E. Scerri, among many others). The basic idea is thus to represent the total wave function $\psi (r,t)$ as a sum of shell wave functions with non-overlapping supports in the different shells, requiring $\psi (r,t)$ and thus $\vert\psi (r,t)\vert^2$ to be continuous across inter-shell boundaries as the free boundary condition, corresponding to continuity of charge distribution as a classical equilibrium condition.
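As a minimal sketch of the explicit real/imaginary stepping described above, one can apply it to a 1D harmonic oscillator instead of the radial shell model (the grid, time step $k$, and run length are illustrative choices, not values from the post):

```python
import numpy as np

# Minimal sketch (stand-in test problem, not the radial shell model):
# the explicit stepping Psi <- Psi - k*H*Phi, Phi <- Phi + k*H*Psi
# from the post, applied to a 1D harmonic oscillator
# H = -(1/2) d^2/dx^2 + (1/2) x^2 on a finite-difference grid.
n, dx = 128, 0.1
x = (np.arange(n) - n // 2) * dx
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# Start from the analytic ground state exp(-x^2/2)/pi^(1/4), energy 1/2.
Psi = np.pi**-0.25 * np.exp(-x**2 / 2)
Phi = np.zeros_like(Psi)

k, steps = 1e-4, 2000
for _ in range(steps):
    Psi, Phi = Psi - k * (H @ Phi), Phi + k * (H @ Psi)

norm = np.sum(Psi**2 + Phi**2) * dx
energy = (Psi @ (H @ Psi) + Phi @ (H @ Phi)) * dx
print(f"norm = {norm:.6f}, energy = {energy:.6f}")  # both approximately conserved
```

The scheme is only conditionally stable (each mode with eigenvalue $\lambda$ is amplified by $\sqrt{1+k^2\lambda^2}$ per step), which is why the post stipulates a sufficiently small $k$; with the values above, the norm and energy drift stay negligible.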
I have also tested this model, with encouraging results, for $N\le 10$ in full 3d geometry without spherical shell homogenisation, with a wave function as a sum of electronic wave functions with non-overlapping supports separated by a free boundary determined by continuity of the wave function, including the charge distribution. We compare with the standard (Hartree-Fock-Slater) Ansatz of quantum mechanics with a multi-dimensional wave function $\psi (x_1,...,x_N,t)$ depending on $N$ independent 3d coordinates $x_1,...,x_N,$ as a linear combination of wave functions of the multiplicative form
• $\psi_1(x_1,t)\times\psi_2(x_2,t)\times ....\times\psi_N(x_N,t)$,
with each electronic wave function $\psi_j(x_j,t)$ having global support (non-zero in all of 3d space). Such multi-d wave functions with global support thus depend on $3N$ independent space coordinates and as such defy both direct physical interpretation and computability, as soon as $N>1$, say. One may argue that since such multi-d wave functions cannot be computed, it does not matter that they have no physical meaning, but the net output appears to be nil, despite the declared immense success of standard quantum mechanics based on this Ansatz.
Extended Hückel
Extended Hückel theory provides an all-valence-electron empirical approximation for solving the electronic Schrödinger equation. The Scigress ExtHückel application accepts a chemical sample file (*.csf) as input. The computation treats all valence electrons and calculates the electronic wavefunction of the structures in the chemical sample file. Double-zeta basis sets are used for d- and f-functions. Two parameter sets are provided, a standard set and the Alvarez collected set. You can use either of these parameter sets in Scigress, or implement others. From the wavefunction, the electron density, molecular orbitals, electrostatic potential, partial charges, and bond orders are determined. Electronic information computed by the ExtHückel application is written to a file called huckel.out. Electronic properties computed by the ExtHückel application can be displayed superimposed on the molecular structure from the chemical sample file. Extended Hückel does not optimize the geometry; all properties are calculated on the current geometry.
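The flavor of such a calculation can be sketched for H2 using the standard Wolfsberg-Helmholz prescription Hij = K·Sij·(Hii + Hjj)/2. This is generic extended Hückel theory, not the Scigress implementation; the STO exponent and bond length are illustrative choices.

```python
import numpy as np

# Generic extended-Hueckel sketch for H2 (not the Scigress implementation).
# Basis: one 1s Slater-type orbital (STO) per atom.  The overlap of two 1s
# STOs with the same exponent zeta at separation R has the closed form below.
zeta = 1.3          # STO exponent (illustrative)
R = 1.4             # bond length in bohr (~0.74 Angstrom)
rho = zeta * R
S12 = np.exp(-rho) * (1 + rho + rho**2 / 3)

H11 = -13.6         # H 1s valence-state ionization energy, eV
K = 1.75            # Wolfsberg-Helmholz constant
H12 = K * S12 * (H11 + H11) / 2

H = np.array([[H11, H12], [H12, H11]])
S = np.array([[1.0, S12], [S12, 1.0]])

# Solve the generalized eigenproblem H c = e S c via symmetric orthogonalization.
s_val, s_vec = np.linalg.eigh(S)
S_half_inv = s_vec @ np.diag(s_val**-0.5) @ s_vec.T
e, _ = np.linalg.eigh(S_half_inv @ H @ S_half_inv)
print("orbital energies (eV):", e)  # bonding below H11, antibonding above
```

For this symmetric 2x2 problem the eigenvalues reduce to the textbook expressions (H11 ± H12)/(1 ± S12), so the sketch can be checked by hand.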
Viewpoint: Light Bends Itself into an Arc
Zhigang Chen, Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132, USA
Published April 16, 2012 | Physics 5, 44 (2012) | DOI: 10.1103/Physics.5.44
Nondiffracting Accelerating Wave Packets of Maxwell's Equations
Ido Kaminer, Rivka Bekenstein, Jonathan Nemirovsky, and Mordechai Segev
Published April 16, 2012
Figure 1: Kaminer et al. showed that shape-preserving beams of light that travel along a circular trajectory emerge as solutions to Maxwell's equations. (Left) Calculated propagation of a self-bending beam; this solution assumes the wave's electric field is polarized in the transverse direction (TE polarization) (from Ref. [1]). (Right) Illustration of a nondiffracting beam bending around an obstacle (courtesy D. N. Christodoulides).
Apart from the broadening effects of diffraction, light beams tend to propagate along a straight path. Mirrors, lenses, and light guides are all ways to force light to take a more circuitous path, but an alternative that many researchers are exploring is to prepare light beams that can bend themselves along a curved path, even in vacuum. In a paper in Physical Review Letters, Ido Kaminer and colleagues at Technion, Israel, report on wave solutions to Maxwell's equations that are both nondiffracting and capable of following a much tighter circular trajectory than was previously thought possible [1]. Apart from fundamental scientific interest, such wave solutions may lead to the possibility of synthesizing shape-preserving optical beams that make curved left- or right-handed turns on their own. The equations describing these light waves could also be generalized to describe similar behavior in sound and water waves. The idea for making specially shaped light waves that could bend without dispersion actually emerged from quantum physics.
In 1979, Berry and Balazs realized that the force-free Schrödinger equation could give rise to solutions in the form of nonspreading "Airy" wave packets [2] that freely accelerate even in the absence of any external potential. This early work remained dormant in the literature for decades, until Christodoulides and co-workers demonstrated the optical analog of Airy wave packets: specially shaped beams of light that did not diffract over long distances but could bend (or self-accelerate) sideways [3] (see the 28 November 2007 Focus story). Such self-accelerating Airy beams have since attracted a great deal of interest due to their unique properties, and they provide the basis for a number of proposed applications, including optical micromanipulation [4], plasma guidance and light bullet generation [5], and routing surface plasmon polaritons [6] (see the 6 September 2011 Viewpoint). A typical simplification when solving Maxwell's wave equations is to assume the light waves are paraxial, meaning the angle between the wave vectors that constitute a wavepacket and the optical axis is small enough that the wave does not deviate too much from its propagation direction. Under this paraxial approximation, the resulting time-independent scalar Helmholtz equation takes the same form as the Schrödinger equation, and it was this relationship that led to the proposal that finite-power optical Airy beams could be attainable in experiments [3]. (Unlike other nondiffracting beams such as the well-known Bessel beams [7], Airy beams have a unique spatial phase structure, do not rely on simple conical superposition of plane waves, and can self-accelerate.) At small angles, Airy beams follow parabolic trajectories similar to ballistic projectiles moving under the force of gravity [8], but at large angles, beyond the paraxial approximation, they cannot maintain their shape-preserving property as they propagate.
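The parabolic self-bending of the paraxial Airy beam can be checked directly from the Berry-Balazs solution, whose transverse envelope is |Ai(x - z^2/4)| in dimensionless units; the grid below is an illustrative choice.

```python
import numpy as np
from scipy.special import airy

# The 1D paraxial Airy-beam solution of Berry & Balazs has envelope
# |Ai(x - z^2/4)| in dimensionless units, so the main lobe follows
# the parabola x = x_peak(0) + z^2/4.  The grid is illustrative.
x = np.linspace(-10, 10, 4001)

def peak_position(z):
    envelope = np.abs(airy(x - z**2 / 4)[0])  # airy(...)[0] is Ai
    return x[np.argmax(envelope)]

x0 = peak_position(0.0)   # first maximum of Ai, near x = -1.02
x2 = peak_position(2.0)   # should have shifted by 2**2 / 4 = 1
print(f"main lobe at z=0: {x0:.3f}, at z=2: {x2:.3f}, shift: {x2 - x0:.3f}")
```

The main lobe shifts by exactly z^2/4 while the envelope shape is unchanged, which is the nondiffracting self-acceleration described in the text.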
Therefore, it is important to identify mechanisms that could allow self-accelerating beams to propagate in a true diffraction-free manner even for large trajectory bending. Several studies have searched for accelerating beams beyond the paraxial regime. In one study [9], nonparaxial Airy beams were sought as exact solutions of Maxwell’s wave equation, but these beams tend to break up and decay as they propagate because parts of them consist of evanescent waves. In another study [10], the so-called caustic method was used to effectively stretch the paraxial Airy beams to the nonparaxial regime, but these “caustic-designed” accelerating beams do not preserve their shape like nondiffracting paraxial Airy beams. Thus, a natural question arose: could a beam accelerate at large nonparaxial angles but still hold its shape? Since beam propagation is governed by Maxwell’s equations, this was equivalent to asking: are there solutions to Maxwell’s equations that allow nondiffracting, self-accelerating beams? In their new work, Kaminer et al. report that they have found shape-preserving nonparaxial accelerating beams (NABs) as a complete set of general solutions to the full Maxwell’s equations. Unlike the paraxial Airy beams, which accelerate along a parabolic trajectory, these nonparaxial beams accelerate along a circular trajectory. To find the solutions for the NABs, Kaminer et al. started with the scalar Maxwell’s wave equation for a given polarization, such as TE (transverse electric) polarization, where the electric field is perpendicular to the direction of the wave. Since the equation exhibits full symmetry between the x and z coordinates, the solutions of shape-preserving beams must have circular symmetry. The authors therefore transformed the equation into polar coordinates and looked for shape-preserving solutions where the field amplitude did not vary with angle.
In polar coordinates, the solution to the wave equation is a Bessel function; transforming back to Cartesian coordinates, the solution must be separated into forward and backward propagating waves in Fourier space. However, only the forward propagating part forms the desired accelerating beam, so the Kaminer et al. solutions are properly called “half Bessel wave packets.” The authors found a solution for TM (transverse magnetic) polarization through a similar procedure. For both TE and TM polarizations, the beams preserve their shape, and the quarter-circle bending can occur after a propagation distance of just 35 μm. In addition, the authors studied the properties of these Bessel-like accelerating beams and found that the Poynting vector of the main lobe can turn by more than 90° [1]. The left part of Fig. 1 shows a typical solution of the shape-preserving beam, which can sweep out a quarter circle. It is important to note that this one-dimensional beam propagates initially along the longitudinal z direction while its curved trajectory in the x-z plane is determined by a Bessel function. This is quite different from the traditional nondiffracting Bessel beam, which propagates in a straight line while its two-dimensional transverse pattern follows a Bessel function [7]. As the authors point out, the nonparaxial shape-preserving accelerating beams found in their work originate from the full vector solutions of Maxwell’s equations. Moreover, in their scalar form, these beams are the exact solutions for nondispersive accelerating wave packets of the most common wave equation describing time-harmonic waves. As such, this work has profound implications for other linear wave systems in nature, ranging from sound waves and surface waves in fluids to many kinds of classical waves.
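The appearance of a Bessel profile here has a simple plane-wave reading: the standard integral representation J_n(x) = (1/π)∫₀^π cos(nθ − x sin θ) dθ expresses a Bessel function as a superposition of plane waves over a full circle of propagation directions, and the "half Bessel" packets keep only the forward-propagating half of that angular spectrum. The identity itself is easy to check numerically (a sketch; the forward-half truncation is what the paper works out in detail):

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

# Integral representation: J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt.
# Each angle t is a plane-wave direction, so a Bessel profile is a superposition
# of plane waves over the full circle of directions; Kaminer et al.'s half-Bessel
# beams retain only the forward half of this spectrum.
def bessel_from_planewaves(n, x):
    val, _ = quad(lambda t: np.cos(n * t - x * np.sin(t)), 0.0, np.pi)
    return val / np.pi

# Compare against SciPy's Bessel function of the first kind.
for n in (0, 1, 5):
    for x in (0.5, 2.0, 10.0):
        assert abs(bessel_from_planewaves(n, x) - jv(n, x)) < 1e-8
```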
Furthermore, based on previous successful demonstrations of self-accelerating Airy beams [3–6], one would expect that the nonparaxial Bessel-like accelerating beams proposed in this study could be readily realized in experiment. Apart from many exciting opportunities for these beams in various applications, such as beams that self-bend around an obstacle (Fig. 1, right), one might expect that one day light could really travel around a circle by itself, bringing the search for an “optical boomerang” into reality.

1. I. Kaminer, R. Bekenstein, J. Nemirovsky, and M. Segev, Phys. Rev. Lett. 108, 163901 (2012).
2. M. V. Berry and N. L. Balazs, Am. J. Phys. 47, 264 (1979).
3. G. A. Siviloglou and D. N. Christodoulides, Opt. Lett. 32, 979 (2007); G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, Phys. Rev. Lett. 99, 213901 (2007).
4. J. Baumgartl, M. Mazilu, and K. Dholakia, Nature Photon. 2, 675 (2008).
5. P. Polynkin, M. Kolesik, J. V. Moloney, G. A. Siviloglou, and D. N. Christodoulides, Science 324, 229 (2009); A. Chong, W. H. Renninger, D. N. Christodoulides, and F. W. Wise, Nature Photon. 4, 103 (2010).
6. P. Zhang, S. Wang, Y. Liu, X. Yin, C. Lu, Z. Chen, and X. Zhang, Opt. Lett. 36, 3191 (2011); A. Minovich, A. E. Klein, N. Janunts, T. Pertsch, D. N. Neshev, and Y. S. Kivshar, Phys. Rev. Lett. 107, 116802 (2011); L. Li, T. Li, S. M. Wang, C. Zhang, and S. N. Zhu, Phys. Rev. Lett. 107, 126804 (2011).
7. J. Durnin, J. J. Miceli, Jr., and J. H. Eberly, Phys. Rev. Lett. 58, 1499 (1987).
8. G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, Opt. Lett. 33, 207 (2008); Y. Hu, P. Zhang, C. Lou, S. Huang, J. Xu, and Z. Chen, Opt. Lett. 35, 2260 (2010).
9. A. V. Novitsky and D. V. Novitsky, Opt. Lett. 34, 3430 (2009).
10. L. Froehly, F. Courvoisier, A. Mathis, M. Jacquot, L. Furfaro, R. Giust, P. A. Lacourt, and J. M. Dudley, Opt. Express 19, 16455 (2011).

About the Author: Zhigang Chen received his Ph.D. in physics from Bryn Mawr College in 1995.
He was a postdoctoral research associate and then a senior research staff member at Princeton University. He has been on the faculty in the Department of Physics and Astronomy at San Francisco State University since 1998.
Overview of quantum space theory

Quantum space theory is a pilot-wave theory [1] (similar to de Broglie’s double solution theory, [2] the de Broglie-Bohm theory, [3] and Vigier’s stochastic approach [4]) that mathematically reproduces the predictions of canonical quantum mechanics while maintaining a completely lucid and intuitively accessible ontology. The theory takes the vacuum to be a physical fluid with low viscosity (a superfluid), and captures the attributes of quantum mechanics and general relativity from the flow parameters of that fluid. This approach objectively demystifies wave-particle duality, eliminates state vector reduction, reveals the physical nature of the wave function, and exposes the geometric roots of Heisenberg uncertainty, quantum tunneling, non-locality, gravity, dark matter, and dark energy—making it a candidate theory of quantum gravity and a possible GUT. In short, quantum space theory offers a more detailed picture of reality—conceptually exposing additional fundamental geometric character to the vacuum that gives rise to the emergent properties of quantum mechanics and general relativity. This deeper level description restores scientific realism and determinism—powerfully arguing that the smallest parts of the world exist objectively, in the same sense that the moon and rocks exist, whether or not we observe them, and that the subtle details of that reality can be captured by the human mind—a position held in common by Einstein, Planck, Schrödinger, de Broglie, Bohm, Vigier, Descartes, Heraclitus and more. The idea is surprisingly simple—to reproduce the cornucopia of phenomena we find in Nature (those captured by quantum mechanics and general relativity) we model the vacuum as a superfluid—a dynamic fluid that is made up of many identical quanta that shuffle about, colliding and careening off of each other, like the molecules in supercooled helium do.
These vacuum quanta (pixels of space) are arranged in (and move about in) superspace. The positions and velocities of these quanta define a vector space (think Hilbert space, or state space, but apply these mathematical notions to a physically real arena in which the vacuum quanta reside—called superspace). At any given moment, the “state of space” or the “vacuum state” for a particular volume of space is defined by the instantaneous arrangements (positions, velocities, and rotations) of the vacuum quanta that make up that volume. That is, the vacuum state is defined by variables that exist in superspace—not in space. Because the vacuum is a collection of many quanta, its large-scale structure—represented by the extended spatial dimensions (x,y,z)—only comes into focus as significant collections of quanta are considered. On macroscopic scales, that structure is approximately Euclidean (mimicking the flat continuous kind of space we all conceptually grew up with) only when and where the state of space captures an equilibrium distribution with no divergence or curl in its flow, and contains no density gradients. [5] There are two classes of waves in the vacuum: solitons, and pressure waves. A soliton is a wave packet that remains localized (retains its shape, doesn’t spread out). In other words, solitons are complex and non-dispersive, or what a mathematician would call “non-linear”. By contrast, pressure waves (also called longitudinal waves) do spread out. They are simple and “linear”. There are two types of solitons: pulse phonons, and vortices. Pulse phonons (undulating pulse waves) propagate through the vacuum at the speed of light, similar to how sound waves pass through the medium of air at the speed of sound. 
The difference between pulse phonons in the vacuum and sound waves in air is that (1) due to Anderson localization (otherwise known as strong localization) pulse phonons stay localized as they propagate through the vacuum, and (2) they resonate, and therefore possess an internal frequency. As a soliton (wave packet) advances, the randomly ordered fluid around it pushes back, collectively creating interferences that keep it from spreading out. [6] This dynamic interaction (between the soliton and the surrounding fluid) results in a redistribution of the medium—which can be described as a linear wave whose magnitude dissipates with distance from the core of the non-linear soliton wave. This surrounding wave is called a “pilot wave” because it guides and directs the path of the soliton it contains. Every soliton connects to the surrounding medium via a pilot wave, but pilot waves can exist without solitons. Pulse phonons, along with their pilot wave counterparts, represent bosons (photons, gluons, etc.). The other type of vacuum soliton is made up of waves that twist together to form stable quantized vortices, (whirling about on a closed loop path in whole wavelength multiples—matching phase with each loop). This stabilization condition leads to vortex quantization (allowing only very specific vortices). [7] These vortices can persist indefinitely, so long as they are not sufficiently perturbed. That is, once stable vortices form in a superfluid, they do not dissipate or spread out on their own. Incoming waves can transform an existing vortex to a different allowed vortex, so long as the distortive energy of those waves is equal to the difference between the two stable states. With sufficient disruption, vortices can also be canceled out—by colliding with vortices that are equal in magnitude but opposite in rotation, or by undergoing transformations that convert them into phonons. 
Unlike pulse phonons, which pass right through each other upon incidence, quantized vortices, or sonons, [8] (think smoke rings) cannot freely pass through each other. Instead, they hydrodynamically push and pull on each other in ways that allow only certain stable configurations, giving rise to the Pauli exclusion principle. Vacuum vortices also connect to the rest of the medium via a pilot wave. Each unique vortex, along with its surrounding pilot wave, represents a fermion (an electron, quark, muon, etc.) According to this picture, wave-particle duality is an implicit, non-excisable quality of reality because “particles” are localized vacuum waves (complex, non-linear distortions that are concentrated in a small region—solitons) surrounded by pilot waves that guide their motion. Both the particle and the pilot wave are physically and objectively real entities. Evolution of the idea In 1867, William Thomson (also known as Lord Kelvin) proposed “one of the most beautiful ideas in the history of science,” [9]—that atoms are vortices in the aether. [10] He recognized that if topologically distinct quantum vortices are naturally and reproducibly authored by the properties of the aether, then those vortices are perfect candidates for being the building blocks of the material world. [11] When Hermann Helmholtz demonstrated that “vortices exert forces on one another, and those forces take a form reminiscent of the magnetic forces between wires carrying electric currents,” [12] Thomson’s passion for this proposal caught fire. Using Helmholtz’s theorems, he demonstrated that a non-viscous medium does in fact only admit distinct types, or species, of vortices. And he showed that once these vortices form they can persist without end, and that they have a propensity to aggregate into a variety of quasi-stable arrangements. 
This convinced Thomson that vorticity is the key to explaining how the few types of fundamental matter particles—each existing in very large numbers of identical copies—arise in Nature. Despite the elegance of Thomson’s idea, the entire project was abandoned when the Michelson-Morley experiment ruled out the possibility that the luminiferous aether was actually there. Interpreting these vortices to critically depend on the aether (instead of allowing for some other medium to be the substrate that supports them) scientists dropped the idea altogether—unwittingly throwing the baby out with the bathwater. In 1905, in response to the discovery that light exhibits wave-particle duality—that light behaves as a wave, even though it remains localized in space as it travels from a source to a detector—Einstein proposed that photons are point-like particles surrounded by a continuous wave phenomenon that guides their motions. [13] This proposal resurrected the core of Thomson’s idea—framing it in a new mold (pilot-wave theory). [14] In 1925 Louis de Broglie discovered that wave-particle duality also applies to particles with mass, [15] and became acutely interested in the pilot-wave ontology. Determined to further develop pilot wave theory, he added internal structure to Einstein’s notion of particles, and suggested that particles are intersecting waves, like fluid vortices, made up of many interacting atoms/molecules of a sub-quantum medium. [16] Convinced that this idea was “the most natural proposal of all”, de Broglie outlined its general structure, [17] and then began working on a second proposal—a mathematically simplified approximation of that idea, which treated particles as simple point-like entities surrounded by pilot waves. De Broglie presented this second proposal at the 1927 Solvay Physics Conference, where it was ridiculed to such a degree that he dropped the idea for decades. 
Twenty-five years later, David Bohm rediscovered de Broglie’s simplified approach, and (in collaboration with de Broglie) completed the formalism. The result was the de Broglie-Bohm theory, [18] “the fully deterministic interpretation of quantum mechanics that reproduces all of the predictions of standard quantum mechanics without introducing any stochastic element into the world or abandoning realism.” [19] (Never heard of this before? Well, most physicists haven’t either. Read Why don’t more physicists subscribe to pilot wave theory to find out why. [20])

Quantum mechanics from pilot wave theory

Pilot wave theory fully (and deterministically) captures quantum mechanics, and it does so with elegance and ease. In fact, when we assume that particles (photons, electrons, etc.) are point-like entities that follow continuous and causally defined trajectories with well-defined positions \xi(t), and that every particle is surrounded by a physically real wave field \psi(\boldsymbol r, t) that guides it, we only need three supplementary conditions to perfectly choreograph all of quantum mechanics. Those conditions are:

1. The wave \psi(\boldsymbol r, t) evolves according to the Schrödinger equation;
2. The probability distribution of an ensemble of particles described by the wave function \psi is P = |\psi|^2; and
3. Particles are carried by their local “fluid” flow. In other words, the change of a particle’s position with respect to time is equal to the local stream velocity, d\xi(t)/dt = \boldsymbol v, where \boldsymbol v = \nabla S/m, and the “velocity potential” S is related to the phase \varphi of \psi by S(\boldsymbol r, t) = \hbar\, \varphi(\boldsymbol r, t).

From here, obtaining a full hydrodynamic account of quantum mechanics is simply a matter of expressing the evolution of the system in terms of its fluid properties: the fluid density \rho(\boldsymbol r), the velocity potential S, and the stream velocity \boldsymbol v.
The first step is to substitute the polar (Madelung) form of the wave function into the Schrödinger equation: [21]

\psi = \sqrt{\rho}\, e^{iS/\hbar}

Then we express fluid conservation via the continuity equation, which states that any change in the amount of fluid in a volume must equal the net rate at which fluid flows into or out of that volume—no fluid magically appears or disappears:

\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0

From this it follows (given that particles are carried by their guiding waves) that the path of any particle is determined by the evolution of the velocity potential \partial S/\partial t, which is:

\frac{\partial S}{\partial t} = -\frac{1}{2m}(\nabla S)^2 - V - Q

This evolution depends on both the classical potential V and the “quantum potential” Q, where: [22]

Q = -\frac{\hbar^2}{2m}\, \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} = -\frac{\hbar^2}{4m} \left[ \frac{\nabla^2 \rho}{\rho} - \frac{1}{2} \left( \frac{\nabla \rho}{\rho} \right)^2 \right]

That’s it. We now have a hydrodynamic model that fully reproduces the behavior of quantum particles in terms of a potential flow. Note that, from a classical or realist perspective, the assumptions held by this formalism are far less alarming than those maintained in canonical quantum mechanics (which regards the wave function as an ontologically vague element of Nature, inserts an ad hoc time-asymmetric process into Nature—wave function collapse, abandons realism and determinism, etc.). Nevertheless, being based on an approximation of the more natural ontology, the auxiliary assumptions of this construction still cry out for a more complete understanding. So let’s address them.

Condition 1: The wave \psi(\boldsymbol r, t) evolves according to the Schrödinger equation. Every physical medium has a wave equation that details how waves mechanically move through it.
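Before turning to the conditions, note that the two expressions given for Q are the same function of ρ; the equality is a pure calculus identity, independent of any interpretation. It can be checked symbolically in one dimension for an arbitrary density ρ(x) (a sketch using SymPy):

```python
import sympy as sp

# Check that -(hbar^2/2m) * (sqrt(rho))'' / sqrt(rho) equals
# -(hbar^2/4m) * [ rho''/rho - (1/2)*(rho'/rho)^2 ]  for arbitrary rho(x).
x, hbar, m = sp.symbols('x hbar m', positive=True)
rho = sp.Function('rho')(x)  # unspecified density profile

Q1 = -hbar**2/(2*m) * sp.diff(sp.sqrt(rho), x, 2) / sp.sqrt(rho)
Q2 = -hbar**2/(4*m) * (sp.diff(rho, x, 2)/rho
                       - sp.Rational(1, 2)*(sp.diff(rho, x)/rho)**2)

# The difference simplifies to zero: the two forms of Q are identical.
assert sp.simplify(Q1 - Q2) == 0
```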
Under de Broglie’s original assumption that pilot waves are mechanically supported by a physical sub-quantum medium, the idea that the pilot wave \psi(\boldsymbol r, t) evolves according to the Schrödinger equation is completely natural—so long as the fluid has the right properties (e.g., behaves like a superfluid). But the de Broglie-Bohm theory doesn’t explicitly assume a physical medium. [23] As a consequence, it must tack on the assumption that the pilot wave (whatever it is a wave of) evolves (for some reason) according to the Schrödinger equation. It’s worth pointing out that the Schrödinger equation was originally derived to elucidate how photons move through the aether—the medium evoked to explain how light is mechanically transmitted. The aether was considered to be a “perfect fluid”, which meant that it had zero viscosity. When the aether fell out of fashion the medium was dropped but the wave equation remained, leaving an open-ended question about what light was waving through. When we fail to stipulate a physical medium, evolution according to the Schrödinger equation becomes a necessary additional (brute) assumption. With the physical medium in place (especially one with zero viscosity) the wave equation immediately and naturally follows as a descriptor of how waves mechanically move through that medium. Condition 2: The probability distribution of an ensemble of particles described by the wave function \psi is P = |\psi|^2. In order to establish that the equilibrium relation P = |\psi|^2 is a natural expectation for arbitrary quantum motion, Bohm and Vigier proposed a hydrodynamic model infused with a special kind of irregular fluctuations. [24] To explain those fluctuations, they pointed out that the equations governing the \psi field could “have nonlinearities, unimportant at the level where the theory has thus far been successfully applied, but perhaps important in connection with processes involving very short distances.
Such nonlinearities could produce, in addition to many other qualitatively new effects, the possibility of irregular turbulent motion.” [25] Bohm and Vigier went on to note that if photons and particles of matter have a granular substructure, analogous to the molecular structure underlying ordinary fluids, then the irregular fluctuations are merely random fluctuations about the mean (potential) flow of that fluid. They went on to prove that with these fluctuations present, an arbitrary probability density will always decay to |\psi|^2—its equilibrium state. This proof was extended to the Dirac equation and the many-particle problem. [26] In short, in order to justify the equilibrium relation, Bohm and Vigier returned to de Broglie’s original idea—that particles are intersecting (non-linear) waves in a sub-quantum fluid surrounded by a (linear) pilot wave. The substructure of that fluid, how its inner parts mix and move about, is naturally responsible for the fluctuations that create the equilibrium relation—in perfect analogy to how Brownian motion is caused by the collisions and rearrangements of molecules in the surrounding fluid. Without assuming the physical existence of this sub-quantum fluid, the wave equation and the equilibrium relation are mysterious and unexpected conditions—additional brute assumptions. With the fluid, they naturally follow. Condition 3: Particles are carried by their local “fluid” flow. Relating the velocity potential S to the phase \varphi of \psi by S(\boldsymbol r, t) = \hbar\, \varphi(\boldsymbol r, t) means that the phases of both (the pulsing particle and the surrounding wave) coincide. This condition—that “the particle beats in phase and coherently with its pilot wave”—is known as de Broglie’s “guiding” principle. It “ensures that the energy exchange (and thus coupling) between the particle and its pilot wave is most efficient,” [27] and that the core of the particle is carried along with the linear wave \psi.
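The guidance condition can be made concrete with the standard free Gaussian wave packet of the de Broglie–Bohm literature (an illustrative example, not taken from this text). With ħ = m = 1 and an initial width σ₀ chosen for illustration, the phase S of the spreading packet yields the guidance velocity field v(x, t) = xt/(4σ₀⁴ + t²), and integrating d\xi/dt = v reproduces the known trajectories ξ(t) = ξ(0) σ(t)/σ₀, which fan out with the spreading packet:

```python
import numpy as np

# Free Gaussian packet (hbar = m = 1, initial width sigma0): the phase S of psi
# gives the guidance velocity v(x, t) = x * t / (4*sigma0**4 + t**2).
# Exact Bohmian trajectories: xi(t) = xi(0) * sigma(t) / sigma0,
# with sigma(t)^2 = sigma0^2 + t^2 / (4*sigma0^2).
sigma0 = 1.0

def v(x, t):
    return x * t / (4.0 * sigma0**4 + t**2)

def sigma(t):
    return np.sqrt(sigma0**2 + t**2 / (4.0 * sigma0**2))

# Integrate d(xi)/dt = v(xi, t) with RK4 for a few starting points.
x0 = np.array([-2.0, -0.5, 0.5, 2.0])
x = x0.copy()
steps, dt, t = 3000, 3.0 / 3000, 0.0
for _ in range(steps):
    k1 = v(x, t)
    k2 = v(x + 0.5*dt*k1, t + 0.5*dt)
    k3 = v(x + 0.5*dt*k2, t + 0.5*dt)
    k4 = v(x + dt*k3, t + dt)
    x = x + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
    t += dt

# The integrated trajectories match the exact solution: they fan out
# homothetically with the spreading packet.
exact = x0 * sigma(t) / sigma0
assert np.allclose(x, exact, atol=1e-6)
```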
Given that what de Broglie really had in mind was that particles were intersecting waves in some fluid (pulsating non-linear waves), and that pilot waves were the linear extensions of those waves into the rest of the fluid, this condition may feel completely natural—automatically imported. But the simplified model doesn’t have that advantage. That is, under the approximation that particles are point-like structureless entities, it becomes necessary to additionally assert that (for some reason) those particles possess a phase, which pulses in sync with the surrounding pilot wave. This condition ensures that the velocity of the particle matches the local stream velocity of the fluid. The moral of this story is that all of the auxiliary premises in the de Broglie-Bohm theory are necessitated by the model’s omission of the sub-quantum fluid that is responsible for the effects it is capturing—by what it washes out by way of approximation. In other words, these assumptions are consequences of the fact that the de Broglie-Bohm theory is a mean-field approximation of the real dynamics. To more viscerally connect with the quantum world, to have a richer understanding of quantum phenomena while minimizing the number of our auxiliary assumptions, we have to tell the story from the perspective of the more complete ontology—the one that mirrors what’s actually going on in Nature—the one that de Broglie originally had in mind. [28] This is the aim of quantum space theory. [1] Pilot-wave theories (also called nonlocal hidden-variable theories) are a family of realist interpretations of quantum mechanics which hold that the statistical nature of quantum mechanics is due to an ignorance of an underlying, more fundamental, real dynamics, and that microscopic particles follow real trajectories just like larger classical bodies do. Quantum space theory falls under the family of models categorized as vacuum-based pilot-wave theories.
It logically overlaps with both stochastic electrodynamics and superfluid vacuum theory. For a modern review of pilot wave theories, see: John W. M. Bush, “Pilot-Wave Hydrodynamics”. Annu. Rev. Fluid Mech. 47:269–92 (2015). [2] Louis de Broglie, “Interpretation of quantum mechanics by the double solution theory”. Annales de la Fondation, Volume 12, no. 4 (1987). [5] This echoes Dirac’s sentiment that the perfect vacuum is “an idealized state, not attainable in practice,” that the true vacuum is much richer than that trivial state and “needs elaborate mathematics for its description,” and that it acts like a fluid, or “an aether, subject to quantum mechanics and conforming to relativity…” Paul A. M. Dirac, Nature (London) 169, 702 (1952). [6] If the vacuum were a composite of periodically arranged quanta, instead of being randomly arranged, then wave pulses passing through the vacuum would spread out (scatter and dilute). Randomness causes interference between multiple scattering paths to keep wave pulses completely localized. Anderson, P. W., “Absence of Diffusion in Certain Random Lattices”. Phys. Rev. 109 (5): 1492–1505 (1958); Roati, Giacomo; et al., “Anderson localization of a non-interacting Bose-Einstein condensate”. Nature 453 (7197): 895–898 (2008). [9] Frank Wilczek. (December 29, 2011). Beautiful Losers: Kelvin’s Vortex Atoms. NOVA: http://www.pbs.org/wgbh/nova/physics/blog/2011/12/beautiful-losers-kelvins-vortex-atoms/ Much of this section follows this article. [10] Lord Kelvin (Sir William Thomson), On Vortex Atoms. Proceedings of the Royal Society of Edinburgh, Vol. VI, 1867, pp. 94-105. Reprinted in Phil. Mag. Vol. XXXIV, 1867, pp. 15-24. [12] Frank Wilczek, ibid.
[14] Einstein eventually came to the viewpoint that “quantum statistics should be due to a real subquantal physical vacuum alive with fluctuations and randomness.” Stanley Jeffers, “Jean-Pierre Vigier and the Stochastic Interpretation of Quantum Mechanics” (2000). [15] Louis de Broglie, Ann. Phys. (Paris) 3, 22 (1925). [18] David Bohm, Phys. Rev. 85, 166, 180 (1952). [21] Erwin Madelung, Z. Physik 40, 332 (1926). See also: Madelung equations. [22] \nabla^2 is called the Laplace operator; it represents the divergence of the gradient. Note that equations (2) and (3) do not contain the actual location of the particle \xi(t), which is required to produce an exact output. That is, in order to use this theory to make an exact prediction of where the particle will end up, we must specify exactly where it was at some point. The quantum potential represents how much vacuum mixing redirects the particle, an effect that becomes less intense as the mass of the particle (the distortion magnitude of the soliton) increases. [23] Many physicists imagine a non-physical field supporting these wave dynamics instead of a physical fluid medium. [24] David Bohm & Jean-Pierre Vigier, Phys. Rev. 96, 208 (1954). [25] Ibid. [27] Ibid. [28] Louis de Broglie, Une tentative d’interprétation causale et non-linéaire de la Mécanique Ondulatoire (Gauthier-Villars, Paris) (1956).

Where do we go from here?

How many other mysteries can we ontologically penetrate with this model? To answer that question qst supporters are exploring the full implications of this geometry and developing its complete mathematical formulation. Whether or not this new model turns out to completely map Nature, it, at minimum, offers a unique and creative perspective. The theory paints a multi-dimensional realm, a vacuum with more texture than previously assumed, and all of the dynamics in that medium are controlled by the laws of cause and effect.
This offers us the chance to get beneath the modern formalism of quantum mechanics by positing that the effects of quantum mechanics and general relativity are emergent phenomena that supervene on spacetime’s superfluid structure. Consequently, this approach explicitly reveals a Universe that is, on every level, deterministic. Those working on this project are motivated by the potential this return to determinism has to put us in better touch with reality and heighten our humanity. The more we understand Nature’s infinitely cascading structure and its dynamics, the more we can come to grips with our ‘magnificent insignificance.’ In the spirit of that investigation, we invite you to critically explore this new perspective and thank you for participating in the adventure of discovering Nature’s truths for yourself. Please note that we are acutely aware that this new theory might not turn out to accurately map Nature. So far, several testable predictions have fallen out of the theory, and any one of them could falsify it. This is part of the process of scientific investigation. Our desire to complete Einstein’s task moves us to explore theories that are capable of making epistemic contributions. In general, such efforts should be focused (in response to the constraints we are under) toward those theories with the greatest ontological potential. As the candidate for the theory of quantum gravity that offers the most intuitive accessibility, quantum space theory is our pick for the theory with the greatest ontological potential. All professional and constructive reviews of this work are welcome. A book on this topic, written for a general audience (of science enthusiasts) can be found here. (If cost is a barrier please send us a message.) Contact us with questions, comments, or to join the research effort at ei at EinsteinsIntuition dot com. The axioms of qst are: 1. 
The hierarchical structure of the superfluid vacuum is self-similar and, therefore, conforms to a perfect fractal. In short, the familiar medium of x, y, z space is composed of a large number of “space atoms” called quanta that dynamically mix and interact. Those quanta are composed of a large number of sub-quanta and the sub-quanta are composed of sub-sub-quanta and so on, ad infinitum. Vacuum superfluidity constrains the possible states of the vacuum in accordance with energy conservation, de Broglie relations, and linearity. More generally it constrains the vacuum as an acoustic metric. 2. Time is uniquely defined at each location in space and evolves discretely (for each quantum) as the number of whole resonations each quantum undergoes. As a result, the acoustic metric inherits a Newtonian time parameter and therefore exhibits the important property of stable causality. 3. Energy (total geometric distortion) is conserved. Energy conservation means that all metric distortions (phonons, quantum vortices, etc.) are interchangeable from one kind to another, including the transference of metric distortions from one hierarchical level to another, like the quantum level to the sub-quantum level. Some of the theorems/consequences that follow from those axioms are: 1. The wave equation (the non-linear Schrödinger equation, also known as the Gross-Pitaevskii equation) can be derived from first principles (see here, or here), from the assumption that the vacuum is a BEC whose state can be described by the wavefunction of the condensate. 2. Modeling the superfluid vacuum as an acoustic metric reproduces an analogue for general relativity’s curved spacetime within low momenta regimes. 3. Mass generation is a consequence of the symmetry breaking that occurs when quantum vortices form in the vacuum condensate. 4. The total number of spacetime dimensions in our spatiotemporal map depends on the resolution we desire. (Are we only quantizing the fabric of x, y, z?
Or are we also keeping track of the subquanta that those quanta are composed of? And so on.) For any arbitrary resolution, the number of dimensions is equal to 3ⁿ + n. A second order perspective (n = 2) quantizes the fabric of space once, a third order perspective also quantizes the volumes of that fabric, and so on, ad infinitum. 5. Quantization restricts the range of spacetime curvature: the minimum state of curvature (zero curvature) can be represented by the ratio of a circle’s circumference to its diameter in flat space (π), and the maximum state of curvature can be represented by the value of that ratio in maximally curved spacetime, a number that we will represent with the letter ж (“zhe”). 6. The constants of Nature are derivatives of the geometry of spacetime: they are simple composites of π, ж, and the five Planck numbers. 7. When the quanta of space are maximally packed they do not experience time because they cannot independently or uniquely resonate. 8. Black holes are collections of quanta that are maximally packed—regions of maximum spatial density. 9. When two objects occupy regions of different quantum density, the object in the region of greater density will experience less time. 10. Because the quanta are ultimately composed of subquanta, all propagations through space necessarily transfer some energy from the quantum level (motion of the quanta) to the subquantum level (to the internal geometric arrangements and motions of the subquanta). Although this transference of energy is proportionally very small (being approximately equal to the energy multiplied by the ratio of the subquantum scale to the quantum scale), it is additive. Therefore, it can become significant over large scales—leading to what we now call redshift. Some of the testable hypotheses, or predictions, of this theory are: 1. Although the superfluid vacuum is non-relativistic, small fluctuations in the superfluid background obey Lorentz symmetry.
This means that for low momenta conditions the theory captures the expectations of general relativity, but under high energy and high momenta conditions it projects Newtonian expectations over relativistic ones. Therefore, the theory predicts that when massive objects are accelerated to near the speed of light they will exhibit effects that contradict general relativity in favor of Newtonian projections. 2. When we place a circle of any (macroscopic) size in a region where the gradient of spacetime curvature is at a minimum (where there is zero change in curvature throughout the region), the ratio of its circumference to its diameter gives us a value of 3.141592653589… (π). Qst predicts that this ratio will decrease if the circle occupies a region with a nonzero gradient of spacetime curvature. Furthermore, it predicts that in regions where the gradient of spacetime curvature is at a maximum there will be a minimum possible value for this circumference to diameter ratio. More specifically, for all possible circles centered around a black hole (or approaching the quantum scale), the minimum circumference to diameter ratio will be 0.0854245431(31) (ж). This means that, instead of being randomly ascribed, the constants of Nature are immediate consequences of the geometric character of spacetime. A quantized picture of spacetime requires a natural minimum unit of distance (the Planck length), a natural minimum unit of time (the Planck time), and maximum amounts of mass, charge, and temperature in reference to the minimum units of space and time (the Planck mass, Planck charge, and Planck temperature). Furthermore, quantization dictates minimum and maximum limits for the gradient of spacetime curvature (π and ж). According to qst, the constants of Nature are composites of these seven numbers. It turns out that this claim holds when ж is equal to 0.0854245431(31). 3.
The theory predicts that temperature dependent phase changes exist in space—regions where the average geometric connectivity of the quanta of space transitions from one state to another. Furthermore, the theory predicts that because the background temperature of the universe is cooling (the average wavelength of the Cosmic Microwave Background Radiation is increasing), the fraction of space characterized by the denser geometric phase should become more prevalent with time. 4. The theory predicts that the average radii of dark matter haloes should decrease as the energy output of the host galaxy decreases. It predicts that by comparing contemporary haloes we should find that their average radii depend on the energy output of the host galaxy, and that the further the background temperature of space drops below the temperature of the critical phase transition, the smaller the average radii of dark matter haloes should be. It follows from this that the radii of local dark matter haloes should decrease in the future (with a dependence on their host galaxies’ energy output). 5. The theory predicts that quantum tunneling should be less frequent in regions of greater curvature (regions with a greater density of space quanta). 6. The theory predicts that supersymmetric geometries are available only in axiomatic frameworks with a total number of dimensions equal to 3ⁿ + n, where n is an integer. 7. The theory leads us to expect that when the highest-energy gamma rays reach us from extremely distant supernovae, they should be less red-shifted, in proportion to the difference between the arrival time of the gamma rays and that of the remaining wavelengths, divided by the travel time of the longer wavelengths. Up until now, our intuitions about the world have, for the most part, been imprisoned by the confines of four dimensions (three dimensions of space plus one dimension of time).
Our investigations of the mysterious effects we have observed in Nature have all started from this frame of reference. As a consequence, we have tried to explain unexpected effects (like the Moon orbiting the Earth instead of just going straight through space) by inventing “forces” that we have held responsible (in the non-explanatory sense) for those effects. This process has restricted our ontological access. When we hold onto these traditional assumptions about space and time, it becomes necessary to awkwardly superimpose equations for four forces on top of our preconceived axiomatic construction in order to retain predictability. The problem is that this method of regaining predictability robs us of the ability to explain those effects. Einstein interrupted this process by constructing a geometry that included the effects of gravity within his metric. Qst extends this approach by introducing an intuitive eleven-dimensional vacuum geometry (nine space dimensions and two time dimensions). So far this geometry appears to have the ability to contain Nature’s strange characteristics (the effects traditionally assigned to the four forces). To more rigorously determine whether or not those geometric characteristics fully account for the effects we have observed, we are working to complete a full mathematical formalism of the axiomatic structure. This picture gives us intuitive access to Nature’s mysteries by transforming the arcana of general relativity and quantum mechanics into necessary conditions of Nature’s geometric structure. Just how precisely qst maps all of Nature’s characteristics is a matter of scientific investigation. Before that question is resolved we can be assured that, as an intuitively accessible deductive construction, the model has significant scientific value.
(Note that we have known for quite some time that Nature does not actually map to Euclidean geometry; nevertheless, the deductive, axiomatic framework known as Euclidean geometry continues to be a very useful and practical tool.) The mere possibility that quantum space theory maps something new in the spectrum of Nature’s colorful character makes it worth investigating. But the fact that it enables us to visualize eleven dimensions simultaneously, something that has never been done before, is what most directly speaks to its contributory value to science. From this we gain the potential to expand our intuitive horizon beyond our inbuilt senses and begin to penetrate the geometric origins of Nature’s mysteries: Heisenberg uncertainty, wave-particle duality, what the insides of black holes are like, the cause of the Big Bang, why the constants of Nature are what they are, dark matter, dark energy, etc. With an intuitively accessible model, big science is no longer only for the professional physicist. Whether or not the model of quantum space theory is eventually shown to map Nature with precision, it provides us value because once we are equipped with the eleven-dimensional geometry of a superfluid vacuum, the biggest questions in physics gain elegant and simple analogies that anyone can understand. For more information check out this introduction video, or pick up your copy of ‘Einstein’s Intuition’ by Thad Roberts. Why it is needed As Thad states in chapter one of Einstein’s Intuition, we need to return to a place akin to where the young Einstein found himself, a place where the senses are allowed a deep connection to Nature, the kind of connection that facilitated Einstein’s envisioning of the properties of light and time. Thad goes on, “this … highlights a fundamental problem in the approach taken by modern physics.
For the past several decades, theorists and mathematicians have been working on constructing a framework of Nature that is capable of mathematically combining the descriptions of general relativity and quantum mechanics under the same rubric. … But their efforts have been focused on organizing Nature’s data into a self-consistent assembly—like the ones and zeros of a digital picture. The problem is that this inductive approach does not encourage, let alone require, the discovery of a conceptual portal.” “Even if physicists were one day to conclude that their assembly was mathematically correct, it would not actually increase our ability to truly comprehend Nature unless it was translated into some sort of picture. Therefore, since it is really the picture that we are after, maybe it is time for us to consider whether or not our efforts will bear more fruit under a different approach. Specifically, to maximize our chances of completing our goal of intuitively grasping Nature’s complete form, maybe we should follow the lead of young Einstein and return to a deductive conceptual approach. Perhaps it is time for us to place our focus on constructing a richer map of physical reality.” But how do we actually do this? We are told, over and over, that it is impossible to visualize more than three spatial dimensions, yet today’s leading theories routinely suggest, or even require, more than three spatial dimensions. Many people find the notion of additional dimensions absurd. They suggest that when other dimensions pop up in our equations they are just artifacts of the intricate mathematics of theoretical physics. They claim that those equations should not be taken as an indication of the “actual” existence of these extra dimensions. In opposition to this reaction, quantum space theory holds these extra dimensions to be as real as the x, y, z and t dimensions we experience every day.
Qst further elaborates a hierarchical structure to these extra dimensions that allows us to comprehend, and even visualize, the super and intra dimensions. Qst proposes that the vacuum is a superfluid, that space is literally quantized into discrete pieces (quanta), and that its eleven-dimensional structure follows from that claim. The notion that the vacuum is a superfluid (whose geometric structure is hierarchically quantized) gives us the ability to explain:
• The constants of Nature—as a consequence of vacuum geometry
• Force phenomena—in terms of allowed geometric distortions in the vacuum
• The wave equation—as a descriptor of how distortions translate through the vacuum
• Heisenberg uncertainty—as a manifestation of vacuum quantization and mixing
• Wave-particle duality—as a manifestation of the vacuum’s fluid nature
• Dark matter—as a phase change in the vacuum
• Dark energy—as a transference of energy from the quanta to the sub-quanta
• The state vector—as a blurred (ensemble) representation of the vacuum’s possible states (given our ignorance of its exact state at any moment), and more.
Instead of resting on impenetrable dialogue filled with complex and distracting jargon, the solutions proposed by quantum space theory are all intelligible. By examining the idea that the vacuum is a superfluid we gain intuitive, simultaneous access to more than four spacetime dimensions, which allows us to intuitively absorb details of Nature and intimately understand the mysteries of physics. We invite you to participate in the task of steering science back towards its goal of obtaining ontological clarity, of acquiring intuitive pictures, deductive solutions, and accessible explanations for Nature’s baffling effects. We invite you to read all about this model in the book Einstein’s Intuition: Visualizing Nature in Eleven Dimensions. Open yourself to a change in perspective and escape the conceptual limitations of three dimensions of space and one dimension of time.
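The dimension-counting rule invoked above ("the number of dimensions is equal to 3n + n") only reproduces both the familiar four dimensions and the eleven-dimensional (nine space, two time) geometry claimed for qst if it is read as 3ⁿ + n, i.e., 3ⁿ spatial dimensions plus n temporal ones; that reading is an interpretive assumption on our part, made because it is the only one consistent with the counts the text quotes for n = 2. A minimal sketch that just tabulates the formula:

```python
def qst_dimensions(n: int) -> int:
    """Total spacetime dimensions at resolution order n, reading the
    text's dimension rule as 3**n spatial dimensions plus n temporal
    ones (an interpretive assumption, matching the 9 + 2 count at n=2)."""
    return 3 ** n + n

for n in (1, 2, 3):
    space, time = 3 ** n, n
    print(f"order n={n}: {space} space + {time} time = {qst_dimensions(n)} dimensions")
```

For n = 1 this gives the familiar 3 + 1 = 4 dimensions, and for n = 2 the eleven (9 + 2) dimensions the text attributes to qst; the sketch only checks the bookkeeping, not the physics.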
Making up the Mind
from a book review by Oliver Sacks
The New York Review of Books, April 8, 1993, pp. 42-49.
Hyperlinks and corresponding editing by Jochen Gruber. Local links have been added in case remote sites have rearranged referenced links.

Bright Air, Brilliant Fire: On the Matter of the Mind
Gerald M. Edelman. Basic Books, 280 pages, 1992

Abstract by Jochen Gruber

With his Theory of Neuronal Group Selection (also called Neural Darwinism, by analogy with the Darwinism of the immune system), Gerald Edelman presents a neurobiological theory of the mind, which he and his colleagues at the Neurosciences Institute have been developing over the past 15 years. He conceives of it as a comprehensive theory spanning a dozen disciplines of neuroscience. The outline of the theory is as follows: After birth, a set of inborn values (feelings) allows us to begin building the structure of the brain. The smallest entity of this structure is a group of neurons (a map) in which internal links represent our experience. Maps are then used as new building blocks and interconnected by links into scenes representing what we experience as the present. Ever richer maps are constructed, ultimately maps of meaning. In our search for meaning, our mind develops up the evolutionary ladder to consciousness until we form the new categories of "past" and "future". Along the way, the building blocks acquire, step by step, more internal structure that can be accessed. A continuous stream of establishing and testing hypotheses on the basis of the existing interconnections weakens or strengthens existing connections or builds new ones (experiential selection).
The fittest maps and connections survive (thus the name neural Darwinism). These maps are dynamic in that they are continually redrawn according to our perceptions. For example, disappointments or major new insights at a young age may call for major changes of the map structure, and might destroy a person's drive for survival if these changes appear too radical. Similarly, works of art or psychoanalysis might strengthen some and weaken other connections in and between our maps, and thereby start a re-interpretation of our perception of reality (Freud's Nachträglichkeit is an example). At some point, the acquisition of a new kind of memory leads to a conceptual explosion. As a result, concepts of the self, the past, and the future (higher-order consciousness) can be connected to primary consciousness: self-consciousness, culture, and 'consciousness of consciousness' become possible. Bright Air, Brilliant Fire is a book of astonishing variety and range, which runs from philosophy to biology to psychology to neural modeling, and attempts to synthesize them into a unified whole. It helps us understand, guide, and direct our own mind (including our psyche) in that it presents a structure with which to evaluate experience. Since one way to experience the world is through works of art, Edelman's Neural Darwinism gives us a method of discussing the effect of the arts on our mind.

Brief overview of the first steps of the brain's neural evolution.

Schematic of the Ladder of Evolution to Consciousness, drawn by Jochen Gruber to help with reading Oliver Sacks's essay.

Table of Contents
1.
Model of Basic Processes
1.1 Darwinian Selection in the Immune System and Brain
1.2 Values
1.3 Developmental Selection
1.4 Experiential Selection
1.5 Summary and Experimental Confirmation of Neuro-Evolution in Psychology
1.6 Basic Building Blocks of the Theory: Maps (Categorizations) and Their Communication (Re-entrant Signaling)
1.7 Visualize the Brain as an Orchestra without Conductor Playing its Own Music
2. Memory: A Biological Model of the Development of Consciousness
2.1 Primary Consciousness and Scenes
2.2 Higher-Order Consciousness: Self-Consciousness and Culture
3. Clinical Evidence
4. DARWIN and NOMAD, the Computer Creatures

1. Model of Basic Processes

.... In his latest book, Bright Air, Brilliant Fire, the neuroscientist Gerald Edelman speaks of the fragmentation (of our views about the brain, Jochen Gruber) ... The picture of psychology was a mixed one: behaviorism, gestalt psychology, psychophysics, and memory studies in normal psychology; studies of the neuroses by Freudian analysis; clinical studies of brain lesions and motor and sensory defects ... and a growing knowledge both of neuroanatomy and the electrical behavior of nerve cells in physiology ... Only occasionally were serious efforts made ... to connect these disparate areas in a general way.

Gerald Edelman, Giulio Tononi, "Consciousness: How Matter Becomes Imagination", Part III: Mechanisms of Consciousness: The Darwinian Perspective, Chapter 7: Selectionism, Degeneracy (pp. 86, 87), Penguin Books, 2000: "All selectional systems share a remarkable property that is as unique as it is essential to their functioning: In such systems, there are typically many different ways, not necessarily structurally identical, by which a particular output occurs. We call this property degeneracy.
Degeneracy is seen in quantum mechanics in certain solutions of the Schrödinger equation and in the genetic code, where, because of the degenerate third position in triplet code words, many different DNA sequences can specify the same protein. Put briefly, degeneracy is reflected in the capacity of structurally different components to yield similar outputs or results. In a selectional nervous system, with its enormous repertoire of variant neural circuits even within one brain area, degeneracy is inevitable. Without it, a selectional system, no matter how rich its diversity, would rapidly fail -
• in a species, almost all mutations would be lethal;
• in an immune system, too few antibody variants would work; and
• in the brain, if only one network path was available, signal traffic would fail.
Degeneracy can operate at one level of organization or across many. It is seen
• in gene networks (e.g. combinations of different genes can lead to the same structure),
• in the immune system (antibodies with different structures can recognize the same foreign molecule equally well),
• in the brain, and
• in evolution itself (different living forms can evolve to be equally well adapted to a specific environment).
There are countless examples of degeneracy in the brain. The complex meshwork of connections in the thalamocortical system assures that a large number of different neuronal groups can similarly affect, in one way or another, the output of a given subset of neurons. For example, a large number of different brain circuits can lead to the same motor output or action. Localized brain lesions often reveal alternative pathways that are capable of generating similar behaviors. Therefore, a manifest consequence of degeneracy in the nervous system is that certain neurological lesions may often appear to have little effect, at least within a familiar environment. Degeneracy also appears at the cellular level.
Neural signaling mechanisms utilize a great variety of transmitters, receptors, enzymes, and so-called second messengers. The same changes in gene expression can be brought about by different combinations of these biochemical elements. Degeneracy is not just a useful feature of selectional systems; it is also an unavoidable consequence of selectional mechanisms. Evolutionary selective pressure is typically applied to individuals at the end of a long series of complex events. These events involve many interacting elements on multiple temporal and spatial scales, and it is unlikely that well-defined functions can be neatly assigned to independent subsets of elements or processes in biological networks. For example, if selection occurs for our ability to walk in a particular way, connections within and among many different brain structures and to the musculoskeletal apparatus are all likely to be modified over time. While locomotion will be affected, many other functions, including our ability to stand or jump, will also be influenced as a result of the degeneracy of neural circuits. The ability of natural selection to give rise to a large number of nonidentical structures yielding similar functions increases both the robustness of biological networks and their adaptability to unforeseen environments."

1.1 Darwinian Selection in the Immune System and Brain

Edelman's early work dealt not with the nervous system, but with the immune system, by which all vertebrates defend themselves against invading bacteria and viruses. It was previously accepted that the immune system "learned", or was "instructed", by means of a single type of antibody which molded itself around the foreign body, or antigen, to produce an appropriate, "tailored" antibody. These molds then multiplied and entered the bloodstream and destroyed the alien organisms.
But Edelman showed that a radically different Darwinian selective mechanism was at work; that we possess not one basic kind of antibody, but millions of them, an enormous repertoire of antibodies, from which the invading antigen "selects" one that fits. It is such a selection, rather than a direct shaping or instruction, that leads to the multiplication of the appropriate antibody and the destruction of the invader. Such a mechanism, called "clonal selection", was suggested in 1959 by Macfarlane Burnet, but Edelman was the first to demonstrate that such a "Darwinian" mechanism actually occurs, and for this he shared a Nobel Prize in 1972. Edelman then began to study the nervous system, to see whether a similar selectional mechanism might be at work there. Both the immune system and the nervous system can be seen as systems for recognition. ... The nervous system is roughly analogous, but far more demanding. ... How does an animal come to recognize and deal with the novel situation it confronts? How is such individual development possible? The answer, Edelman proposes, is that an evolutionary process takes place - not one that selects organisms and takes millions of years, but one that occurs within each particular organism and during its lifetime, by competition among cell groups in the brain. This for Edelman is "somatic selection"....

1.2 Values

What is the world of a newborn infant animal like? Is it a sudden incomprehensible (perhaps terrifying) explosion of electromagnetic radiations, sound waves, and chemical stimuli which make the infant cry and sneeze? Or an ordered intelligible world, in which the infant discerns people, objects, meanings, and smiles? We know that the world encountered is not one of complete meaninglessness and pandemonium, for the infant shows selective attention and preferences from the start, due to genetic instructions and biases.
Clearly there are some innate biases or dispositions at work; otherwise the infant would have no tendencies whatever, would not be moved to do anything, seek anything, to stay alive. These basic biases are among the first "values" we have, as Edelman calls them. Values - simple drives, instincts, intentionalities - serve as the tools we need for adaptation and survival. It needs to be stressed that "values" are experienced, internally, as feelings - without feeling there can be no animal life. "Thus", in the words of the late philosopher Hans Jonas, "the capacity for feeling, which arose in all organisms, is the mother-value of all."

1.3 Developmental Selection

Developmental selection takes place largely before birth. The genetic instructions in each organism provide general constraints for neural development, but they cannot specify the exact destination of each developing nerve cell - for these grow and die, migrate in great numbers and in entirely unpredictable ways: all of them are "gypsies", as Edelman likes to say. The vicissitudes of fetal development themselves produce in every brain unique patterns of neurons and neuronal groups. Even identical twins with identical genes will not have identical brains at birth: the fine details of cortical circuitry will be quite different. Such variability, Edelman points out, would be a catastrophe in virtually any mechanical or computational system, where exactness and reproducibility are of the essence. But in a system in which selection is central, the consequences are entirely different: here variation and diversity are themselves of the essence, the basis on which Darwinism acts.

1.4 Experiential Selection

Now, already possessing a unique and individual pattern of neuronal groups through developmental selection, the creature is born, thrown into the world, there to be exposed to a new form of selection, selection based on experience.
Since the infant instinctively values food, warmth, and contact with other people (for example), this will direct its first movements and strivings. These "values" serve to differentially weight experience, to orient the organism towards survival and adaptation, to allow what Edelman calls "categorization on value", e.g. to form simple, basic categories such as "edible" and "nonedible" as part of the process of getting food.

1.5 Summary and Experimental Confirmation of Neuro-Evolution in Psychology

Thus, in summary, at an elementary physiological level, there are various sensory and motor "givens",
• from the reflexes that automatically occur (for example, in response to pain)
• to innate mechanisms in the brain, as, for example, the feature detectors in the visual cortex which, as soon as they are activated, detect verticals, horizontals, boundaries, angles, etc., in the visual world.
We have a certain amount of basic equipment; but, in Edelman's view, very little else is programmed or built in. It is up to the infant animal, given its elementary physiological capacities, and given its inborn values,
• to create its own categories and
• to use them to make sense of, to construct, a world - and it is not just a world that the infant constructs, but its own world, a world constituted from the first by personal meaning and reference.
Such a neuro-evolutionary view is highly consistent with some of the conclusions of psychoanalysis and developmental psychology - in particular the psychoanalyst Daniel Stern's description of "an emergent self". "Infants seek sensory stimulation," writes Stern. "They have distinct biases or preferences with regard to the sensations they seek. ... These are innate. From birth on, there appears to be a central tendency to form and test hypotheses about what is occurring in the world ... (to) categorize ... into conforming and contrasting patterns, events, sets, and experiences."
Stern emphasizes how crucial are the active processes of connecting, correlating, and categorizing information, and how with these a distinctive organization emerges, which is experienced by the infant as the sense of a self.

1.6 Basic Building Blocks of the Theory: Maps (Categorizations) and Their Communication (Re-entrant Signaling)

It is precisely such processes that Edelman is concerned with. He sees them as grounded in a process of selection acting on the primary neuronal units with which each of us is equipped. These units are not individual nerve cells or neurons, but groups ranging from about fifty to ten thousand neurons; there are perhaps a hundred million such groups in the entire brain. During the development of the fetus, a unique neuronal pattern of connections is created, and then, starting with the infant stage, experience acts upon this pattern, modifying it by selectively strengthening or weakening connections between neuronal groups, or creating entirely new connections. Thus experience itself is not passive, a matter of "impressions" or "sense-data", but active, and constructed by the organism from the start. Active experience "selects", or carves out, a new, more complexly connected pattern of neuronal groups, a neuronal reflection of the child's individual experience, of the procedures by which it has come to categorize reality. But these neuronal circuits are still at a low level - how do they connect with the inner life, the mind, the behavior of the creature? It is at this point that Edelman introduces the most radical of his concepts - the concept of the "map". A map, as he uses the term, is not a representation in the ordinary sense. ... The creation of maps, Edelman postulates, involves the synchronization of hundreds of neuronal groups. Some mappings (called "categorizations") take place in the discrete and anatomically fixed (or "prededicated") parts of the cerebral cortex - thus color is "constructed" in an area called V4.
The visual system alone, for example, has over thirty different maps for representing color, movement, shape, etc. But where perception of objects is concerned, the world, Edelman likes to say, is not "labeled", it does not come "already parsed (divided) into objects". We must make the objects, in effect, through our own categorizations: "Perception makes", Emerson said. "Every perception", says Edelman, echoing Emerson, "is an act of creation." In other words: our sense organs, as we move about, take samplings of the world, creating maps in the brain. Then a sort of neuronal "survival of the fittest" occurs, a selective strengthening of those mappings which correspond to "successful" perceptions - successful in that they prove the most useful and powerful for the building of "reality". In this view, there are no innate mechanisms for complex "personal" recognition, such as the "grandmother cell" postulated by researchers in the 1970s to correspond to one's perception of one's grandmother. Nor is there any "master area", or "final common path", whereby all perceptions relating (say) to one's grandmother converge in one single place. There is no such place in the brain where a final image is synthesized, nor any miniature person or homunculus to view this image. Rather, the perception of a grandmother or, say, of a chair depends on the synchronization of a number of scattered mappings throughout the visual cortex - mappings relating to many different perceptual aspects of the chair (its size, its shape, its color, its "leggedness", its relation to other sorts of chairs - armchairs, kneeling chairs, baby chairs, etc.). In this way the brain, the creature, achieves a rich and flexible percept of "chairhood", which allows the recognition of innumerable sorts of chairs as chairs (computers, by contrast, with their need for unambiguous definitions and criteria, are quite unable to achieve this). This perceptual generalization is dynamic, i.e. it can change with time.
Information coded into different maps must be correlated with one another. Such a correlation is possible because of the very rich connections between the brain's maps - connections which are reciprocal, and may contain millions of fibres. These extensive connections allow what Edelman calls "re-entrant signaling", which enables a coherent construct such as "chair" to be made. This construct arises from the interaction of many sources. Stimuli from, say, touching a chair may affect one set of maps, stimuli from seeing it may affect another set. Re-entrant signaling takes place between the two sets of maps - and between many other maps as well - as part of the process of perceiving a chair. It must be emphasized once again: this construct of an object or a part of reality

• is not comparable to a single static image or representation;
• it is, rather, comparable to a giant and continually modulating equation.

The outputs of innumerable maps, connected by re-entry, not only complement one another at a perceptual level but are built up to higher and higher levels. For the brain, in Edelman's vision, makes maps of its own maps, or "categorizes its own categorizations", and does so by a process which can ascend indefinitely to yield ever more generalized pictures of the world. This re-entrant signaling is more than a feedback process that corrects errors. (Such simple feedback loops are common both in the technological world - in thermostats, governors, cruise controls, etc. - and in the nervous system, where they control the body's automatic functions, such as temperature, blood pressure and fine control of movement.) At higher levels, where flexibility and individuality are all-important, and where new powers and new functions are needed and created, one requires "re-entrant signaling" to be a mechanism capable of constructing, not just controlling or correcting.
The process of re-entrant signaling, with its multitude of reciprocal connections within and between maps, may be likened to a sort of neural United Nations, in which dozens of voices are talking together, while including in their conversation a variety of constantly inflowing reports from the outside world, giving them coherence, bringing them together into a larger picture as new information is correlated and new insights emerge. There is, to continue the metaphor, no secretary-general in the brain; the activity of re-entrant signaling itself achieves the synthesis. How is this possible? Edelman, who himself once planned to be a concert violinist, uses musical metaphors here. "Think", he said in a recent BBC radio broadcast, "if you had a hundred thousand wires connecting four string quartet players and that, even though they weren't speaking words, signals were going back and forth in all kinds of hidden ways (as you usually get them by the subtle nonverbal interactions between the players) that make the whole set of sounds a unified ensemble. That's how the maps in the brain work by re-entry." The players are connected. Each player, interpreting the music individually, constantly modulates and is modulated by the others. There is no final or "master" interpretation - the music is collectively created. (Gerald Edelman on re-entrant signaling, lecture given in 2006.) This, then, is Edelman's picture of the brain: an orchestra, an ensemble - but without a conductor, an orchestra which makes its own music. Thus two basic operations stand at the beginning of psychic development; they far precede concept formation, language, and consciousness, yet they are a prerequisite for all of these - the beginning of an enormous upward path, one which can achieve remarkable power even in relatively primitive animals like birds.
An example: if pigeons are presented with photographs of trees, or oak leaves, or fish, surrounded by extraneous features, they rapidly learn to "home in" upon these, and to generalize, so that they can thereafter recognize any trees, or oak leaves, or fish straightaway, however distracting or confusing the context may be. It is clear from these experiments that perception selects, or rather creates, "defining" features (what counts as "defining" may be different for each pigeon), and cognitive categories, without the use of language, or being "told" what to do. Such category-creating behavior (which Edelman calls "noetic") is very different from the rigid, algorithmic procedures used by robots. (These experiments with pigeons are described in detail in Neural Darwinism, pp. 247-251.) Perceptual categorization, whether of colors, movements, or shapes, is the first step, and it is crucial for learning, but it is not something fixed, something that occurs once and for all. On the contrary - and this is central to the dynamic picture presented by Edelman - there is then a continual re-categorization, and this itself constitutes memory. "In computers", Edelman writes, "memory depends on the specification and storage of bits of coded information." This is not the case in the nervous system. Memory in living organisms, by contrast, takes place through activity and continual re-categorization. "By its nature, memory . . . involves continual motor activity. . . . In different contexts, a given categorical response in memory may be achieved in several ways. Unlike computer-based memory, brain-based memory is inexact, but it is also capable of great degrees of generalization."

2. Memory: A Biological Model of the Development of Consciousness

In the extended Theory of Neuronal Group Selection, which he has developed since 1987, Edelman has been able, in a very economical way, to accommodate all the "higher" aspects of mind - concept formation, language, consciousness itself - without bringing in any additional considerations. Edelman's most ambitious project, indeed, is to try to delineate a possible biological basis for consciousness. He distinguishes, first, "primary" from "higher-order" consciousness.

2.1 Primary Consciousness and Scenes

The essential achievement of primary consciousness, as Edelman sees it, is to bring together into a scene the many categorizations involved in perception. The advantage of this is that "events that may have had significance to an animal's past learning can be related to new events." The relation established will not be a causal one, not necessarily related to anything in the outside world; it will be an individual (or "subjective") one, based on what has had "value" or "meaning" for the animal in the past. Edelman proposes that the ability to create scenes in the mind depends upon the emergence of a new neuronal circuit during evolution, a circuit allowing for continual re-entrant signaling between value-category memory and current perceptual categorization. This "bootstrapping process" (as Edelman calls it) goes on in all the senses, thus allowing for the construction of a complex scene. The "scene", one must stress, is an individual construction, not a copy of the world. Mammals, birds, and some reptiles, Edelman speculates, have such a scene-creating primary consciousness; and such consciousness is "efficacious"; it helps the animal adapt to complex environments. Without such consciousness, life is lived at a much lower level, with far less ability to learn and adapt. Primary consciousness (Edelman concludes) is required for the evolution of higher-order consciousness. But it is limited - like our consciousness in a dream - to a small memorial interval around a time chunk I call the present.
An animal with primary consciousness sees the room the way a beam of light illuminates it. Only that which is in the beam is explicitly in the remembered present; all else is darkness. This does not mean that an animal with primary consciousness cannot have long-term memory or act on it. Obviously it can, but it cannot, in general, be aware of that memory or plan an extended future for itself based on that memory. Again, we know this from our dreams.

2.2 Higher-Order Consciousness: Self-consciousness and Culture

Only in ourselves - and to some extent in apes - does a higher-order consciousness emerge. Higher-order consciousness arises from primary consciousness - it supplements it, it does not replace it. It is dependent on the evolutionary development of language, along with the evolution of symbols, of cultural exchange; and with all this comes an unprecedented power of detachment, generalization, and reflection, so that finally self-consciousness is achieved, the consciousness of being a self in the world, with human experience and imagination to call upon. Works of art make use of this higher-order consciousness, by weakening or strengthening connections between scenes. The most difficult and tantalizing portions of Bright Air, Brilliant Fire are about how this higher-order consciousness is achieved and how it emerges from primary consciousness. No other theorist I know of has even attempted a biological understanding of this step. To become conscious of being conscious, Edelman stresses, systems of memory must be related to a representation of a self. This is not possible unless the contents, the "scenes", of primary consciousness are subjected to a further process and are themselves re-categorized.
Though language, in Edelman's view, is not crucial for the development of higher-order consciousness - there is some evidence of higher-order consciousness and self-consciousness in apes - it immensely facilitates and expands it by making possible previously unattainable conceptual and symbolic powers. Thus two steps, two re-entrant processes, are envisaged here: the first linking value-category memory to current perception, the second a higher, further turn of re-entry upon this. The effects of this are momentous: "The acquisition of a new kind of memory," Edelman writes, "...leads to a conceptual explosion. As a result, concepts of the self, the past, and the future can be connected to primary consciousness. 'Consciousness of consciousness' becomes possible." At this point Edelman makes explicit what is implicit throughout his work - the interaction of "neural Darwinism" with classical Darwinism. What occurs "explosively" in individual development must have been equally critical in evolutionary development. Thus "at some transcendent moment in evolution", Edelman writes, there emerged "a variant with a re-entrant circuit linking value-category memory to current perceptions". "At that moment", Edelman continues, "a memory became the substrate and servant of consciousness." And then, at another transcendent moment, by another, higher turn of re-entry, higher-order consciousness arose. There is indeed much paleontological evidence that higher-order consciousness developed in an astonishingly short space of time - some tens (perhaps hundreds) of thousands of years, not the many millions usually needed for evolutionary change. The speed of this development has always been a most formidable challenge for evolutionary theorists - Darwin himself could offer no detailed account of it, and Wallace was driven back to thoughts of a grand design. But Edelman, drawing from his own observations of cell and tissue development detailed in his earlier book Topobiology, is able to suggest how it might have come about.
The principles underlying brain development and the mechanisms outlined in the Theory of Neuronal Group Selection can, he argues, account for this rapid emergence, since they allow for enormous changes in brain size over the relatively short evolutionary period in which Homo sapiens emerged. According to topobiology, relatively large changes in the structure of the brain can occur through changes in the genes that regulate the brain's morphology - changes that can come about as the result of relatively few mutations. And the premises of the Theory of Neuronal Group Selection allow for the rapid incorporation into existing brain structures of new and enlarged neuronal maps with a variety of functions. This interweaving of concept and observation typifies the ambition and the grandeur of Edelman's thought. His two chapters on consciousness are the most original, the most exhilarating, and the most difficult in the entire book - but they achieve, or aspire to achieve, what no other theorist has even tried to do: a biologically plausible model of how consciousness could have emerged.

3. Clinical Evidence

A sense of excitement runs through all of Edelman's books. "We are at the beginning of the neuroscientific revolution", he writes in the preface to Bright Air, Brilliant Fire. "At its end, we shall know how the mind works, what governs our nature, and how we know the world." This century, as he observes, has been rich in theories - going all the way from psychophysics to psychoanalysis - but all these have been partial. New theories arise from a crisis in scientific understanding, when there is an acute incompatibility between observations and existing theories. There are many such crises in neuroscience today.
Edelman, with his background in morphology and development, speaks of the "structural" crisis: the now well-established fact that there is no precise wiring in the brain, that there are vast numbers of unidentifiable inputs to each cell, and that such a jungle of connections is incompatible with any simple computational theory. He is moved, as William James was, by the apparently seamless quality of experience and consciousness - the unitary appearance of the world to a perceiver despite (as we have seen in regard to vision) the multitude of discrete and parallel systems for perceiving it - and by the fact that some integrating or unifying or "binding" must occur, which is totally inexplicable by any existing theory. Since the Theory of Neuronal Group Selection was first formulated, important new evidence has emerged suggesting how widely separated groups of neurons in the visual cortex can become synchronized and respond in unison when an animal is faced with a new perceptual task - a finding directly suggestive of re-entrant signaling. (I discussed this work in an earlier article, "Neurology and the Soul", The New York Review of Books, November 20, 1990.) There is also much evidence of a more clinical sort, which one feels may be illuminated, and perhaps explained, by the Theory of Neuronal Group Selection. I often encounter situations in day-to-day neurological practice which completely defeat classical neurological explanations, which cry out for explanations of a radically different kind, and which are clarified by Edelman's theory. (Some of these situations are discussed by Israel Rosenfield in his new book The Strange, Familiar and Forgotten, where he speaks of "the bankruptcy of classical neurology".) Thus if a spinal anesthetic is given to a patient - as used to be done frequently to women in childbirth - there is not just a feeling of numbness below the waist.
There is, rather, the sense that one terminates at the umbilicus, that one's corporeal self has no extension below this, and that what lies below is not-self, not-flesh, not-real, not-anything. The anesthetized lower half has a bewildering nonentity; it completely lacks meaning and personal reference. The baffled mind is unable to categorize it, to relate it in any way to the self. One knows that sooner or later the anesthetic will wear off, yet it is impossible to imagine the missing parts in a positive way. There is an absolute gap in primary consciousness which higher-order consciousness can report, but cannot correct. This indeed is a situation I know well from personal no less than clinical experience, for it is what I experienced in myself after a nerve injury to one leg, when for a period of two weeks, while the leg lay immobile and senseless, I found it "alien", not me, not real. I was astonished when this happened, and unassisted by my neurological knowledge - the situation was clearly neurological, but classical neurology has nothing to say about the relation of sensation to knowledge and to "self"; about how, normally, the body is "owned"; and how, if the flow of neural information is impaired, it may be lost to consciousness, and "disowned" - for it does not see consciousness as a process. Such disturbances of body-image and body-ego can be fully understood, in Edelman's thinking, as breakdowns in local mapping, consequent upon nerve damage or disuse. It has been confirmed, further, in animal experiments that the mapping is not something fixed, but plastic and dynamic, and dependent upon a continual inflow of experience and use; and that if there is continuing interference with, say, the perception of a limb or its use, there is not only a rapid loss of its cerebral map, but a rapid remapping of the rest of the body which then excludes the limb itself.
Stranger still are the situations which arise when the cerebral basis of body-image is affected, especially if the right hemisphere of the brain is badly damaged in its sensory areas. At such times patients may show an "anosognosia", an unawareness that anything is the matter, even though the left side of the body may be senseless, and perhaps paralyzed, too. Or they may show a strange levity, insisting that their own left sides belong to "someone else". Such patients may behave (as an eminent neurologist, M.-M. Mesulam, has written) ". . . as if one half of the universe had abruptly ceased to exist . . . as if nothing were actually happening [there] . . . as if nothing of importance could be expected to occur there." Such patients live in a hemispace, a bisected world, but for them, subjectively, their space and world is entire. Anosognosia is unintelligible (and was for years misinterpreted as a bizarre neurotic symptom) unless we see it (in Edelman's term) as a "disease of consciousness", a total breakdown of high-level re-entrant signaling and mapping in one hemisphere - the right hemisphere, which, Edelman suggests, may have only primary but no higher-order consciousness - and a radical reorganization of consciousness in consequence. Less dramatic than these complete disappearances of self or parts of the self from consciousness, but still remarkable in the extreme, are situations in which, following a neurological lesion, a dissociation occurs between perception and consciousness, or memory and consciousness - cases in which there remains only "implicit" perception or knowledge or memory. Thus my amnesiac patient Jimmie ("The Lost Mariner") had no explicit memory of Kennedy's assassination, and would indeed say, "No president in this century has been assassinated, that I know of."
But if asked, "Hypothetically, then, if a presidential assassination had somehow occurred without your knowledge, where might you guess it occurred: New York, Chicago, Dallas, New Orleans, or San Francisco?", he would invariably "guess" correctly: Dallas. Similarly, patients with visual agnosias, like Dr. P. ("The Man Who Mistook His Wife for a Hat"), while not consciously able to recognize anyone, often "guess" the identity of people's faces correctly. And patients with cortical blindness, from massive bilateral damage to the primary visual areas of the brain, while asserting that they can see nothing, may also mysteriously "guess" correctly what lies before them - so-called "blindsight". In all these cases, then, we find that perception, and perceptual categorization of the kind described by Edelman, has been preserved, but has been divorced from consciousness. In such cases it appears to be only the final process, in which the re-entrant loops combine memory with current perceptual categorization, that breaks down. Their understanding, so elusive hitherto, seems to come closer with Edelman's "re-entrant" model of consciousness. Dissatisfaction with the classical theories is not confined to clinical neurologists; it is also to be found among theorists of child development, among cognitive and experimental psychologists, among linguists, and among psychoanalysts. All find themselves in need of new models. This was abundantly clear in May of 1992, at an exciting conference on "Selectionism and the Brain" held at the Neurosciences Institute in New York and attended by prominent workers in all of these fields. Particularly suggestive was the work of Esther Thelen and her colleagues at Indiana University in Bloomington, who have for some years been making a minute analysis of the development of motor skills in infants - watching them reaching for objects. "For the developmental theorist," Thelen writes, "individual differences pose an enormous challenge....
Developmental theory has not met this challenge with much success." And this is, in part, because individual differences are seen as extraneous, whereas Thelen argues that it is precisely such differences, the huge variation between individuals, that allow the evolution of unique motor patterns. Thelen found that the development of such skills, as Edelman's theory would suggest, follows no single programmed or prescribed pattern. Indeed there is great variability among infants at first, with many patterns of reaching for objects; but there then occurs, over the course of several months, a competition among these patterns, a discovery or selection of workable patterns, of workable motor solutions. These solutions, though roughly similar (for there are a limited number of ways in which an infant can reach), are always different and individual, adapted to the particular dynamics of each child, and they emerge by degrees, through exploration and trial. Each child, Thelen showed, explores a rich range of possible ways to reach for an object and selects its own path, without the benefit of any blueprint or program. The child is forced to be original, to create its own solutions. Such an adventurous course carries its own risks - the child may evolve a bad motor solution - but sooner or later such bad solutions tend to destabilize, break down, and make way for further exploration, and better solutions. Similar considerations arise with regard to recovery and rehabilitation after strokes and other injuries. There are no rules; there is no prescribed path to recovery; every patient must discover or create his own motor and perceptual patterns, his own solutions to the challenges that face him; and it is the function of a sensitive therapist to help him in this. This is well understood in the practice of "functional integration", pioneered by Moshe Feldenkrais, and used increasingly both in rehabilitation after injury and in the training of dancers and athletes.
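The competition among reaching patterns that Thelen describes - initial variability, selective strengthening of workable solutions, destabilization of bad ones - has the shape of a simple selectional algorithm. The following is my own toy illustration of that shape, not Thelen's model; the number of candidate patterns, the success probabilities, and the strengthening rates are all invented.

```python
import random

def develop_reaching(seed, n_patterns=6, n_trials=500):
    """Toy selectional development of a reaching skill for one 'child'."""
    rng = random.Random(seed)
    # Each candidate pattern has an (unknown) chance of actually reaching
    # the object - different for each child's individual dynamics.
    success_prob = [rng.uniform(0.1, 0.9) for _ in range(n_patterns)]
    strength = [1.0] * n_patterns        # initial variability: all patterns tried

    for _ in range(n_trials):
        # Explore: choose a pattern in proportion to its current strength.
        pattern = rng.choices(range(n_patterns), weights=strength)[0]
        if rng.random() < success_prob[pattern]:
            strength[pattern] *= 1.1     # workable solution: strengthened
        else:
            strength[pattern] *= 0.9     # bad solution: destabilizes
    return strength.index(max(strength)) # the child's own selected solution

# Each simulated child converges on its own, individual solution,
# without any blueprint or program.
solutions = [develop_reaching(seed) for seed in range(5)]
print(solutions)
```

No pattern is prescribed in advance: which solution wins depends on the interplay of exploration and each child's particular dynamics, which is why different seeds (different "children") can settle on different solutions.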
"One cannot teach a person how to organize movement or how to perceive", writes Carl Ginsburg, a leading Feldenkrais teacher. "We need a system that organizes itself as it experiences . . . a system that has both stability and extraordinary plasticity to shift with changing circumstances. It is a system that is exceedingly difficult to model." Ginsburg feels that the Theory of Neuronal Group Selection is closest to the model required ("The Roots of Functional Integration, Part III: The Shift in Thinking", The Feldenkrais Journal, No. 7 (Winter 1992), pp. 34-47). When Thelen tries to envisage the neural basis of such learning, she uses terms very similar to Edelman's: she sees a "population of movements" being selected or "pruned" by experience. She writes of infants "remapping" the neuronal groups that are correlated with their movements, and "selectively strengthening particular neuronal groups". She has, of course, no direct evidence for this, and such evidence cannot be obtained until we have a way of visualizing vast numbers of neuronal groups simultaneously in a conscious subject, and following their interactions for months on end. No such visualization is possible at the present time, but it will perhaps become possible by the end of the decade. Meanwhile, the close correspondence between Thelen's observations and the kind of behavior that would be expected from Edelman's theory is striking. If Esther Thelen is concerned with direct observation of the development of motor skills in the infant, Arnold Modell of Harvard, at the same conference, was concerned with psychoanalytical interpretations of early behavior; he too felt, like Thelen, that a crisis had developed, but that it might also be resolved by the Theory of Neuronal Group Selection - indeed, the title of his paper was "Neural Darwinism and a Conceptual Crisis in Psychoanalysis".
The particular crisis he spoke of was connected with Freud's concept of Nachträglichkeit, the retranscription of memories which had become part of pathological fixations but were opened to consciousness, to new contexts and reconstructions, as a crucial part of the therapeutic process of liberating the patient from the past, and allowing him to experience and move freely once again. This process cannot be understood in terms of the classical concept of memory, in which a fixed record or trace or representation is stored in the brain - an entirely static or mechanical concept - but requires a concept of memory as active and "inventive" (see Israel Rosenfield, The Invention of Memory: A New View of the Brain, Basic Books, 1991). That memory is essentially constructive (as Coleridge insisted, nearly two centuries ago) was shown experimentally by the great Cambridge psychologist Frederic Bartlett. "Remembering", he wrote, "is not the re-excitation of innumerable fixed, lifeless and fragmentary traces. It is an imaginative reconstruction, or construction, built out of the relation of our attitude toward a whole mass of organized past reactions or experience." It was just such an imaginative, context-dependent construction or reconstruction that Freud meant by Nachträglichkeit - but this, Modell emphasizes, could not be given any biological basis until Edelman's notion of memory as re-categorization. Beyond this, Modell as an analyst is concerned with the question of how the self is created, the enlargement of self through finding, or making, personal meanings. Such a form of inner growth, so different from "learning" in the usual sense, may also, he feels, find its neural basis in the formation of ever-richer but always self-referential maps in the brain, and their incessant integration through re-entrant signaling, as Edelman has described it.
Modell's ideas have been set out in full in Other Times, Other Realities (Harvard University Press, 1990), and in a forthcoming book, The Private Self (Harvard University Press, 1993). Others too - cognitive psychologists and linguists - have become intensely interested in Edelman's ideas, in particular in the implication of the extended Theory of Neuronal Group Selection that the exploring child, the exploring organism, seeks (or imposes) meaning at all times, that its mappings are mappings of meaning, that its world and (if higher consciousness is present) its symbolic systems are constructed of "meanings". When Jerome Bruner and others launched the "cognitive revolution" in the mid-1950s, this was in part a reaction to behaviorism and other "isms" which denied the existence and structure of the mind. The cognitive revolution was designed "to replace the mind in nature", to see the seeking of meaning as central to the organism. In a recent book, Acts of Meaning (Harvard University Press, 1990), Bruner describes how this original impetus was subverted, and replaced by notions of computation, information processing, etc., and by the computational (and Chomskian) notion that the syntax of a language could be separated from its semantics. But, as Edelman writes, it is increasingly clear, from studying the natural acquisition of language in the child, and, equally, from the persistent failure of computers to "understand" language, its rich ambiguity and polysemy, that syntax cannot be separated from semantics. It is precisely through the medium of "meanings" that natural language and natural intelligence are built up. From Boole, with his "Laws of Thought" in the 1850s, to the pioneers of Artificial Intelligence at the present day, there has been a persistent notion that one may have an intelligence or a language based on pure logic, without anything so messy as "meaning" being involved.
That this is not the case, and cannot be the case, may now find a biological grounding in the Theory of Neuronal Group Selection.

4. DARWIN and NOMAD, the Computer Creatures

None of this, however, can yet be proved - we have no way of seeing neuronal groups or maps or their interactions, no way of listening in to the re-entrant orchestra of the brain. Our capacity to analyze the living brain is still far too crude. Partly for this reason, researchers in neuroscience, Edelman among them, have felt it necessary to simulate the brain, and the power of computers and supercomputers makes this more and more possible. One can endow one's simulated neurons with physiologically realistic properties, and allow them to interact in physiologically realistic ways. Edelman and his colleagues at the Neurosciences Institute have been deeply interested in such "synthetic neural modeling", and have devised a series of "synthetic animals", or artifacts, designed to test the Theory of Neuronal Group Selection. Although these "creatures" - which have been named DARWIN I, II, III, and IV - make use of supercomputers, their behavior (if one may use the word) is not programmed, not robotic, in the least, but (in Edelman's word) "noetic". They incorporate both a selectional system and a primitive set of "values" - for example, that light is better than no light - which generally guide behavior but do not determine it or make it predictable. Unpredictable variations are introduced in both the artifact and its environment, so that it is forced to create its own categorizations. DARWIN IV, or NOMAD, with its electronic eye and snout, has no "goal", no "agenda", but resides in a sort of pen, a world of varied simple objects (with different colors, shapes, textures, weights). [Illustration: NOMAD, an adaptive device constructed by Gerald M. Edelman and his colleagues at the Neurosciences Institute, in its environment.]
NOMAD is controlled by a computer-simulated "selectionist nervous system". It has a TV-camera "eye" and a snout with electrical "taste" sensors. Synapses in its simulated brain change with experience, so that NOMAD learns to approach and taste blocks. After forming an association between taste and color, it avoids bad-tasting blocks (blue) but collects tasty ones (red). True to its name, it wanders around like a curious infant, exploring these objects, reaching for them, classifying them, building with them, in a spontaneous and idiosyncratic way (the movement of the artifact is exceedingly slow, and one needs time-lapse photography to bring home its creatural quality). No two "individuals" show identical behavior - and the details of their reachings and learnings cannot be predicted, any more than Thelen can predict the development of her infants. If their value circuits are cut, the artifacts show no learning, no "motivation", no convergent behavior at all, but wander around in an aimless way, like patients who have had their frontal lobes destroyed. Since the entire circuitry of these DARWINs is known, and can be seen functioning on the screen of a supercomputer, one can continuously monitor their inner workings, their internal mappings, their re-entrant signalings - one can see how they sample their environment, how the first, vague, tentative percepts emerge, and how, with hundreds of further samplings, these evolve and become recognizable, refined models of reality, following a process similar to that projected by Edelman's theory. Normally one is not aware of the brain's almost automatic generation of "perceptual hypotheses" (in Richard Gregory's terms) and their refinement through a process of repeated samplings and testing. But under certain circumstances, as in recovery after acute nerve injury, one may become vividly aware of these normally unconscious (and sometimes exceedingly rapid) operations.
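The kernel of NOMAD's taste-color learning - a built-in value signal ("good taste is better than bad taste") selectively adjusting connection strengths until color alone predicts taste - can be caricatured in a few lines. This is my own drastic simplification for illustration, not Edelman's actual circuitry; the two-color world, the learning rate, and the "approach weight" are all invented.

```python
import random

random.seed(1)

# Hidden property of the world: red blocks taste good, blue taste bad.
TASTE = {"red": +1.0, "blue": -1.0}

# Learned "synaptic" weights linking each color to approach behavior.
approach = {"red": 0.0, "blue": 0.0}

def encounter(color, approach, rate=0.3):
    """Wander to a block and taste it.

    The innate value signal (the taste) modulates the weight for the
    block's color - selection by value, not explicit programming.
    """
    value = TASTE[color]
    approach[color] += rate * (value - approach[color])

# A "curious infant" phase of wandering and tasting.
for _ in range(30):
    encounter(random.choice(["red", "blue"]), approach)

# After learning, color alone drives behavior: collect red, avoid blue.
print(approach["red"], approach["blue"])
```

Note that nothing in the update rule mentions "red" or "blue" specifically: the color-taste association is carved out of experience by the value signal, which is why cutting the value circuit (setting `value` to zero, in this caricature) would leave the weights at zero and the behavior aimless.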
I give a personal example of this in A Leg to Stand On. Seeing the DARWINs, especially DARWIN IV, at work can induce a curious state of mind. Going to the zoo after my first sight of DARWIN IV, I found myself looking at birds, antelopes, lions, with a new eye: were they, so to speak, nature's DARWINs, somewhere up around DARWIN XII in complexity? And the gorillas, with higher-order consciousness but no language - where would they stand? DARWIN XIX? And we, writing about the gorillas, where would we stand? DARWIN XXVII perhaps? A particularly intriguing, sometimes frightening part of Bright Air, Brilliant Fire is its penultimate chapter, "Is It Possible to Construct a Conscious Artifact?" Edelman has no doubt of the possibility, but places it, mercifully, well on in the next century. Such, then, is the sweep of Bright Air, Brilliant Fire, and its central ambition of "replacing the mind in nature". It is a book of astonishing variety and range, which runs from philosophy to biology to psychology to neural modeling, and attempts to synthesize them into a unified whole. Neural Darwinism (or Neural Edelmanism, as Francis Crick has called it) coincides with our sense of "flow", that feeling we have when we are functioning optimally, of a swift, effortless, complex, ever-changing, but integrated and orchestrated stream of consciousness; it coincides with the sense that this consciousness is ours, and that all we experience and do and say is, implicitly, a form of self-expression, and that we are destined, whether we wish it or not, to a life of particularity and self-development; it coincides, finally, with our sense that life is a journey - unpredictable, full of risk and uncertainty, but, equally, full of novelty and adventure, and characterized (if not sabotaged by external constraints or pathology) by constant advance, an ever deeper exploration and understanding of the world.
Edelman's theory proposes a way of grounding all this in known facts about the nervous system and testable hypotheses about its operations. Any theory, even a wrong theory, is better than no theory; and this theory - the first truly global theory of mind and consciousness, the first biological theory of individuality and autonomy - should at least stimulate a storm of experiment and discussion. Merlin Donald, at the end of his fine and far-reaching recent book Origins of the Modern Mind (Harvard University Press, 1991), speaks of this in his conclusion: Mental materialism is back, with a vengeance. It is not only back, but back in an unapologetic, out-of-the-closet, almost exhibitionistic form. This latest incarnation might be called "exuberant materialism." Changeux (1985), Churchland (1986), Edelman (1987), Young (1988), and many others have announced a new neuroscientific apocalypse. Optimism is basically more productive than pessimism, and exuberant materialists are certainly optimists. Neuroscience is in its adolescence, and the field is drunk with its own dizzying growth; how not to be optimistic? There is no better place to read about this than in Edelman's own works, dense and difficult though they frequently are. Bright Air, Brilliant Fire is the most wide-ranging and accessible. It is strenuous and sometimes maddening, and one must struggle to understand it; but if one struggles, if one reads and reads again, the stubborn paragraphs finally yield their meaning, and a brilliant and captivating new vision of the mind emerges.
Oliver Sacks in September 2011 on Web of STORIES
At the time he wrote this review, Oliver Sacks was Professor of Neurology at the Albert Einstein College of Medicine in New York. His books include Awakenings (in which he described some of his work at Beth Abraham Hospital, the Bronx, New York City. 
This is the hospital where he is now, in 1996), A Leg to Stand On, The Man Who Mistook His Wife for a Hat, and, published shortly before this review, Seeing Voices. Recently he finished The Island of the Colorblind.
New York Review of Books
How a wave packet travels through a quantum electronic interferometer

Splitting the heat: the quantum limits of thermal energy flow

Device geometry. a) Scanning electron micrograph of the sample. The 1D waveguides with a lithographic width of 170 nm form a half-ring connected to reservoirs A-F. A global top-gate is present. Heating of reservoirs A, B is generated by applying a current Ih; thermal noise measurements are performed at contacts E, F. The reservoirs C and D are left floating. b) Device potential for the ballistic transport model, with labels A∗ and E∗ denoting the joined reservoirs A+B and E+F. Harmonic waveguide network with Gaussian scatterer (indicated by arrow). Mode spacing is ħω = 5 meV. © 2016 Kramer et al. Citation: AIP Advances 6, 065306 (2016)

With ever shrinking sizes of electronic transistors, the quantum mechanical nature of electrons becomes more visible. For instance, two electrons with the same spin orientation and velocity cannot be at the same location (Pauli blocking). At low temperatures, electronic waves travel many micrometers completely coherently, reflected only by the geometry of the confinement. A tight confinement leads to a larger separation of the quantized energy levels and restricts the lateral spread of the electrons to specific eigenmodes of a nanowire. 
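For a feel of the numbers, the transverse mode spacing implied by such a tight confinement can be estimated with a simple hard-wall (particle-in-a-box) model. This is only an illustrative sketch: the hard-wall potential and the GaAs effective mass m* = 0.067 m_e are assumptions of this estimate, not values taken from the paper (which uses a harmonic waveguide model with ħω = 5 meV):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
ME   = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # J per eV

def subband_energy(n, width, m_eff):
    """Energy of the n-th transverse mode of a hard-wall quantum wire."""
    return n**2 * math.pi**2 * HBAR**2 / (2 * m_eff * width**2)

m_star = 0.067 * ME   # GaAs effective mass (assumption)
W = 170e-9            # lithographic width quoted above

for n in (1, 2, 3):
    print(f"mode {n}: {subband_energy(n, W, m_star) / EV * 1e3:.3f} meV")
```

For the lithographic width alone this gives sub-meV spacings; the effective electronic width is smaller than the lithographic one because of depletion, which pushes the spacing toward the few-meV scale of the model.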
The distribution of the electronic current into the various modes is then given by the geometrical scattering properties of the device interior, which are conveniently computed using wave packets. The ballistic electrons entering a nanodevice carry along charge and thermal energy. The maximum amount of thermal energy Q per unit time which can be transported through a single channel between two reservoirs of different temperatures is limited to Q ≤ π² kB² (T₂² − T₁²)/(3h) [h denotes Planck's and kB Boltzmann's constant]. This has implications for computing devices, since it restricts the cooling rate (Pendry 1982). In a collaboration with the novel materials group at Humboldt University (Prof. S.F. Fischer, Dr. C. Riha, Dr. O. Chiatti, S. Buchholz), and using wafers produced in the lab of A. Wieck and D. Reuter (Bochum, Paderborn), C. Kreisbeck and I have compared theoretical expectations with experimental data for the thermal energy and charge currents in multi-terminal nanorings (AIP Advances 2016, open access). Our findings highlight the influence of the device geometry on both charge and thermal energy transfer and demonstrate the usefulness of the time-dependent wave-packet algorithm to find eigenstates over a whole range of temperatures.

Predicting comets: a matter of perspective
What are the implications of the theoretical model?
Weathering the dust around comet 67P/Churyumov–Gerasimenko
Bradford robotic telescope image of comet 67P (30th Oct 2015)
Day and night at comet 67P/Churyumov–Gerasimenko
Comparison of homogeneous dust model with ESA/NAVCAM Rosetta images.

The shape of the universe
The following post is contributed by Peter Kramer.
Hyperbolic dodecahedron: shown are two faces of a hyperbolic dodecahedron.
Topology of Platonic Spherical Manifolds: From Homotopy to Harmonic Analysis.

When two electrons collide. Visualizing the Pauli blockade. 
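Returning to the single-channel bound Q ≤ π² kB² (T₂² − T₁²)/(3h) quoted in the thermal-transport post above: it is straightforward to evaluate numerically. A minimal sketch (the reservoir temperatures are arbitrary illustrative values):

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J s

def max_heat_current(t_hot, t_cold):
    """Pendry's upper bound on the heat current through one ballistic channel."""
    return math.pi**2 * KB**2 * (t_hot**2 - t_cold**2) / (3 * H)

# one channel between a 1 K and a 0.5 K reservoir
print(max_heat_current(1.0, 0.5))   # ≈ 7.1e-13 W
```

The prefactor π² kB²/(3h) ≈ 0.95 pW/K² sets the quantum of thermal conductance per channel, π² kB² T/(3h), at temperature T.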
The upper panel shows two (non-interacting) electrons approaching with small relative momenta, the lower panel with larger relative momenta. From time to time I get asked about the implications of the Pauli exclusion principle for quantum mechanical wave-packet simulations. I start with the simplest antisymmetric case: a two-particle state given by the Slater determinant of two Gaussian wave packets with perpendicular directions of the momentum:
φa(x,y) = e^(−[(x−o)² + (y−o)²]/(2a²) − ikx + iky) and φb(x,y) = e^(−[(x+o)² + (y−o)²]/(2a²) + ikx + iky)
This yields the two-electron wave function
Ψ(r1, r2) = [φa(r1) φb(r2) − φb(r1) φa(r2)]/√2.
The probability to find one of the two electrons at a specific point in space is given by integrating the absolute value squared of the wave function over one coordinate set. The resulting single-particle density (snapshots at specific values of the displacement o) is shown in the animation for two different values of the momentum k (we assume that both electrons are in the same spin state). For small values of k the two electrons get close in phase space (that is, in momentum and position). The animation shows how the density deviates from a simple addition of the probabilities of two independent electrons. If the two electrons differ already by a large relative momentum, the distance in phase space is large even if they get close in position space. Then the resulting single-particle density looks similar to the sum of two independent probabilities. The probability to find the two electrons simultaneously at the same place is zero in both cases, but this is not directly visible by looking at the single-particle density (which reflects the probability to find any of the electrons at a specific position). For further reading, see this article [arxiv version]. 
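A minimal numerical sketch of this construction (illustrative packet parameters, not the production code behind the animation): the antisymmetrized amplitude vanishes identically whenever both electrons are evaluated at the same point, regardless of the momentum k.

```python
import numpy as np

def phi(x, y, x0, y0, kx, ky, a=1.0):
    """Gaussian wave packet centred at (x0, y0) carrying momentum (kx, ky)."""
    return np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * a**2) + 1j * (kx * x + ky * y))

def psi(r1, r2, o=1.0, k=1.0):
    """Slater-determinant (antisymmetric) combination of phi_a and phi_b."""
    phi_a = lambda r: phi(r[0], r[1],  o, o, -k, k)
    phi_b = lambda r: phi(r[0], r[1], -o, o,  k, k)
    return (phi_a(r1) * phi_b(r2) - phi_a(r2) * phi_b(r1)) / np.sqrt(2)

r = (0.3, -0.7)
print(abs(psi(r, r)))                          # 0.0 -- Pauli blocking at coinciding points
print(abs(psi((0.0, 0.0), (1.0, 1.0))) > 0)    # True at distinct points
```

The single-particle density of the post is obtained by integrating |Ψ|² over one coordinate pair on a grid; the zero at coinciding points survives that integration only as a phase-space suppression, which is exactly what the animation visualizes.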
The impact of scientific publications – some personal observations I will resume posting about algorithm development for computational physics. To put these efforts in a more general context, I start with some observations about the current publication-ranking model and explore alternatives and supplements in the next posts. Solvay congress 1970: many well-known nuclear physicists are present, including Werner Heisenberg. Working in academic institutions involves being part of hiring committees as well as being assessed by colleagues to measure the impact of my own and others' scientific contributions. In the internet age it has become common practice to look at various performance indices, such as the h-index and the number of "first author" and "senior author" articles. Often it is the responsibility of the applicant to submit this data in electronic spreadsheet format suitable for an easy ranking of all candidates. The indices are only one consideration for the final decision, albeit in my experience an important one due to their perceived unbiased and statistical nature. Funding of whole university departments and the careers of young scientists are tied to the performance indices. I reflected on the usefulness of impact factors while I collected them for various reports; here are some personal observations: 1. Looking at the (very likely rather incomplete) citation count of my father, I find it interesting that, for instance, a 49-year-old contribution by P. Kramer and M. Moshinsky on group-theoretical methods for few-body systems gains most citations per year after almost 5 decades. This time-scale is well beyond any short-term hiring or funding decisions based on performance indices. From colleagues I hear about similar cases. 2. A high h-index can be a sign of a narrow research field, since the h-index is best built up by sticking to the same specialized topic for a long time, and this encourages serialised publications. 
I find it interesting that, on the other hand, important contributions have been made by people working outside the field to which they contributed. The discovery of three-dimensional quasicrystals discussed here provides a good example. The canonical condensed matter theory did not envision this paradigmatic change; rather, the study of group-theoretical methods in nuclear physics provided the seeds. 3. The full-text search provided by the search engines offers fascinating options to scan through previously forgotten chapters and books, but it also bypasses the systematic classification schemes previously developed and curated by colleagues in mathematics and theoretical physics. It is interesting to note that, for instance, the AMS short reviews are not done anonymously and most often are of excellent quality. The non-curated search on the other hand leads to a down-ranking of books and review articles, which contain a broader and deeper exposition of a scientific topic. Libraries with real books grouped by topics are deserted these days, and online services and expert reviews in general did not gain a larger audience or expert community to write reports. One exception might be the public discussion of possible scientific misconduct and retracted publications. 4. Another side effect: searching the internet for specific topics diminishes the opportunity to accidentally stumble upon an interesting article lacking these keywords, for instance by scanning through a paper volume of a journal while searching for a specific article. I recall that many faculty members went every Monday to the library and looked at all the incoming journals to stay up-to-date about the general developments in physics and chemistry. 
Today we get email alerts about citation counts or specific subfields, but no alert contains a suggestion of what other article might pique our intellectual curiosity – and looking at the rather stupid shopping recommendations generated by online warehouses, I don't expect this to happen anytime soon. 5. On a positive note: since all text sources are treated equally, no "high-impact journals" are preferred. In my experience as a referee for journals with all sorts of impact numbers, the interesting contributions are not necessarily published in or submitted to highly ranked journals. To sum up, the assessment of manuscripts, of the contributions of colleagues, and of my own articles requires humans to read them and to process them carefully – all of this takes a lot of time and consideration. It can take decades before publications become alive and well cited. Citation counts of the last 10 years can be poor indicators for the long-term importance of a contribution. Counting statistics provides some gratification by showing immediate interest and is the (less personal) substitute for the old-fashioned postcards requesting reprints. People working in theoretical physics are often closely related by collaboration distance, which provides yet another (much more fun!) factor. You can check your Erdős number (mine is 4) or Einstein number (3, thanks to working with Marcos Moshinsky) at the AMS website. How to improve the current situation and maintain a well curated and relevant library of scientific contributions – in particular involving numerical results and methods? One possibility is to make a larger portion of the materials surrounding a publication available. In computational physics it is of interest to test and recalculate published results shown in journals. 
The nanohub platform is in my view a best-practice case for providing supplemental information on demand and for ensuring the long-term availability and usefulness of scientific results by keeping the computational tools running and updated. It is for me a pleasure and an excellent experience to work with the team around nanohub to maintain our open quantum dynamics tool. Another way is to provide and test background materials in research blogs. I will try out different approaches with the next posts.

Better than Slater-determinants: center-of-mass free basis sets for few-electron quantum dots

Error analysis of eigenenergies of the standard configuration interaction (CI) method (right black lines). The left colored lines are obtained by explicitly handling all spurious states. The arrows point out the increasing error of the CI approach with increasing center-of-mass admixing.

Solving the interacting many-body Schrödinger equation is a hard problem. Even restricting the spatial domain to a two-dimensional plane does not lead to analytic solutions; the trouble-makers are the mutual particle-particle interactions. In the following we consider electrons in a quasi two-dimensional electron gas (2DEG), which are further confined either by a magnetic field or by an external harmonic oscillator confinement potential. For two electrons, this problem is solvable for specific values of the Coulomb interaction due to a hidden symmetry in the Hamiltonian; see the review by A. Turbiner and our application to two interacting electrons in a magnetic field. For three and more electrons (to my knowledge) no analytical solutions are known. One standard computational approach is the configuration interaction (CI) method to diagonalize the Hamiltonian in a variational trial space of Slater-determinantal states. 
Each Slater determinant consists of products of single-particle orbitals. Due to computer resource constraints, only a certain number of Slater determinants can be included in the basis set. One possibility is to include only trial states up to a certain excitation level of the non-interacting problem. The usage of Slater determinants as a CI basis set introduces severe distortions in the eigenenergy spectrum due to the intrusion of spurious states, as we will discuss next. Spurious states have been extensively analyzed in the few-body problems arising in nuclear physics, but have rarely been mentioned in solid-state physics, where they do arise in quantum-dot systems. The basic defect of the Slater-determinantal CI method is that it brings along center-of-mass excitations. During the diagonalization, the center-of-mass excitations occur along with the Coulomb interaction and lead to an inflated basis size as well as a loss of precision for the eigenenergies of the excited states. Increasing the basis set does not uniformly reduce the error across the spectrum, since the enlarged CI basis set brings along states of high center-of-mass excitation. The cut-off energy then restricts the remaining basis size for the relative part. The cleaner and leaner way is to separate the center-of-mass excitations from the relative-coordinate excitations, since the Coulomb interaction acts only along the relative coordinates. In fact, the center-of-mass part can be split off and solved analytically in many cases. The construction of the relative-coordinate basis states requires group-theoretical methods and is carried out for four electrons in Interacting electrons in a magnetic field in a center-of-mass free basis (arXiv:1410.4768). For three electrons, the importance of a spurious-state free basis set was emphasized by R. Laughlin and is one of the design principles behind the Laughlin wave function. 
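The reason the separation is "cleaner and leaner" can be seen already at the level of the potential: with center-of-mass coordinate X = (x1 + x2)/2 and relative coordinate x = x1 − x2, the one-body harmonic terms regroup exactly into a CM oscillator of mass 2m plus a relative oscillator of mass m/2, while the Coulomb interaction depends on x only. A quick numerical check of this identity (illustrative values, one spatial dimension for brevity, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
m, omega = 1.0, 1.0

# random particle positions (1D for brevity)
x1 = rng.normal(size=1000)
x2 = rng.normal(size=1000)

X  = (x1 + x2) / 2   # center-of-mass coordinate
xr = x1 - x2         # relative coordinate

v_lab = 0.5 * m * omega**2 * (x1**2 + x2**2)
v_sep = 0.5 * (2 * m) * omega**2 * X**2 + 0.5 * (m / 2) * omega**2 * xr**2

print(np.allclose(v_lab, v_sep))   # True: the confinement separates exactly
```

Since the Coulomb term involves only the relative coordinate, the CM part can be diagonalized once and for all; a basis built directly in (X, xr) never mixes the two, which is precisely what the Slater-determinantal construction fails to respect.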
Slow or fast transfer: bottleneck states in light-harvesting complexes
Exciton dynamics in LHCII.
High-performance OpenCL code for modeling energy transfer in spinach

Flashback to the 80ies: filling space with the first quasicrystals

This post provides a historical and conceptual perspective on the theoretical discovery of non-periodic 3d space-fillings by Peter Kramer, later experimentally found and now called quasicrystals. See also these previous blog entries for more quasicrystal references and more background material here. The following post is written by Peter Kramer.

Star extension of the pentagon. Fig 1 from Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells by Peter Kramer, Acta Cryst. (1982). A38, 257-264.

When sorting out old texts and figures of mine from 1981, published in Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells, Acta Cryst. (1982). A38, 257-264, I came across the figure of a regular pentagon of edge length L, which I denoted as p(L). In the left figure its red-colored edges are star-extended up to their intersections. Straight connection of these intersection points creates a larger blue pentagon. Its edges are scaled up by τ², with τ the golden section number, so the larger pentagon we call p(τ²L). This blue pentagon is composed of the old red one plus ten isosceles triangles with golden proportion of their edge lengths. Five of them have edges t1(L): (L, τL, τL), five have edges t2(L): (τL, τL, τ²L). We find from Fig 1 that these golden triangles may be composed face-to-face into their τ-extended copies as t1(τL) = t1(L) + t2(L) and t2(τL) = t1(L) + 2 t2(L). Moreover we realize from the figure that also the pentagon p(τ²L) can be composed from golden triangles as p(τ²L) = t1(τL) + 3 t2(τL) = 4 t1(L) + 7 t2(L). 
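These composition rules can be cross-checked numerically from triangle areas alone, since inflation by τ scales every area by τ². The following quick consistency check (my addition, not part of the original paper) uses Heron's formula:

```python
import math

TAU = (1 + math.sqrt(5)) / 2   # golden section number

def heron(a, b, c):
    """Area of a triangle from its edge lengths."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

t1 = heron(1, TAU, TAU)        # t1(L): edges (L, tau L, tau L), with L = 1
t2 = heron(TAU, TAU, TAU**2)   # t2(L): edges (tau L, tau L, tau^2 L)

print(math.isclose(TAU**2 * t1, t1 + t2))       # t1(tau L) = t1(L) + t2(L)
print(math.isclose(TAU**2 * t2, t1 + 2 * t2))   # t2(tau L) = t1(L) + 2 t2(L)

pentagon = math.sqrt(5 * (5 + 2 * math.sqrt(5))) / 4     # area of unit-edge pentagon
print(math.isclose(TAU**4 * pentagon, 4 * t1 + 7 * t2))  # p(tau^2 L) = 4 t1 + 7 t2
```

All three checks come out true: the areas balance exactly, as they must if the face-to-face compositions read off from Fig 1 are correct.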
This suggests that the golden triangles t1, t2 can serve as elementary cells of a triangle tiling to cover any range of the plane and provide the building blocks of a quasicrystal. Indeed we did prove this long-range property of the triangle tiling (see Planar patterns with fivefold symmetry as sections of periodic structures in 4-space).

An icosahedral tiling from star extension of the dodecahedron. Star extension of the dodecahedron d(L) to the icosahedron i(τ²L) and further to d(τ³L) and i(τ⁵L), shown in Fig 3 of the 1982 paper. The vertices of these polyhedra are marked by filled circles; extensions of edges are shown except for d(L).

In the same paper, I generalized the star extension from the 2D pentagon to the 3D dodecahedron d(L) of edge length L (see next figure) by the following prescription:
• star extend the edges of this dodecahedron to their intersections
• connect these intersections to form an icosahedron
The next star extension produces a larger dodecahedron d(τ³L), with edges scaled by τ³. In the composition of the larger dodecahedron I found four elementary polyhedral shapes, shown below. Even more amusingly, I also resurrected the paper models I constructed in 1981 to actually demonstrate the complete space filling! These four polyhedra compose their copies by scaling with τ³. As in the 2D case, arbitrary regions of 3D can be covered by the four tiles.

Elementary cells. The four elementary cells shown in the 1982 paper, Fig. 4. The four shapes are named dodecahedron (d), skene (s), aetos (a) and tristomos (t). The paper models I built in 1981 are still around in 2014 and complete enough to fill the 3D space without gaps. You can spot all shapes (d, s, a, t) in various scalings, and they all systematically and gaplessly fill the large dodecahedron shell on the back of the table. 
The only feature missing for quasicrystals is aperiodic long-range order, which eventually leads to sharp diffraction patterns of 5- or 10-fold point symmetries forbidden for the old-style crystals. In my construction shown here I strictly preserved central icosahedral symmetry. Non-periodicity then followed because full icosahedral symmetry and periodicity in 3D are incompatible. In 1983 we found a powerful alternative construction of icosahedral tilings, independent of the assumption of central symmetry: the projection method from 6D hyperspace (On periodic and non-periodic space fillings of Em obtained by projection). This projection establishes the quasiperiodicity of the tilings, analyzed in line with the work Zur Theorie der fast periodischen Funktionen (i-iii) of Harald Bohr from 1925, as a variant of aperiodicity (more background material here).

GPU-HEOM 2d spectra computed at nanohub
1. login on (it's free!)
2. switch to the gpuheompop tool
3. click the Launch Tool button (java required)
You can select this preset from the Example selector.
10. Voila: your first FMO spectrum appears.

GPU and cloud computing conferences in 2014
Oscillations in two-dimensional spectroscopy
Transition from electronic coherence to a vibrational mode.
Computational physics on GPUs: writing portable code
GPU-HEOM code comparison for various hardware.
Here some considerations and observations: AMD's porting remarks, Matt Scarpino's OpenCL blog

Computational physics on the smartphone GPU
adb pull /system/lib/
rm plasma_disk_gpu
  -I. \
  -Llib \
  -lOpenCL \
  -o plasma_disk_gpu plasma_disk.cpp
scp -P 2222 root@192.168.0.NNN:
scp -P 2222 plasma_disk_gpu root@192.168.0.NNN:
9. 
ssh into your phone and run the GPU program:
ssh -p 2222 root@192.168.0.NNN
./plasma_disk_gpu 64 16
cd /data/data/jackpal.androidterm
mkdir gpu
chmod 777 gpu
adb push /data/data/jackpal.androidterm/gpu/
adb push plasma_disk_gpu /data/data/jackpal.androidterm/gpu/
adb shell
cd /data/data/jackpal.androidterm/gpu/
./plasma_disk_gpu 64 16

• calculating population dynamics
• tracking coherences between two eigenstates
• obtaining absorption spectra
• two-dimensional echo spectra (including excited state absorption)
and you find further references in the supporting documentation.

Wendling spectral density for FMO complex
2d spectra are smart objects
FMO spectrum calculated with GPU-HEOM
The autocorrelation function in a uniform force field is known in analytic form:
The physics of GPU programming
GPU cluster
Peak oscillations in the FMO complex calculated using GPU-HEOM

The Nobel Prize 2011 in Chemistry: press releases, false balance, and lack of research in scientific writing

To get this clear from the beginning: with this posting I am not questioning the great achievement of Prof. Dan Shechtman, who discovered what is now known as a quasicrystal in the lab. Shechtman clearly deserves the prize for such an important experiment demonstrating that five-fold symmetry exists in real materials. My concern is the poor quality of research and reporting on the subject of quasicrystals, starting with the press release of the Swedish Academy of Science, and the lessons to be learned about trusting these press releases and the reporting in scientific magazines. To provide some background: with the announcement of the Nobel prize, a press release is put online by the Swedish academy which not only announces the prize winner but also contains two PDFs with background information: one for the "popular press" and another one for people with a more "scientific background". 
Even more dangerously, the Swedish Academy has started a multimedia endeavor of pushing its views around the world in youtube channels and numerous multimedia interviews with its own members (what about asking an external expert for an interview?). Before the internet age journalists got the names of the prize winners, but did not immediately have access to a "ready to print" explanation of the subject at hand. I remember that local journalists would call at the universities and ask a professor who was familiar with the topic for advice, or at least get the phone number of somebody familiar with it. Not any more. This year showed that the background information prepared in advance by the committee is taken over by the media outlets basically unchanged. So far it looks like business as usual. But what if the story as told by the press release is not correct? Does anybody still have time and resources for some basic fact checking, for example by calling people familiar with the topic, or by consulting the archives of their newspaper/magazine to dig out what was written when the discovery was made many years ago? Should we rely on the professor who writes the press releases and trust that this person adheres to scientific and ethical standards of writing? For me, the unfiltered and unchecked usage of press releases by the media and even by scientific magazines shows a decay in the quality of scientific reporting. It also generates a uniformity and a self-referencing universe, which enters as "sources" in online encyclopedias and in the end becomes a "self-generated" truth. However, it is not that difficult to break this circle, for example by
1. digging out review articles on the topic and looking up encyclopedias for the topic of quasicrystals, see for example: Pentagonal and Icosahedral Order in Rapidly Cooled Metals by David R. Nelson and Bertrand I. 
Halperin, Science 19 July 1985:233-238, where the authors write: "Independent of these experimental developments, mathematicians and some physicists had been exploring the consequences of the discovery by Penrose in 1974 of some remarkable, aperiodic, two-dimensional tilings with fivefold symmetry (7). Several authors suggested that these unusual tesselations of space might have some relevance to real materials (8, 9). MacKay (8) optically Fourier-transformed a two-dimensional Penrose pattern and found a tenfold symmetric diffraction pattern not unlike that shown for Al-Mn in Fig. 2. Three-dimensional generalizations of the Penrose patterns, based on the icosahedron, have been proposed (8-10). The generalization that appears to be most closely related to the experiments on Al-Mn was discovered by Kramer and Neri (11) and, independently, by Levine and Steinhardt (12)."
2. identifying experts from step 1 and asking for their opinion
3. checking the newspaper and magazine archives. Maybe there exists already a well researched article?
4. correcting mistakes. After all, mistakes do happen. Also in "press releases" by the Nobel committee, but there is always the option to send out a correction or to amend the published materials. See for example the letter in Science by David R. Nelson, Icosahedral Crystals in Perspective, Science 13 July 1990:111, again on the history of quasicrystals: "[…] The three-dimensional generalization of the Penrose tiling most closely related to the experiments was discovered by Peter Kramer and R. Neri (3) independently of Steinhardt and Levine (4). The paper by Kramer and Neri was submitted for publication almost a year before the paper of Shechtman et al. 
These are not obscure references: […] Since I am working in theoretical physics, I find it important to point out that, in contrast to the story invented by the Nobel committee, the theoretical structure of quasicrystals was actually published and available in the relevant journal of crystallography at the time the experimental paper got published. This sequence of events is well documented, as shown above and in other review articles and books. I am just amazed how the press release of the Nobel committee creates an alternate universe with a false history of theoretical and experimental publication records. It gives false credit for the first theoretical work on three-dimensional quasicrystals and, at least in my view, does not adhere to the scientific and ethical standards of scientific writing. Prof. Sven Lidin, the author of the two press releases of the Swedish Academy, was contacted as early as October 7 about his inaccurate and unbalanced account of the history of quasicrystals. In my view, a huge responsibility rests on the originator of the "story" which was put in the wild by Prof. Lidin, and I believe he and the committee members are aware of their power, since they actively use all available electronic media channels to push their complete "press package" out. Until today no corrections or updates have been distributed. Rather, you can watch on youtube the (false) story getting repeated over and over again. In my view this example shows science reporting in its worst incarnation and undermines the credibility and integrity of science.

Quasicrystals: anticipating the unexpected
The following guest entry is contributed by Peter Kramer.
Time to find eigenvalues without diagonalization
Aharonov-Bohm ring conductance oscillations
Cosmic topology from the Möbius strip
Fig 1. The Möbius twist.
The following article is contributed by Peter Kramer.
Fig 2: The planar Möbius crystal cm
Fig 3: Cubic twist N3.
Fig 4: Cubic twist N2.
Thursday, April 20, 2017

Tagore's shift to universal humanism was subsequent
Swapan Dasgupta

Tapan Raychaudhuri's study of Bengal's responses to the West in the 19th century dealt with three intellectual stalwarts - Bhudeb Mukherjee, Bankim Chandra Chatterjee and Swami Vivekananda. All three focused on issues that related to Hindus as Hindus. To them, modernity did not mean discarding the Hindu inheritance but reshaping (and in Bhudeb's case rediscovering) the Hindu inheritance. In the realms of political activism too, the movement against the Partition of Bengal had explicitly Hindu overtones - take Aurobindo Ghose and Bipin Chandra Pal as foremost examples - and this religio-political aspect was embraced by Rabindranath Tagore. Tagore's shift to universal humanism was a subsequent development, and his trenchant critique of Mahatma Gandhi's non-cooperation movement did not endear him to most fellow Bengalis. However, his iconic status, particularly after he won the Nobel prize, insulated him from any politically-inspired criticism. Arguably, C.R. Das was an exception, but his legacy of communal power-sharing collapsed rapidly after his untimely death. From the late 1920s till Independence, there was often very little to distinguish the Bengal Congress from the Hindu Mahasabha. In spite of the parallel attraction of Marxism, the upper echelons of bhadralok society did not shun explicitly Hindu mobilization. The Hindu Mahasabha boasted of the involvement of intellectual stalwarts such as Shyama Prasad Mookerjee, Nirmal Chandra Chatterjee and even Ramananda Chatterjee.

The Spiritual Nationalism & Human Unity: approach taken by Sri Aurobindo in Politics
D Banerjee
Abstract: Sri Aurobindo's theory of politics is quite different from that of other contemporary politicians of India. It also has several parts, like Swaraj, boycott, resistance, and national education, as necessary ingredients of the Indian political agitation that started in 1905. 
But it has …  Participation and the Mystery: Transpersonal Essays in Psychology, Education, and Religion. JN Ferrer - 2017. 12 hours ago - The people associated with this samiti were Sri Aurobindo, Deshabandhu Chittaranjan Das, Surendranath Tagore, Jatindranath Banerjee, Bagha Jatin, Bhupendra Natha Datta, Barindra Ghosh etc. Bhupendra ... Daily Mail-18-Apr-2017: The original founders were Sri Aurobindo and his younger brother, Barindra. Both, along with 47 other accused, stood trial for the Alipur bomb blast case or the ... Chronicle of Higher Education (subscription)-19-Mar-2017: Like a lot of academics, I have long harbored the desire to write a popular book - in my case, something like Richard Dawkins's The Selfish Gene. But sadly, I ... MARCH 19, 2017 The future of our species is surely rich material, something many of us have speculated about. Will we have massive brains, dwindling little bodies, and highly functional genitalia? I tend to think so. In fact, with five kids, I rather pride myself on being an advanced specimen. Then again, maybe I'm not. From Darwin on, experts have worried that big brains are not that adaptive. Think of all of the great philosophers - Aquinas, Hume, Kant, Plato, Wittgenstein - who died childless. Mad In America-13-Mar-2017: Is the suppression of spirituality in the West the reason for our struggle and suffering labeled as mental illness? Are we medicated to numb the pain and ... On 20 Apr 2017, at 09:12, priyedarshi jetli wrote: I am not promoting logicism. Some logicists believed that you can construct all of mathematics from logic. The Intuitionists don't accept this. Neither do I.
My point was a simple, general one: logic is mainly about inference and has no content, and is therefore free of ontology or epistemology. This is what alternative systems of logic have in common. Further, as far as I know, most systems of logic can be translated into each other or shown to be extensions of classical logic. Of course it takes some doing to accomplish this. Logicism has failed. No (reasonably serious) logician would assume it. We know that elementary arithmetic cannot be deduced from any logic. In fact, we can prove in arithmetic that arithmetic does not follow from logic. Russell and Whitehead claim that 1+1=2 can be proved in their logical system, but they assumed a part of set theory, which assumes much more than arithmetic. We know also that arithmetic is Turing equivalent. It realizes all computations, and this leads to the problem of recovering physics from a statistics on all computations seen from inside (that is: structured by the modal logic of self-reference imposed by the incompleteness phenomenon). My point is more technical: if we are machines, there is no primary physical reality, only a first-person, locally sharable infinity of numbers' "dreams". It makes me say that in the Occident we have to backtrack to Pythagoras and Plato (and the neopythagoreans + the neoplatonists) in the field of (scientific/modest) theology. I appreciate the antique Greeks mainly because they do not oppose mysticism and rationalism.  My feeling, coming from my interest in Buddhist Mahayana, is that in India there has always been a bigger open-mindedness about immaterialism, or, to put it another way, a bigger skepticism toward material explanations. India, and China for a long time, seem not to have separated rationalism and religion/theology as much as Europe. But of course, every civilisation has its dark periods and witch hunts.
In today's quantum mechanics, all matter (some of which is perceived directly by the senses and called classical) is made up of quantum particles. A quantum particle is said to be a packet of de Broglie phase waves, each of which is supposed to have a speed greater than that of light in vacuum. Thus the phase wave is a mathematical abstraction; it is non-material in that you cannot perceive it by the senses. So, one may say all matter is made of ideas. Dear all, I would like to suggest something here: so many on this forum are attempting to explain the infinite and infinitesimal complexities of matter and spirit, body, mind, soul, consciousness and the unconscious. And Deepak seems to want to throw all that aside and suggests that nothing of matter, subtle or gross, actually exists, except for consciousness, which according to him is all one non-personal oneness of bliss. My own opinion is that neither of these approaches is realistic, and I'll try to explain why. First, Deepak's theories make no sense when put to any test. He postulates that nothing in the world exists, but we are all aware that it does; even if transitory and temporary, we know something is there: we are there, the body is there, mind, universe, stars and sun, oceans etc. And we know that we didn't create all of that; something else must be the source. So it's nice to think that all phenomena are our own creation, as he states, but obviously something else is there to discover. Then the physicists and other scientists are so expert at breaking down the complex functioning of that phenomena which Deepak is claiming does not exist, that is true. But how to actually empirically explain the origins of such phenomena, to get to the actual root of it? Obviously this cannot be done through speculation. Why? Because our minds and intellect used for speculation are themselves products of the material process itself, and the product cannot fully explain the source, logically.
Therefore Deepak's attempt to simplify things. But again that falls short; he's only partly right after all. True, consciousness is more subtle than matter, superior to matter, subjective. But we ourselves as consciousness are not the source of all matter, else we would not be "bamboozled" by it, as he likes to say; one cannot be bamboozled by something he has himself created, not possible if he is the source of that. That does leave the one logical explanation, which I believe Bhakti Madhava Puri has been trying to explain further: that we ourselves, as limited subjective conscious entities, are ourselves objects of a greater unlimited subjective consciousness, an infinite absolute consciousness which is the source of both individual consciousness and matter, both subtle and gross. That's my conclusion anyway; it makes sense in explaining reality, and I welcome any comments on it. I do have admiration for the subtle complexities of all the physicists' and philosophers' explanations on this forum, but this is my own contribution, and I think it makes sense. In this world we are simply the conscious agents; the minds, bodies, even thoughts, we are not the source of all these; they are under the control of, and the property of, something else, someone else. To accept that idea is, to me, the key. Regards, Eric Reyes Proofs apply to mathematical theorems, not to things like matter or consciousness. And even in mathematics, if something cannot be proven it does not mean that it is not true. Take Fermat's last theorem, for example: it took over 300 years to prove it. It is like saying that if no one had ever climbed the peak of Mount Everest, it did not exist. The metaphoric statement of the type Deepak Chopra makes is meaningless. There is a very old fallacy that is commonly used: no one has proven that God does not exist; therefore, God exists. It is an obvious fallacy.
In any case the burden of proof lies on the one who accepts the existence of God, or of a non-material consciousness for that matter, and not on those who do not accept them. The fallacy is surely obvious to those who are committing it as well. It is a disguise for covering up the issue. The issue is that, once stated as alternative hypotheses, idealism, dualism, materialism and perhaps other isms regarding the mind-body problem are dogmas. None of these hypotheses can be proven, and all attempts to prove them beg the question. But among dogmas we can choose the one that seems most plausible to us. To me the materialist or physicalist dogma is the most plausible. There is no emergent mental (non-physical) world; there are no non-physical or mystical forces that causally act on the physical world. The physical is causally closed. To deny this, people often equate 'physics of the day' with 'everything that is physical'. I need not spend time to dispel this fallacious equivocation. We need only reflect on the statement that "everything is in principle explainable physically." Dear Siegfried, Thanks for the comments. It seems that you agree that the quantum-classical divide is not about spatial dimension in the way often talked about. I agree that the quantum level is the level of the individual quantum of action. So very often the amount of energy involved is small. However, if we take a Bose mode as an indivisible quantum of action (as contemporary field theory generally does), rather than the notional single photon or phonon or whatever, then there is actually no limit to the energy in a quantum of action. It will be h times the frequency times the notional 'number of particles' quantum number. So a military laser beam, or a seismic mode that informs us of the chemical composition of the centre of the earth, may have huge energy.  I agree that uncertainty is not present in the Schrödinger equation. But that is just a bit of writing on a piece of paper.
A wave equation is not a mode or entity. It does not 'evolve'. As an outsider it seems to me that the term 'wave function' encourages the belief that somehow the wave equation is a description of a mode. As I understand it, any given wave equation is a look-up table for the probabilities of actualisation of a slew of modes with certain common parameters. A mode appears to have some sort of dynamic field structure, with values that relate in spacetime according to complex harmonic math, but being an indivisible action I can see no legitimate concept of 'progression from state to state'. The more I get to know about modern physics, and the more I talk to physicists (including Basil Hiley, who was kind enough to give me some private tuition), the more it seems to me that the traditional 'interpretations' are all metaphysically unsound and counter to the basic concept of a theory of indivisible dynamic units. Douglas Bilodeau wrote a very nice article to that effect in 1996 in the Journal of Consciousness Studies. He also makes some remarks in the article about the conflation of different meanings of 'quantum' and 'classical'. These are probably rather unhelpful terms that lead to misapprehensions. I think Leibniz does better in simply distinguishing descriptions of individual actions from descriptions of aggregate mechanics. I am not quite sure why you say that the quantum of action does not determine the quantised mode. My understanding of recent developments in condensed matter physics is that the mode is considered the quantum of action. As far as I can see, uncertainty remains an essential feature of a world constituted by discrete instantiations of symmetric continuous dynamic laws, just for basic logical reasons, as pointed to by Leibniz. Best wishes
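The energy claim in this exchange ("h times the frequency times the notional 'number of particles' quantum number") can be written out explicitly. For a single Bose mode of frequency ν occupied by N quanta, the energy of the mode is

```latex
E \;=\; N\,h\,\nu
```

so for a large occupation number N (a military laser beam, a seismic mode) there is no upper bound on the energy carried by one quantised mode, even though the individual quantum of action h is tiny.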
Wednesday, May 06, 2015 Many uses in researching quantum dots Because nanoparticles are so small, millions of times smaller than the width of a human hair, they have "tremendous surface area," raising the possibility of using them to design materials with more efficient solar-to-electricity and solar-to-chemical energy pathways, says Ari Chakraborty, an assistant professor of chemistry at Syracuse University. "They are very promising materials," he says. "You can optimize the amount of energy you produce from a nanoparticle-based solar cell." Chakraborty, an expert in physical and theoretical chemistry, quantum mechanics and nanomaterials, is seeking to understand how these nanoparticles interact with light after changing their shape and size, which means, for example, they ultimately could provide enhanced photovoltaic and light-harvesting properties. Changing their shape and size is possible "without changing their chemical composition," he says. "The same chemical compound in different sizes and shapes will interact differently with light." Specifically, the National Science Foundation (NSF)-funded scientist is focusing on quantum dots, which are semiconductor crystals on a nanometer scale. Quantum dots are so tiny that the electrons within them exist only in states with specific energies. As such, quantum dots behave similarly to atoms, and, like atoms, can achieve higher levels of energy when light stimulates them. Chakraborty works in theoretical and computational chemistry, meaning "we work with computers and computers only," he says. "The goal of computational chemistry is to use fundamental laws of physics to understand how particles of matter interact with each other and, in my research, with light. We want to predict chemical processes before they actually happen in the lab, which tells us which direction to pursue." These atoms and molecules follow natural laws of motion, "and we know what they are," he says.
"Unfortunately, they are too complicated to be solved by hand or calculator when applied to chemical systems, which is why we use a computer." The "electronically excited" states of the nanoparticles influence their optical properties, he says. "We investigate these excited states by solving the Schrödinger equation for the nanoparticles," he says, referring to a partial differential equation that describes how the quantum state of some physical system changes with time. "The Schrödinger equation provides the quantum mechanical description of all the electrons in the nanoparticle. "However, accurate solution of the Schrödinger equation is challenging because of the large number of electrons in the system," he adds. "For example, a 20 nanometer CdSe quantum dot contains over 6 million electrons. Currently, the primary focus of my research group is to develop new quantum chemical methods to address these challenges. The newly developed methods are implemented in open-source computational software, which will be distributed to the general public free of charge." Solar voltaics "requires a substance that captures light, uses it, and transfers that energy into electrical energy," he says. With solar cell materials made of nanoparticles, "you can use different shapes and sizes, and capture more energy," he adds. "Also, you can have a large surface area for a small amount of material, so you don't need a lot of it." Nanoparticles also could be useful in converting solar energy to chemical energy, he says. "How do you store the energy when the sun is not out?" he says. "For example, leaves on a tree take energy and store it as glucose, then later use the glucose for food. One potential application is to develop artificial leaves for artificial photosynthesis. There is a huge area of ongoing research to make compounds that can store energy." Medical imaging presents another useful potential application, he says.
"For example, nanoparticles have been coated with binding agents that bind to cancerous cells," he says. "Under certain chemical and physical conditions, the nanoparticles can be tuned to emit light, which allows us to take pictures of the cancerous cells. You could pinpoint the areas where there are cancerous cells in the body. The regions where the nanoparticles are located show up as bright spots in the photograph." As part of the grant's educational component, Chakraborty is hosting several students from a local high school - East Syracuse Minoa High School - in his lab. He also has organized two workshops for high school teachers on how to use computational tools in their classrooms "to make chemistry more interesting and intuitive to high school students," he says. "The really good part about it is that the kids can really work with the molecules because they can see them on the screen and manipulate them in 3-D space," he adds. "They can explore their structure using computers. They can measure distances, angles, and energies associated with the molecules, which is not possible to do with a physical model. They can stretch it, and see it come back to its original structure. It's a real hands-on experience that the kids can have while learning chemistry."
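The size dependence described in this article can be illustrated with the simplest textbook model of quantum confinement, a particle in a one-dimensional box. This is only a rough sketch under standard textbook assumptions, not the many-electron methods Chakraborty's group develops; the `energy_ev` helper is invented here for illustration. The point it shows is that the level spacing scales as 1/L², so shrinking the dot pushes the optical gap to higher energies without any change in chemical composition.

```python
# Particle-in-a-box sketch of quantum confinement (illustrative only:
# real quantum-dot modelling solves the many-electron Schrodinger
# equation, as the article describes). For a particle of mass m in a
# 1-D box of width L, the allowed energies are E_n = n^2 h^2 / (8 m L^2),
# so shrinking L widens the gaps between levels.

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def energy_ev(n: int, length_m: float) -> float:
    """Energy (in eV) of level n for an electron in a 1-D box of width length_m."""
    return n ** 2 * H ** 2 / (8 * M_E * length_m ** 2) / EV

for size_nm in (2, 5, 20):
    L = size_nm * 1e-9
    gap = energy_ev(2, L) - energy_ev(1, L)  # first excitation gap
    print(f"{size_nm:>2} nm box: E1 = {energy_ev(1, L):.4f} eV, gap = {gap:.4f} eV")
```

Running the loop shows the gap shrinking rapidly as the box grows, which is the qualitative behaviour behind size-tunable absorption in quantum dots.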
Tuesday, 31 March 2009 More Geekfreak / The Schrödinger Equation Schrödinger Equation In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics. In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only atomic and subatomic systems, electrons and atoms, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who discovered it in 1926. Schrödinger's equation can be mathematically transformed into Heisenberg's matrix mechanics, and into Feynman's path integral formulation. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is not as severe in Heisenberg's formulation and completely absent in the path integral. (From Wikipedia: http://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation) It's things like these that just make me wish I was better at the mathematical part of science. I really frickin' wish I knew exactly what all this is about. It's akin to a wormhole in my head leading to a whole new paradigm of knowledge that I could have but don't because I lack the electromagnetic field and space-time requirements for it. Quantum physics - The dreams stuff is made of. 
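The Wikipedia excerpt above describes the equation without writing it down. In its general time-dependent form it reads

```latex
i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) \;=\; \hat{H}\,\Psi(\mathbf{r},t)
```

where Ψ is the wavefunction (the quantum state), ħ is the reduced Planck constant, and Ĥ is the Hamiltonian operator encoding the system's total energy.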
Audio Candy: Avenged Sevenfold - Dear God Monday, 30 March 2009 Typing away furiously at the keys, aligning words on a document, cutting down on text to fit the word limit, switching from one assignment to another, flipping through pages of 3-inch thick books, running back from school to the library because of a realization that the previous book had a useful source, opening websites in new tabs, scouring over online resources, searching intensively for half an hour - AND always eventually finding what I need, sending out update emails, pondering over a theoretical argument, being in a state of academic thought and deliberation non-stop the whole day. That was roughly my state of mind from Friday through til today, and I LOVED it. I really love this intensive drowning in academic work, especially when it involves researching, sifting through books, considering arguments, formulating essays and writing. And I will always be anal because I don't want it to be anything but the best that can come from me, if not in quality then in the form of formatting and bibliography. Every source must be cited, every reference must be documented. And in the end I did learn stuff. I feel more well versed in the history of Xi'an and the business viability of its location and current economic situation, I feel less alien to the differences between selection, recruitment, performance management and compensation of Singapore and Britain. I feel leveled up. I guess when it comes to work, I'll always be an individualist. Today marked the end of the hell part of everyone's favouritely coined hell week for me. A clean desk is a sign of a cluttered desk drawer. Audio Candy: Broken Social Scene - Stars And Sons Soccer Philosophy I chanced upon an article today which talked about some of the things soccer has always driven me effortlessly to ponder about. 
I have itched to write about some of these things but never really got down to doing it, perhaps because there is, to some extent, a clichéd nature to soccer philosophy, often brought to the masses in random packets of gibberish by managers and coaches and ex-professionals. This article sums up a few of the things that have always tickled my mind when soccer is in question, and perhaps acts as a frame of mind for anyone who might be interested in knowing more about the sport beyond its facade of 11 men chasing after a ball, grossly overpaid players and petty violence. When you're on the pitch playing and there's nothing but you, the ball, all these things around you and a split second to make a decision, everything gets so elegantly summed up in a moment that it's like attaining some form of revelation with everything and nothing at all, all at once. One can never fully convince a Chinese man who knows no English that a particular Shakespearean prose is beautiful. All I can say is that I truly wish that the ones who can't appreciate it could know how much more it is than they think. Soccer brings out the philosopher in us By Douglas Todd When I need real insight into the meaning of life, I have been known to sidestep famous philosophers like William James, Jean-Paul Sartre and Lao-Tzu and go straight to the hard stuff: Books about soccer. There is nothing like soccer to focus the mind on the art of living, on making sense of the sweet bitterness of existence. For me soccer (a.k.a. football) has a complexity and cohesiveness the Olympics do not. The Olympics don't speak to me about philosophy, whereas the globe's most popular sport offers natural metaphors for life's fluidity, ambiguity, corruption, idealism, communality and beauty. Too many Olympic sports, with exceptions such as soccer, of course, and field hockey, require women and men to become like machines, fixated on going just a millimetre higher or a microgram heavier or a millisecond faster.
I am a tad biased (my sons, by the way, play soccer far more than I ever did.) But even those who don't like soccer have to acknowledge that a flood of good-to-great books have been written about it since Nick Hornby's surprising 1992 bestseller, Fever Pitch. Fever Pitch is about the inner workings of a boy-man from a divorced household who finds delight, torment and healing in the then-dreary London soccer team, Arsenal (which happens to be my favorite team in the English Premier League, whose season kicks off today.) Before highlighting some of the remarkable books written about soccer in the past 16 years, it's pleasing to confirm Vancouver author Alan Twigg has recently added the first Canadian voice to the pantheon of those who have leaned on the sport to say something important. In Full-Time: A Soccer Story, Twigg, with complete lack of pretension, offers large dollops of down-home philosophy as he recounts the way his over-50s team, the Point Grey Legends, jet off on a risky adventure to Spain to play several teams of ex-professionals. Along the journey, Twigg muses honestly about his own semi-erotic obsession with the ball. He delves into the vagaries of romance, the need for glory, self-doubt, the Canadian identity, aging, loyalty and how soccer connects people in weird ways. In a fine section on the unusual courage it takes to be a referee, Twigg pulls out the philosophical stops about the value of bringing order to the apparent chaos of life, comparing the ref to a priest. "The referee, like the priest, must be a complex personality. He must have a strong ego in order to rise to the challenge of his job, and yet he must resist all signs of his egocentricity." The referee plays a transcendent role. "In the eyes of the others, the referee can only be a loser, never a winner, and so he enters each match with the private hope that he might walk off the pitch at the end of ninety minutes as a completely unsung hero." 
Full-Time illustrates how serious content can be packed into books about this deceptively simple game enjoyed by billions globally, including millions of Canadian youth. These books explore the intersection of soccer with history, national culture, economics, politics and philosophy. Some of the best titles include Soccer in Sun and Shadow by Eduardo Galeano, a lyrical history of the game; Franklin Foer's How Soccer Explains the World: An Unlikely Theory of Globalization, and Alex Bellos's Futebol: The Brazilian Way Of Life, which brings out the game's perennial mix of joy and pathos. To my mind, however, no soccer book reveals a more subtle philosophical mind at work than David Winner's Brilliant Orange: The Neurotic Genius of Dutch Soccer. Brilliant Orange argues that the "Total Football" developed three decades ago by the Dutch national team reflects the often-difficult personalities of the people of the Netherlands. "Total Soccer" requires every player to, in effect, be able to switch to any position. Because space is always at a premium in their small country, Winner maintains the Dutch have learned to use it in wildly innovative ways. This is seen in Dutch architecture, art and society - and soccer. That said, understanding soccer fan(atic)s can be as interesting as analyzing the game and its implications. For raw literary power, there may be no more persuasive book than Among the Thugs: The Experience, and the Seduction, of Crowd Violence. In this early 1990s account, Granta Books editor Bill Buford enters the horrifying culture of British soccer hooligans. His gift is to make the reader feel the intoxicating attraction of mob mayhem. Why does soccer evoke wider horizons of meaning in so many? 
American writer David Goldblatt, author of The Ball is Round, said: "Milan Kundera (author of The Unbearable Lightness of Being) defended the role of the literary critic by arguing 'Without the meditative background that is criticism, works become isolated gestures, historical accidents, soon forgotten.' I would say the same of social history and sport." Soccer especially brings out the contemplative side of many people because it doesn't lend itself to statistics, as do baseball and the Olympics. And it doesn't require body-disguising equipment, like American football and hockey. Soccer is also so fluid, so non-mechanical, that describing the game and everything that goes into it often requires a touch of poetry. Twigg's book provides bursts of such poetry, in much the same way as the highly evocative Miracle of Castel Di Sangro. In that book, famous crime writer Joe McGinnis goes to Italy and uncovers the mix of valour, solidarity and immorality that go into how a tiny village's team climbs momentarily into the big leagues. One of the refreshing peculiarities of Twigg's soccer book is that he writes about actually trying to play the game with some skill. Twigg's also in his mid-50s, so his final reflections on the bravery of the solitary referee illustrate the wisdom that can come with age, the wisdom of bringing impartiality to a rough and tumble contest. By the end of the book, Twigg even thinks about the value for himself of "outgrowing" soccer. He quotes the Nigerian striker Kanu saying, "If you make football too important, you deprive it of its beauty." As Twigg considers detaching from the game that has provided him so much passion, purpose and meaning, it's not at all a stretch to say he is offering up ultimate philosophical insights about life itself. Saturday, 28 March 2009 Dear Stressed SMU Student, I had an emphatic day of work today. 
I woke up early to visit the trade office of International Enterprise Singapore (IE Singapore) at Bugis Junction Office Tower at 10am so that I could get some documented information and professional advice about local SMEs and Xi'an, which we have stipulated is a good place to do business in. It might seem a little odd that I'm doing this because most SMU students like it in the Li Ka Shing Library, where the internet, powerpoints and air conditioning are available in a comfort-zone kinda way, but I guess I just like the idea that the research I'm doing beyond the laptop can value-add to my project. So why not the trouble? Besides, I had started to hit a wall of sorts with regards to the information searching online, and my group's BSM report was due at 5pm. Once done with the visit to IE Singapore's office at 1pm, I went straight back to the National Library to resume my research for my human capital management (HCM) report that is due on Monday. Thankfully, the National Library isn't that far off from the office. The HCM report is proving to be very pressing, because the deadline is very near considering the scope of the report, which our professor has indicated is very difficult to do. And an exchange student group mate seems to be experiencing some difficulty in getting the information we need for our 3-man team, but it's alright, I appreciate a challenge. 2 hours on, and my research on HCM practices in Singapore's culture, while content-substantial, is still very vague, and I still hadn't begun researching Britain's HCM culture. So I had to apologetically inform my ethics group mates that I had to be a little later than the 1530h rendezvous time. This is because the National Library closes by 9pm, and while I need the information, I still had a BSM meeting to go to after ethics which might end late and cause me to not be able to utilize the library's facilities.
By 4pm, I was somewhat done with Singapore's research, so it was a hasty rush down to SMU's School of Business and I'd already felt quite bad that I had to keep my group mates waiting. Crashed into the GSR, settled down, and then it was diving headfirst into solid ethics debating for 2 hours on whether the Bureau of Land Management should allow Questar to continue its angular oil drilling at the expense of Wyoming's wildlife and environment. The results were a little inconclusive, especially that of sustainable development under Adam Smith's free market ethical argument, but I wouldn't discount the meeting as unproductive. As long as our directions were positively and constructively aligned - which we were - everything is constructive; even if we argue forever to find that an idea is useless, it has served to trim and shape our overall argument better. It was almost 6pm when we started to wrap up on the discussion and conclude that we need more meetings, so I hurried down to the School of Social Sciences at the other end of campus for my BSM meeting that I was already slightly late for (again!). A little more discussion and quite a bit of procrastination later, the meeting ended and it was 8pm. With a little bit more time to spare since the library closes at 9pm, it was back to the Singapore section at the 11th storey of the National Library to do some final research. After 9pm when the library closed, Angie and I had dinner, which was my first meal of the day (aside from the very delicious tangyuan beancurd she had mercifully brought down for me to snack on during BSM meeting). The day outside is done, but there was still more tidying up to do for tomorrow's presentation, and more work to fuss over for monday's HCM report submission, as well as my 5-page LTM learning journal. Who can forget DMA presentation on tuesday and ethics presentation on thursday, with more meetings to come? But that's for tomorrow to worry about. 
I'm gonna hit the sack soon, because I've gotta be up at 7am to prep for my presentation. :] As I look back on the day, time has passed incredibly fast, and I reckon it's because it has been nothing but work and thinking all day. But it's really okay, because in the end I'm still smiling about how the day has unfolded without me getting into an accident or breaking down, and you know what, Stressed SMU Student? It is your own imperative and your own choice to decide whether you want to be bogged down or purposified by hell weeks 12 and 13. At the rate the school is going, if everyone can see to it that it's not that bad, then everyone would be much happier. Make a choice and be positive about things, if not for anyone else, then for yourself. Happy SMU Student (In reality, please just suck it up for me. One more whiner who comes along will have me telling him or her how pathetic he or she looks next to my ethics group mate, who is taking 6.5 mods this term and is still being a delightful group mate to work with.) Audio Candy: Metro Station - Shake It Wednesday, 25 March 2009 The/That SOB So many years on, and SMU is still all about its business school. Sure didn't help when the vice bigwig committed the latest faux pas during the dialogue session by saying that there are indeed 'lesser schools' around that should be easier to get a degree from. I've never disapproved of SMU as a school with a business slant - it is a corporate management university after all - but what really riles me up is that the business faculty still holds on to a deliberately ignorant and baseless perspective that other faculties are useless, impractical, not worth one's time and basically pathetic, and by virtue of the university's fiercely business-oriented standing holds these values and perspectives to be presumably true. On the premise of intellectual rigour alone, social sciences and research are the basis of any theoretical standing that business practices may wish to stem their discipline upon.
Any social science student can do as well as the average student in business, and I am confident that not every business student can do well at all in a social science course. And face this - your business degree has an expiry date, unlike a social science degree.

I am of the opinion that not every business student is unenlightened - there are a good deal of students who know what they're in it for and are respectful of what others pursue - but a number of people, however small, is enough to create the impression that business is a better faculty, and it's worse if they have the backing of the administration and the impression the university gives to hold up their ignorant prejudices.

A friend of mine recently tried to apply for Unilever, which advertised 40 vacancies, and got rejected with the notification that more than 1000 applicants had signed up for it.

Wake up! - Mahatma Gandhi

Audio Candy: TobyMac - New World

Saturday, 21 March 2009

Let The Good Times Roll

It's been over two weeks since the last update and quite a bit has happened. Socscistan: The Little Republic had a pre-launch on saturday during the SMU Openhouse at the social sciences booth, and was later officially launched the following monday. It now sits snug at the SOSS lift waiting to be picked up by potential readers. I think at 5000 copies and a 428-strong population in the social sciences faculty we might have overproduced a little...

Angie and I caught the ballet rendition of Cinderella by the Singapore Dance Theatre, which proved to be extremely impressive and entertaining. The choreography was original, creative and witty. This should pave the way for more performances for us, considering that ticket prices can actually be quite attractive for students.

Prawning has been a new obsession recently. There is something quite therapeutic and obsessively engaging about catching your own stash and then barbecuing it to eat. I guess it really works when the rewards are so tangible.
I think we're on our way past pretending to be pro and actually notching a decent amount of catch. In truth, the past 2 weeks have been pretty hectic. But I'm starting to just focus on the upcoming summer holidays, although I suppose I'll be quite busy with internship. I had an interview with Pearson Education South Asia Ltd on wednesday and it seemed really positive, so employment may be quite safely in the bag. The location leaves plenty to be desired though, with it being situated at First Lok Yang Road, which is like a 20min bus ride away from Boon Lay interchange.

I've got a street soccer tournament up tomorrow and I'm dead tired. Wish me luck. :]

99 percent of lawyers give the rest a bad name.

Audio Candy: Fall Out Boy Feat. John Mayer - Beat It

Wednesday, 4 March 2009

Comment Posted

If it had a home would it be my eyes?

The mind can be likened to a computer for more than the obvious reason of its role as a processor. To broach the sensitive, just as a thought: the computer is a human and the computer-user is whatever one might wanna consider to be divine and above. All of consciousness is then whenever the computer is on, without its being aware of anything that is really guiding its operations. Then make the computer think everything it does is of its own choice and will, and you will safely remain invisible to the computer.

Colour psychology can be really fascinating. For example, it has been shown that red can, quite universally across cultures, rile people up more just by them looking at it. Sometimes, a woman can seem more attractive by just wearing a red dress. I think this means that red serves to heighten emotion and arousal such that whatever the base emotion - aggression or attraction just to name two - gets elevated. That is the cause-effect part of things.
But I also believe the colours one chooses to decorate or shroud one's more personal and functional belongings, such as the walls of one's room, one's handphone, one's watch, etc (as opposed to one's living room or one's occasion clothes) are indicative of one's personality and self-identity. It might sound a little obvious but I'm of the opinion that this indicator can have quite powerful predictive ability. And I think it is more important to look at the personal things that don't often get shown to the world because those non-private belongings have a greater degree of conscious statement-making, rather than what one really subconsciously perceives of oneself. Subconscious perception is interesting in that it reveals far more. One may choose to hide a weakness by consciously displaying colours or behaving in a way that announce the opposite of that weakness. But in one's subconscious self-percept, one knows in a non-conscious manner about what one dislikes about oneself, which is often the basis for insecurity. As for those who are more comfortable as they are, the colours between what's private and what's personal overlap more. But the fundamental idea is basically that what one likes to colour his or her personal belongings is very much more telling. Offhand, I can make an educated guess about what some colours represent. Red probably signifies an emotional and passionate streak. Dark blue would probably represent a powerful calmness and/or stability. Green might have something to do with zest, quirkiness, being different or intelligent. Brown strikes me as down-to-earth. And et cetera the guesses may go. I am half listening to music and half trying to sleep in the sofa chairs fashioned into a coffin-like womb in the school library after my cognitive psychology paper (a conservative estimate brings me to establish that 93.7% of the time I'm in the library for purely sleeping purposes), and also entertaining half-baked random thoughts flittering around. 
Audio Candy: Broken Social Scene - Lover's Spit

Tuesday, 3 March 2009

We Hold These Truths To Be Self-Evident

If we hope enough without expecting too much, things will always work out. And if we are curious, humble and open enough, answers will come.

Cognitive Psychology has been the dominant peripheral activity of the daily grind recently. While many have expressed disdain towards the impending mid-terms this week, I'm strangely looking very much forward to it. I think this is very much an indicator of how comfortable I feel studying and even embracing psychology as a discipline, and I'm very much fortunate for that.

With pride, Socscistan: The Little Republic will finish printing and come in by Friday. All of us can't believe how much of a miracle this little newsletter-magazine has been, from our maiden meeting in late January where we were all direction-impaired beings foraying into something that we had no idea could ever take off, to a confident publication that has exceeded the original 14 pages we thought we'd struggle to fill up to a total of 28 pages. Socscistan: The Little Republic will be dropped off in (hopefully not so) insidious and strategic locations on Saturday in conjunction with the SMU Open House, so look out for it. The team is confident that it won't disappoint. :]

"Humanity takes itself too seriously. It is the world's original sin. If the caveman had known how to laugh, History would have been different." - Oscar Wilde

Audio Candy: Jamiroquai - Just Dance
Electronic band structure
From Wikipedia, the free encyclopedia

In solid-state physics, the electronic band structure (or simply band structure) of a solid describes those ranges of energy that an electron within the solid may have (called energy bands, allowed bands, or simply bands) and ranges of energy that it may not have (called band gaps or forbidden bands). Band theory derives these bands and band gaps by examining the allowed quantum mechanical wave functions for an electron in a large, periodic lattice of atoms or molecules. Band theory has been successfully used to explain many physical properties of solids, such as electrical resistivity and optical absorption, and forms the foundation of the understanding of all solid-state devices (transistors, solar cells, etc.).

Why bands and band gaps occur

[Figure: animation of band formation and how electrons fill them in a metal and an insulator]

The electrons of a single, isolated atom occupy atomic orbitals. Each orbital forms at a discrete energy level. When multiple atoms join together to form a molecule, their atomic orbitals combine to form molecular orbitals, each of which forms at a discrete energy level. As more atoms are brought together, the molecular orbitals extend larger and larger, and the energy levels of the molecule become increasingly dense. Eventually, the collection of atoms forms a giant molecule, or in other words, a solid. For this giant molecule, the energy levels are so close that they can be considered to form a continuum.

Band gaps are essentially leftover ranges of energy not covered by any band, a result of the finite widths of the energy bands. The bands have different widths, with the widths depending upon the degree of overlap in the atomic orbitals from which they arise. Two adjacent bands may simply not be wide enough to fully cover the range of energy.
For example, the bands associated with core orbitals (such as 1s electrons) are extremely narrow due to the small overlap between adjacent atoms. As a result, there tend to be large band gaps between the core bands. Higher bands involve larger and larger orbitals with more overlap, becoming progressively wider at high energy so that there are no band gaps at high energy.

Basic concepts

Assumptions and limits of band structure theory

To start out, it is important to note what has been assumed in order to gain the simplicity of band theory:

1. Infinite-size system: For the bands to be continuous, we must consider a large piece of material. The concept of band structure can be extended to systems which are only "large" along reduced dimensions, such as two-dimensional electron systems.
2. Homogeneous system: The notion of a band structure as an intrinsic property of a material assumes that the material is homogeneous in some way. Practically, this means that band structure describes the bulk inside a uniform piece of material.
3. Non-interactivity: The band structure describes "single electron states". The existence of these states assumes that the electrons travel in a static potential without dynamically interacting with lattice vibrations, other electrons, photons, etc.

The above assumptions are broken in a number of important practical situations, and the use of band structure requires one to keep a close check on the limitations of band theory:

• Inhomogeneities and interfaces: Near surfaces, junctions, and other inhomogeneities, the bulk band structure is disrupted. Not only are there local small-scale disruptions (e.g., surface states or dopant states inside the band gap), but also local charge imbalances. These charge imbalances have electrostatic effects that extend deeply into semiconductors, insulators, and the vacuum (see doping, band bending).
• Along the same lines, most electronic effects (capacitance, electrical conductance, electric-field screening) involve the physics of electrons passing through surfaces and/or near interfaces. The full description of these effects, in a band structure picture, requires at least a rudimentary model of electron-electron interactions (see space charge, band bending).
• Small systems: For systems which are small along every dimension (e.g., a small molecule or a quantum dot), there is no continuous band structure. The crossover between small and large dimensions is the realm of mesoscopic physics.
• Strongly correlated materials: Some materials (superconductors, Mott insulators, and more) simply cannot be understood in terms of single-electron states. The electronic band structures of these materials are poorly defined (or at least, not uniquely defined) and may not provide useful information about their physics.

Crystalline symmetry and wavevectors

[Figure: Brillouin zone of a face-centered cubic lattice showing labels for special symmetry points.]

[Figure: Band structure plot for Si, Ge, GaAs and InAs generated with a tight binding model. Note that Si and Ge are indirect band gap materials, while GaAs and InAs are direct.]

Main articles: Bloch wave and Brillouin zone

Band structure calculations take advantage of the periodic nature of a crystal lattice, exploiting its symmetry. The single-electron Schrödinger equation is solved for an electron in a lattice-periodic potential, giving Bloch waves as solutions:

\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}} u_n(\mathbf{r}),

where k is called the wavevector. For each value of k, there are multiple solutions to the Schrödinger equation labelled by n, the band index, which simply numbers the energy bands. Each of these energy levels evolves smoothly with changes in k, forming a smooth band of states. For each band we can define a function En(k), which is the dispersion relation for electrons in that band.
The wavevector takes on any value inside the Brillouin zone, which is a polyhedron in wavevector space that is related to the crystal's lattice. Wavevectors outside the Brillouin zone simply correspond to states that are physically identical to those states within the Brillouin zone. Special high symmetry points in the Brillouin zone are assigned labels like Γ, Δ, Λ, Σ.

It is difficult to visualize the shape of a band as a function of wavevector, as it would require a plot in four-dimensional space, E vs. kx, ky, kz. In scientific literature it is common to see band structure plots which show the values of En(k) for values of k along straight lines connecting symmetry points. Another method for visualizing band structure is to plot a constant-energy isosurface in wavevector space, showing all of the states with energy equal to a particular value. The isosurface of states with energy equal to the Fermi level is known as the Fermi surface.

Energy band gaps can be classified using the wavevectors of the states surrounding the band gap:

• Direct band gap: the lowest-energy state above the band gap has the same k as the highest-energy state beneath the band gap.
• Indirect band gap: the closest states above and beneath the band gap do not have the same k value.

Band structures in non-crystalline solids

Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band structures.[citation needed] These are somewhat more difficult to study theoretically since they lack the simple symmetry of a crystal, and it is not usually possible to determine a precise dispersion relation. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.
Density of states

Main article: Density of states

The density of states function g(E) is defined as the number of electronic states per unit volume, per unit energy, for electron energies near E. The density of states function is important for calculations of effects based on band theory. It appears in calculations for optical absorption where it provides both the number of excitable electrons and the number of final states for an electron. It appears in calculations of electrical conductivity where it provides the number of mobile states, and in computing electron scattering rates where it provides the number of final states after scattering. For energies inside a band gap, g(E) = 0.

Filling of bands

At thermodynamic equilibrium, the likelihood of a state of energy E being filled with an electron is given by the Fermi–Dirac distribution, a thermodynamic distribution that takes into account the Pauli exclusion principle:

f(E) = \frac{1}{1 + e^{(E-\mu)/(k_{\rm B} T)}}

where:

• kBT is the product of Boltzmann's constant and temperature, and
• µ is the total chemical potential of electrons, or Fermi level (in semiconductor physics, this quantity is more often denoted EF).

The Fermi level of a solid is directly related to the voltage on that solid, as measured with a voltmeter. Conventionally, in band structure plots the Fermi level is taken to be the zero of energy (an arbitrary choice).

The density of electrons in the material is simply the integral of the Fermi–Dirac distribution times the density of states:

N/V = \int_{-\infty}^{\infty} g(E) f(E)\, dE

Although there are an infinite number of bands and thus an infinite number of states, there are only a finite number of electrons to place in these bands. The preferred value for the number of electrons is a consequence of electrostatics: even though the surface of a material can be charged, the internal bulk of a material prefers to be charge neutral.
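The Fermi–Dirac occupation above is easy to evaluate numerically. The following sketch checks its limiting behaviour; the values chosen for µ and kBT are illustrative assumptions (Fermi level at the zero of energy, roughly room temperature), not numbers from the article:

```python
import math

def fermi_dirac(E, mu, kT):
    """Occupation probability of a state at energy E (all quantities in eV)."""
    return 1.0 / (1.0 + math.exp((E - mu) / kT))

mu = 0.0    # Fermi level taken as the zero of energy, as in band structure plots
kT = 0.025  # k_B * T at roughly room temperature, in eV (illustrative)

print(fermi_dirac(mu, mu, kT))        # exactly 0.5 right at the Fermi level
print(fermi_dirac(mu - 1.0, mu, kT))  # states well below mu are essentially filled
print(fermi_dirac(mu + 1.0, mu, kT))  # states well above mu are essentially empty
```

In a real density-of-electrons calculation this f(E) would be multiplied by g(E) and integrated over energy, as in the expression for N/V above.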
The condition of charge neutrality means that N/V must match the density of protons in the material. For this to occur, the material electrostatically adjusts itself, shifting its band structure up or down in energy (thereby shifting g(E)), until it is at the correct equilibrium with respect to the Fermi level.

Names of bands near the Fermi level (conduction band, valence band)

A solid has an infinite number of allowed bands, just as an atom has infinitely many energy levels. However, most of the bands simply have too high energy, and are usually disregarded under ordinary circumstances.[1] Conversely, there are very low energy bands associated with the core orbitals (such as 1s electrons). These low-energy core bands are also usually disregarded since they remain filled with electrons at all times, and are therefore inert.[2] Likewise, materials have several band gaps throughout their band structure.

The most important bands and band gaps—those relevant for electronics and optoelectronics—are those with energies near the Fermi level. The bands and band gaps near the Fermi level are given special names, depending on the material:

• In a semiconductor or band insulator, the Fermi level is surrounded by a band gap, referred to as the band gap (to distinguish it from the other band gaps in the band structure). The closest band above the band gap is called the conduction band, and the closest band beneath the band gap is called the valence band. The name "valence band" was coined by analogy to chemistry, since in many semiconductors the valence band is built out of the valence orbitals.
• In a metal or semimetal, the Fermi level is inside of one or more allowed bands. In semimetals the bands are usually referred to as "conduction band" or "valence band" depending on whether the charge transport is more electron-like or hole-like, by analogy to semiconductors.
In many metals, however, the bands are neither electron-like nor hole-like, and are often just called the "valence band" as they are made of valence orbitals.[3] The band gaps in a metal's band structure are not important for low energy physics, since they are too far from the Fermi level.

Theory of band structures in crystals

The ansatz is the special case of electron waves in a periodic crystal lattice using Bloch waves as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors (b1, b2, b3). Now, any periodic potential V(r) which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as:

V(\mathbf{r}) = \sum_{\mathbf{K}} V_{\mathbf{K}} e^{i \mathbf{K}\cdot\mathbf{r}}

where K = m1b1 + m2b2 + m3b3 for any set of integers (m1, m2, m3). From this theory, an attempt can be made to predict the band structure of a particular material; however, most ab initio methods for electronic structure calculations fail to predict the observed band gap.

Nearly free electron approximation

In the nearly free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's theorem, which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors.
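The Fourier expansion over reciprocal lattice vectors guarantees by construction that V inherits the lattice periodicity. A one-dimensional sketch of this (the lattice constant and the handful of coefficients V_K are arbitrary illustrative choices, not values from the article):

```python
import math
import cmath

a = 1.0                # lattice constant, arbitrary units (illustrative)
b = 2 * math.pi / a    # primitive reciprocal lattice vector in 1D
V_K = {-1: 0.3, 0: -1.0, 1: 0.3}   # a few illustrative Fourier coefficients V_K

def V(x):
    """Periodic potential built only from reciprocal-lattice harmonics m*b."""
    return sum(c * cmath.exp(1j * m * b * x) for m, c in V_K.items()).real

# Periodicity V(x + a) == V(x) follows automatically, since every term
# picks up a phase e^{i m b a} = e^{2*pi*i*m} = 1 under the shift x -> x + a.
print(abs(V(0.37 + a) - V(0.37)))  # ~0 up to floating-point error
```

Here the truncation to three coefficients is purely for illustration; keeping only K = ±b and K = 0 gives V(x) = V_0 + 2·V_1·cos(bx), the cosine potential often used in nearly-free-electron exercises.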
The consequences of periodicity are described mathematically by the Bloch wavefunction:

\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i \mathbf{k}\cdot\mathbf{r}} u_n(\mathbf{r})

where the function u_n(\mathbf{r}) is periodic over the crystal lattice, that is, u_n(\mathbf{r}) = u_n(\mathbf{r-R}). Here index n refers to the n-th energy band, wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site.[4]

The NFE model works particularly well in materials like metals where distances between neighbouring atoms are small. In such materials the overlap of atomic orbitals and potentials on neighbouring atoms is relatively large. In that case the wave function of the electron can be approximated by a (modified) plane wave. The band structure of a metal like aluminium even gets close to the empty lattice approximation.

Tight binding model

Main article: Tight binding

The opposite extreme to the nearly free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight binding model assumes that the solution \Psi of the time-independent single-electron Schrödinger equation is well approximated by a linear combination of atomic orbitals \psi_n(\mathbf{r}):[5]

\Psi(\mathbf{r}) = \sum_{n,\mathbf{R}} b_{n,\mathbf{R}} \psi_n(\mathbf{r-R}),

where the coefficients b_{n,\mathbf{R}} are selected to give the best approximate solution of this form. Index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by:[6][7]

a_n(\mathbf{r-R}) = \frac{V_{C}}{(2\pi)^{3}} \int_{BZ} d\mathbf{k}\, e^{-i\mathbf{k}\cdot(\mathbf{R-r})} u_{n\mathbf{k}};

in which u_{n\mathbf{k}} is the periodic part of the Bloch wave and the integral is over the Brillouin zone. Here index n refers to the n-th energy band in the crystal.
The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites R are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the n-th energy band as:

\Psi_{n,\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{R}} e^{-i\mathbf{k}\cdot(\mathbf{R-r})} a_n(\mathbf{r-R}).

The TB model works well in materials with limited overlap between atomic orbitals and potentials on neighbouring atoms. Band structures of materials like Si, GaAs, SiO2 and diamond for instance are well described by TB-Hamiltonians on the basis of atomic sp3 orbitals. In transition metals a mixed TB-NFE model is used to describe the broad NFE conduction band and the narrow embedded TB d-bands. The radial functions of the atomic orbital part of the Wannier functions are most easily calculated by the use of pseudopotential methods. NFE, TB or combined NFE-TB band structure calculations,[8] sometimes extended with wave function approximations based on pseudopotential methods, are often used as an economic starting point for further calculations.

KKR model

The simplest form of this approximation centers non-overlapping spheres (referred to as muffin tins) on the atomic positions. Within these regions, the potential experienced by an electron is approximated to be spherically symmetric about the given nucleus. In the remaining interstitial region, the screened potential is approximated as a constant. Continuity of the potential between the atom-centered spheres and interstitial region is enforced.
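As a concrete illustration of the tight-binding idea, consider the standard textbook case (not a calculation from this article) of a one-dimensional chain with one orbital per site: keeping only an on-site energy ε0 and a nearest-neighbour hopping integral t gives the dispersion E(k) = ε0 − 2t·cos(ka), a single band whose width 4t is set by the orbital overlap. The parameter values below are arbitrary assumptions:

```python
import math

eps0, t, a = 0.0, 1.0, 1.0   # illustrative on-site energy, hopping, lattice spacing

def E(k):
    """1D nearest-neighbour tight-binding band: E(k) = eps0 - 2 t cos(k a)."""
    return eps0 - 2 * t * math.cos(k * a)

# Sample one Brillouin zone, k in [-pi/a, pi/a]; the band spans
# [eps0 - 2t, eps0 + 2t], i.e. a bandwidth of 4t.
ks = [math.pi * n / 100 for n in range(-100, 101)]
band = [E(k) for k in ks]
print(min(band), max(band))
```

This is the narrow-band limit the article describes: weak overlap (small t) gives a narrow band, consistent with the statement that core bands are extremely narrow while higher bands are wide.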
A variational implementation was suggested by Korringa and by Kohn and Rostoker, and is often referred to as the KKR model.[9][10]

Density-functional theory

In recent physics literature, a large majority of the electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e., a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density. DFT-calculated bands are in many cases found to be in agreement with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape is typically well reproduced by DFT. But there are also systematic errors in DFT bands when compared to experimental results. In particular, DFT seems to systematically underestimate by about 30-40% the band gap in insulators and semiconductors.

It is commonly believed that DFT is a theory to predict ground state properties of a system only (e.g. the total energy, the atomic structure, etc.), and that excited state properties cannot be determined by DFT. This is a misconception. In principle, DFT can determine any property (ground state or excited state) of a system given a functional that maps the ground state density to that property. This is the essence of the Hohenberg–Kohn theorem.[11] In practice, however, no known functional exists that maps the ground state density to excitation energies of electrons within a material. Thus, what in the literature is quoted as a DFT band plot is a representation of the DFT Kohn–Sham energies, i.e., the energies of a fictive non-interacting system, the Kohn–Sham system, which has no physical interpretation at all.
The Kohn–Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn–Sham energies, as there is for Hartree–Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, Kohn–Sham based DFT is not a band theory, i.e., not a theory suitable for calculating bands and band-plots. In principle time-dependent DFT can be used to calculate the true band structure, although in practice this is often difficult. A popular approach is the use of hybrid functionals, which incorporate a portion of Hartree–Fock exact exchange; this produces a substantial improvement in predicted bandgaps of semiconductors, but is less reliable for metals and wide-bandgap materials.[12]

Green's function methods and the ab initio GW approximation

To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground state (the total energy) and also excited state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way.
The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.

Mott insulators

Main article: Mott insulator

Although the nearly free electron approximation is able to describe many properties of electron band structures, one consequence of this theory is that it predicts the same number of electrons in each unit cell. If the number of electrons is odd, we would then expect that there is an unpaired electron in each unit cell, and thus that the valence band is not fully occupied, making the material a conductor. However, materials such as CoO that have an odd number of electrons per unit cell are insulators, in direct conflict with this result. This kind of material is known as a Mott insulator, and requires inclusion of detailed electron-electron interactions (treated only as an averaged effect on the crystal potential in band theory) to explain the discrepancy. The Hubbard model is an approximate theory that can include these interactions. It can be treated non-perturbatively within the so-called dynamical mean field theory, which bridges the gap between the nearly free electron approximation and the atomic limit.

Other models

Calculating band structures is an important topic in theoretical solid state physics. In addition to the models mentioned above, other models include the following:

• Empty lattice approximation: the "band structure" of a region of free space that has been divided into a lattice.
• k·p perturbation theory is a technique that allows a band structure to be approximately described in terms of just a few parameters. The technique is commonly used for semiconductors, and the parameters in the model are often determined by experiment.
• The Kronig-Penney model, a one-dimensional rectangular well model useful for illustration of band formation. While simple, it predicts many important phenomena, but is not quantitative.
• Hubbard model

The band structure has been generalised to wavevectors that are complex numbers, resulting in what is called a complex band structure, which is of interest at surfaces and interfaces.

Each model describes some types of solids very well, and others poorly. The nearly free electron model works well for metals, but poorly for non-metals. The tight binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).

Band diagrams

To understand how band structure changes relative to the Fermi level in real space, a band structure plot is often first simplified in the form of a band diagram. In a band diagram the vertical axis is energy while the horizontal axis represents real space. Horizontal lines represent energy levels, while blocks represent energy bands. When the horizontal lines in these diagrams are slanted then the energy of the level or band changes with distance. Diagrammatically, this depicts the presence of an electric field within the crystal system. Band diagrams are useful in relating the general band structure properties of different materials to one another when placed in contact with each other.

References

1. ^ High-energy bands are important for electron diffraction physics, where the electrons can be injected into a material at high energies; see Stern, R.; Perry, J.; Boudreaux, D. (1969). "Low-Energy Electron-Diffraction Dispersion Surfaces and Band Structure in Three-Dimensional Mixed Laue and Bragg Reflections". Reviews of Modern Physics 41 (2): 275. Bibcode:1969RvMP...41..275S. doi:10.1103/RevModPhys.41.275.
2. ^ Low-energy bands are however important in the Auger effect.
3. ^ In copper, for example, the effective mass is a tensor and also changes sign depending on the wave vector, as can be seen in the de Haas–van Alphen effect; see http://www.phys.ufl.edu/fermisurface/
4. ^ Kittel, p. 179
5. ^ Kittel, pp. 245-248
6. ^ Kittel, Eq. 42 p. 267
7. ^ Daniel Charles Mattis (1994).
The Many-Body Problem: Encyclopaedia of Exactly Solved Models in One Dimension. World Scientific. p. 340. ISBN 981-02-1476-6.
8. ^ Walter Ashley Harrison (1989). Electronic Structure and the Properties of Solids. Dover Publications. ISBN 0-486-66021-4.
9. ^ Joginder Singh Galsin (2001). Impurity Scattering in Metal Alloys. Springer. Appendix C. ISBN 0-306-46574-4.
10. ^ Kuon Inoue, Kazuo Ohtaka (2004). Photonic Crystals. Springer. p. 66. ISBN 3-540-20559-4.
11. ^ Hohenberg, P.; Kohn, W. (Nov 1964). "Inhomogeneous Electron Gas". Phys. Rev. 136 (3B): B864–B871. Bibcode:1964PhRv..136..864H. doi:10.1103/PhysRev.136.B864.
12. ^ Paier, J.; Marsman, M.; Hummer, K.; Kresse, G.; Gerber, IC.; Angyán, JG. (Apr 2006). "Screened hybrid density functionals applied to solids". J Chem Phys 124 (15): 154709. Bibcode:2006JChPh.124o4709P. doi:10.1063/1.2187006. PMID 16674253.

Further reading

1. Microelectronics, by Jacob Millman and Arvin Gabriel, ISBN 0-07-463736-3, Tata McGraw-Hill Edition.
2. Solid State Physics, by Neil Ashcroft and N. David Mermin, ISBN 0-03-083993-9
3. Elementary Solid State Physics: Principles and Applications, by M. Ali Omar, ISBN 0-201-60733-6
4. Electronic and Optoelectronic Properties of Semiconductor Structures, Chapters 2 and 3, by Jasprit Singh, ISBN 0-521-82379-X
5. Electronic Structure: Basic Theory and Practical Methods, by Richard Martin, ISBN 0-521-78285-6
6. Condensed Matter Physics, by Michael P. Marder, ISBN 0-471-17779-2
7. Computational Methods in Solid State Physics, by V V Nemoshkalenko and N.V. Antonov, ISBN 90-5699-094-2
8. Elementary Electronic Structure, by Walter A. Harrison, ISBN 981-238-708-0
9. Pseudopotentials in the Theory of Metals, by Walter A. Harrison, W.A. Benjamin (New York) 1966
10. Tutorial on Bandstructure Methods, by Dr. Vasileska (2008)
Partnering Events: TechConnect Summit, Clean Technology 2008

Quantum Gates Simulator Based on DSP TI6711
V.H. Tellez, C. Iuga, G.I. Duchen, A. Campero
Universidad Autonoma Metropolitana, MX

Keywords: quantum gates, quantum bits, Hamiltonian, simulation

Quantum theory has found a new field of application in the information and computation fields during recent years. We developed a Quantum Gate Simulator based on the Digital Signal Processor (DSP) TI6711, using the Hamiltonian in the time-dependent Schrödinger equation. The Hamiltonian describes the quantum system by manipulating a quantum bit (qubit) using unitary matrices. The gates simulated are the conditional NOT operation, the controlled-NOT gate, the multi-bit controlled-NOT (Toffoli) gate, the rotation gate (Hadamard transform), and the twiddle gate, all useful in quantum computation due to their inherently reversible characteristic. With the simulation process, we have obtained approximately 95% fidelity for the action of the gate on arbitrary two- and three-qubit input states. We have determined an average error probability bounded above by 0.07 ± 0.01.

Nanotech 2008 Conference Program Abstract
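The gates named in the abstract are small unitary matrices acting on vectors of complex amplitudes, which can be sketched in a few lines of NumPy. This is an illustrative sketch under standard quantum-computing conventions, not the authors' DSP implementation; the fidelity computed here is the usual overlap measure, analogous to the figure quoted in the abstract.

```python
import numpy as np

# Illustrative sketch only -- not the authors' DSP implementation.
# An n-qubit state is a vector of 2**n complex amplitudes; each gate
# is a unitary matrix acting on that vector.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard (rotation) gate
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)        # controlled-NOT gate

# Start in |00>, apply H to the first qubit, then CNOT: this builds
# the Bell state (|00> + |11>)/sqrt(2).
ket00 = np.array([1, 0, 0, 0], dtype=complex)
state = np.kron(H, I2) @ ket00
bell = CNOT @ state

# Fidelity of the result against the ideal Bell state.
ideal = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
fidelity = abs(np.vdot(ideal, bell)) ** 2
print(f"fidelity = {fidelity:.3f}")  # -> fidelity = 1.000
```

Because every gate here is unitary (U U† = I), the circuit is inherently reversible, which is the property the abstract highlights.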
Saturday, April 30, 2016

Is Civility Lost On The Internet?

Maintaining a blog makes me a potential target, especially considering that I have an international readership and currently run over 50,000 hits per month. Over the years I've had some pretty hateful things said about me and my opinions, and I've allowed the vast majority of those comments to be posted, as I believe in free discourse and open speech. I've only deleted comments a handful of times even if they strongly opposed me, and those deletions were because of inappropriate language (kids read this blog) or racist comments.

One of the points of contention in the past has been the fact that very few vets take payment plans. This was a big discussion on a post I made back in 2012, though it has cropped up in some other blogs I've written. The 2012 post had several rather strong comments, including some rather hateful ones directed at me and others of my profession. I ended up closing it to further discussion because of the direction the conversations were taking.

A few weeks ago I received a brief email from m_michaels17. Here it is in its entirety.

You should not be a vet. Your views are twisted and vile. Payment plans don't work? They work fine at my clinic. They work in every other business and practice. There are two professions that only except cash... drug dealer and veterinarian.

The personal attack didn't really bother me too much. I have a pretty thick skin and realize that people like this are in the wrong. In fact, it somewhat amused me. What did bother me was that this person felt that it was worth their time to send a personal email comparing me and my whole profession to drug dealers and making a rather hostile attack. Who feels this strongly, and believes that it's okay to spend their time reaching out and directly lambasting a person whom they do not know? Unfortunately I see this far too often on the internet.
It is extremely easy to post something hateful and disrespectful from the anonymity of your own home, things that the same person would likely never say in person. Vehemence and polarization are becoming the rule in our modern society, and I think it has trended this way in large part because of the internet.

These attitudes are especially prevalent in the current US election cycle. People are losing friendships because of Facebook posts. Democrats are demonizing Republicans, and Republicans are doing the same back across the aisle. Some of this is done at protests and rallies, but much of it is conducted online through blogs, Twitter, Facebook, and other social media. It is really easy to compare your opponent to Hitler and call his supporters vile names when you aren't face-to-face with them.

Now don't get me wrong. Hateful, irresponsible comments did not begin in the 21st century. One of the dirtiest political campaigns in US history was between two founding fathers, Adams and Jefferson, in 1800. There were some rather incredible statements made about each candidate, none of which would be said in 2016 (seriously, it's rather interesting to read about... some links are here, here, and here). But I think that the internet has made these comments easier to find and faster to spread.

Whatever happened to civil debate? What happened to disagreement without animosity? Why is it that so many people go on personal attacks when they find something they don't like? And why do so many people go off on rants without actually knowing the facts or doing any research?

For example, let's look at m_michaels17 again. Let's break down those comments with reason and a lack of vitriol.

You should not be a vet.

Why not?  Because you disagree with me?  Because you think my viewpoints are wrong?
I have a successful practice, my clients often comment about how I care about them and their pets (we do surveys), I am good at what I do, I help families every day and have saved many lives over my career.  I have volunteered my time to go to schools and children's museums to educate kids about pet care and veterinary medicine.  I have mentored numerous veterinary students and newly graduated vets.  But because I don't take payment plans I should get out of the profession.  Right? Your views are twisted and vile.  So it's twisted and vile to not accept a credit risk when companies who do it professionally won't extend credit to a person?  It's twisted and vile to expect payment from clients for the care I provide so I can pay my staff, order drugs, pay utilities, fix equipment, and support my family?  It's twisted and vile to expect people to give back for services rendered?  Here we see a great example of a logical fallacy called argumentum ad hominem, which happens when someone avoids the actual topic by directing an attack at their opponent.  Rather than presenting any evidence that payment plans work in veterinary medicine, or countering my argument that they don't work, m_michaels17 simply attacks me directly. Payment plans don't work? They work fine at my clinic.  Congratulations for your clinic being one of the very, very few that actually does this successfully!  Again, we have a logical fallacy.  Forget the fact that I have 18 years of experience as a vet and a total of 32 years in the profession.  Forget the fact that most veterinary consultants recommend against in-house payment plans.  Forget the fact that most vets who have done payment plans say that they rarely get paid in full on these bills.  Because it works at the clinic m_michaels17 goes to it must be able to work at every veterinary practice.  I'm also skeptical that this clinic does well with their payment plans.  
However, because I don't know the facts on that situation I'm going to avoid making assumptions. Well, we certainly accept more than cash.  We also accept Mastercard, Visa, Discover, American Express, and Care Credit.  Every single one of those are credit systems where a client can repay the debt over time.  Hey, there is your payment plan!  See, we do accept payment plans!  We just let the client pay through someone else's credit system, and let that company accept the risks.  If someone cannot qualify for a credit line through a company that does this professionally, why should we be responsible for taking that risk?  If a person can't get a credit card or Care Credit, they are a very high risk.  Oh, and every other profession works on payment plans?  So when you go to the grocery store and want to purchase $400 worth of groceries that store will allow you to pay them back over a few months.  Right?  What about Walmart, Olive Garden, and your local jeweler?  When you go to your pharmacy to pick up a $300 prescription they don't expect full payment, do they?  I'm sure a band at a wedding accepts payment over the course of a year after they've played at a ceremony.  So only drug dealers and veterinarians accept up-front payment?  No other profession does?  When I recently went to my ophthalmologist they certainly expected full payment for my new glasses before they would order them.  And I think that most reasonable people would realize that the businesses and professions that only accept full payment far outnumber those who take payments.  Some stores may allow layaway, though that is not common, and even if this is the case the person must pay in full before they pick up their product. If m_michaels17 had wanted a discussion, then they would have been able to break down my viewpoint as I did theirs.  But they weren't interested in doing so.  They didn't even want to try and convince me of the error of my ways. 
All they were concerned about was telling me how horrible and evil I am personally, and then lumping my entire profession into this basket.  They are a perfect example of what is so wrong about communication on the internet.

Where is civility nowadays?  On the internet it's certainly hard to come by.  But it doesn't have to be.

Be sure to look for my next post, where I get into this issue with m_michaels17 in more detail.

Wednesday, April 27, 2016

When Helping People Gets Unethical

I came across an interesting article in one of my veterinary magazines, DVM 360.  You can read the full article here, but let me quickly summarize it for you.

A relief vet was working at a long-established clinic.  He discovered that the owner would do surgeries at no cost to needy clients, but was using recently expired drugs to do so.  The relief vet (Dr. Han) refused to do that and talked to the practice owner (Dr. Keets) about the issue.  Here are some quotes from that article.

Dr. Han was diplomatic—after all, he wanted to maintain a good working relationship with Dr. Keets. He said that he understood that Dr. Keets was well-intentioned but that substandard care of indigent patients was unacceptable. Dr. Keets replied that the care was not substandard. All his patients were monitored during and after surgery. If any animals showed signs of pain or inadequate anesthesia this was addressed immediately. He went on to say that offering charitable services required realistic monetary considerations. If he could not use recently outdated medications, he could not afford to offer these much-needed services. He went on to say that Dr. Han traveled from practice to practice assisting veterinarians and pets on a short-term basis. He on the other hand had a responsibility to a clientele that day-in and day-out needed services they could not afford. As a result, he had to be creative in order to assist them. A bit frustrated, Dr. Han finally said that Dr.
Keets’ practices were a violation of practice statutes. Dr. Keets’ reply? “I’ve never had a complaint, and I have scores of grateful pets and pet owners.” This is a difficult situation.  Dr. Keets was doing what he thought was best for the community and was sincerely trying to help people out.  To be honest, drugs don't suddenly go bad or become dangerous at midnight on the day of expiration.  And most expired drugs would lose efficacy rather than become toxic.  However, those expiration dates are there for a reason, and they need to be followed. I agree with Dr. Keets where he said "offering charitable services required realistic monetary considerations."  That's very true.  Veterinary practices can't routinely give away services for free and still expect to stay in business while maintaining high medical standards.  If he didn't use those medications, "he could not afford to offer these much-needed services".  Again, that's true.  The only way he could afford to give away these surgeries was to use drugs that were no longer valid.  If he used drugs that were still within their expiration date he would have lost money and not been able to provide these services.  I've written many times about how it isn't realistic to expect veterinarians to give away services, especially surgeries, and not have their business suffer.  So bravo to Dr. Keets for recognizing this. However, was what he did ethical?  No.  And it wasn't even legal.  His reply of "I've never had a complaint, and I have scores of grateful pets and pet owners" is not a good defense.  It is an attempt at justifying an unethical behavior.  Just because someone doesn't complain about a behavior doesn't make that behavior right.  Here is an analysis from the author of the article. It is absolutely true that the use of expired medications is a violation of the veterinary practice act in every state. Dr. 
Keets was aware of this but chose to help those in need and also manage any complications that may have arisen from the use of the expired medications. There is no doubt Dr. Keets was well-intentioned. But he could have solved his medication issues in other ways. Advising vendors of his charitable efforts and asking them to participate would have been an option, as well as soliciting his more affluent clients and enlisting them in an effort to help his good works. Rules and laws exist to prevent abuse and protect our patients. Dr. Keets gets an “A” for effort but does not pass the profession’s ethical standards test.

Should a veterinarian violate ethical standards and state laws in order to help people?  Personally I don't think so.  While I'm absolutely not a "big government" sort of person, I also believe in trying to uphold the letter and spirit of the law.  It is wrong to break the law because it seems convenient or helpful to do so.  It also puts that doctor on very shaky ground with his license and business.

Let's imagine for a moment that something went wrong when he was using these expired drugs.  The pet had severe complications and died, in part because the drugs were not effective and they were not able to administer proper medications in time.  The client learns that expired drugs were used and brings a lawsuit against Dr. Keets, as well as reporting him to the state board.  Because he knowingly used these drugs against state law he would have no chance of winning the lawsuit and would be facing big fines from the state veterinary board, and possibly even be in danger of losing his license to practice.

To me it is not worth risking my ability to work and support my clients and family.  While I admire Dr. Keets' desire to help people, he is going about it the wrong way.  Hopefully this gives pet owners some insight into the challenges of trying to help those who have few funds and are in need.
I'm not saying that vets should never make the attempt, but they need to make sure they are following legal and ethical standards.  If they have to break the law or violate ethics in order to help people, they shouldn't do so.

Sunday, April 24, 2016

"But He's Still Hungry"

Every day I have discussions with my clients about how much food to feed their pet.  Something that often confuses people is that they are following the amount recommended on the package, but their dog or cat still wants more.  "They're still hungry," the client says.  And because we don't want our pets to be hungry, people tend to give them more food.

Stop and think about how many times you, the pet owner, eat when you're not hungry.  When you go to the movies are you really so hungry that you need that popcorn and candy?  When you're sitting at home watching Netflix and munching on that bag of chips, is it really because you have missed a meal?  When you've finished that meal at the restaurant are you looking at the dessert menu because you feel hunger pangs?

People and animals eat for two reasons.  First, they feel the sensation of hunger.  Second, they like the taste.  I'm sure that every person reading this blog has eaten something when they weren't truly hungry but just had a craving for that particular food or snack.  And I'm sure every person has continued to eat until far past the point of hunger sensations having stopped.  Did you ever reach the end of a meal and have been so full that you think back and realize that you shouldn't have eaten so much?  But you didn't think about that until after you were finished and your pants were too tight.  I've been in every one of these situations myself, so I speak from experience.

Animals are often the same way.  Yes, they eat because they are hungry.  But when that hunger has passed they will also eat because they like the taste of the food.
Since they are continuing to seek out food, that can make the owners mistakenly think that they are actually hungry, when in fact they may feel quite full.  There is also an instinct that is retained in many pets where they will gorge themselves, since they don't have a good long-term memory to know that they will get fed again tomorrow.

You cannot determine how much food to give a dog or cat based on whether or not they will continue to eat it.  If you try this method, you will end up with obese, unhealthy pets.

Instead, use the package directions as a starting point, and then consult with your vet on what your pet's body condition score is.  A highly active pet may need more than what is on the package.  Dogs and cats who are basically couch potatoes may need considerably less than that amount.  When I see a pet I look at their proportions and how much body fat they have on them.  If they are normal then I will let the owner know that they are feeding the right amount, even if the pet still acts "hungry".  If they are overweight I'll tell them to switch to a lower calorie food and decrease the amount fed, even if they are following the package and the pet wants to eat more.  Most people don't realize how many overweight or obese pets we veterinarians see who always seem "hungry".  If they weren't getting enough food, they wouldn't be overweight!

Pay attention to your pet and actually measure out the amount of food you are giving.  It's going to be much better for your pet's health.

Thursday, April 21, 2016

Critical Thinking About Pet Protector

Recently a reader asked my opinion about Pet Protector, a product designed for protecting against fleas and ticks.  I had never heard of it, so I looked into it a bit.  From what I can see there seems to be a lot of rather bogus science behind it.  I went to the company's website to try and learn about the product.
The product is a metal disc that is worn on a dog's or cat's collar, and which gives protection against fleas and ticks for four years.  All without chemicals.  Sounds pretty amazing, right?  Here are some quotes from the company on how it works.

Officially tested and proven. The Pet Protector Disc is made of high quality steel alloys. It is charged with a specific combination of Magnetic and Scalar waves, which after being triggered by the animal’s movement (blood circulation), produce an invisible energy field around the entire animal’s body. Pet Protector’s Scalar waves are totally harmless to people and animals (they go absolutely undetected by humans and animals alike) and they are only effective against external parasites, repelling them from the shielded area. Therefore, the Pet Protector Disc acts preventatively; it drives fleas, ticks and mosquitoes away before they get the chance to infest your pet, versus all other anti-parasite products, which kill external parasites after they have already infested your pet.

Now that sounds like a pretty high-tech product, doesn't it?  And not having to use chemicals is so much better!

But let's not take this on face value or even just look at the testimonials on the website (which are always hand-picked for the best ones).  Let's spend some time looking into this effect and the claims.  And above all, let's use actual critical thinking (as we always should).

First, what the heck are "Scalar waves"?  I did a quick Google search and learned a few things.  These kinds of waves have been researched since around the time of Nikola Tesla, and nowadays are firmly in the camp of pseudoscience.  When you find people supporting the idea of scalar waves you find them talking about conspiracy theories, ultimate healing, super weapons, weather control, and similar crackpot ideas.  Here are some choice quotes from some forums and websites.

In physics, a quantity described as "scalar" only contains information about its magnitude.
In contrast, a "vector" quantity contains information both about its magnitude and about its direction. By this definition, a "scalar wave" in physics would be defined as any solution to a "scalar wave equation". In reality, this definition is far too general to be useful, and as a result the term "scalar wave" is used exclusively by cranks and peddlers of woo. The main current proponent of scalar wave pseudophysics is zero-point energy advocate Thomas E. Bearden, who has concocted an entire pseudoscientific "scalar field theory" unrelated to anything in actual physics of that name.  Bearden was pushing the medical effects of scalar waves as early as 1991. He specifically attributed their powers to cure AIDS, cancer and genetic diseases to their quantum effects and their use in "engineering the Schrödinger equation." They are also useful in mind control. What is a “scalar wave” exactly? Scalar wave (hereafter SW) is just another name for a “longitudinal” wave. The term “scalar” is sometimes used instead because the hypothetical source of these waves is thought to be a “scalar field” of some kind similar to the Higgs Field for example. I try to imagine what physics would be like without mathematics. I think it would be like this "scalar wave" business. A lot of guys coming up with ideas and swapping lies 'cause math is hard. A scalar is just a number. A wave is a repetitive variation in that number. For example the altitude of each point in Wisconsin forms a scalar wave. Or sound waves, all you can hear is the intensity of the superimposed tones; the intensity is just a number (yeah, maybe a complex number) and it varies repetitively (i.e the cycles of the tones).  You've asked about Bearden before and the answer is the same: while Greer is a second order crackpot, Bearden may well be certifiably insane - he is, at the very least, a liar and a fraud.  Tom Bearden is a notorious crackpot. Has been for years. References available upon request. 
I kinda hate to go through this exercise again, but, if you are really interested in facts, I don't mind. He is a fraud, charlatan and temple priest of bad science. I hope I am not sugar coating this too much.

It seems that most reputable physicists don't believe in the various scalar wave applications that are touted by the fringes of science and medicine.  So to me this is one of the biggest strikes against Pet Protector, as it is the primary reason why it is supposed to work.

But for a moment let's assume that scalar waves really do exist in the way that they're stated.  Would this product work, and is it backed up by studies?

Let's first look at one of the primary statements made by Pet Protector:  Pet Protector’s Scalar waves are totally harmless to people and animals (they go absolutely undetected by humans and animals alike) and they are only effective against external parasites, repelling them from the shielded area.

Does that make scientific sense?  No, not really.  I can find no information on the website on exactly why it affects parasites but not the host.  With typical topical chemicals a product works by affecting neurotransmitters found in insects and arachnids that are not found in mammals.  They are considered safe for most pets because they affect things that the hosts don't have.  I can't find anything about scalar waves that would cause them to be unnoticed by dogs and cats but not fleas or ticks.

Here is more from the website:

1. The Pet Protector Disc does not have the ability to eliminate existing parasites or their larvae
2. The Pet Protector Disc can only repel new parasites from inhabiting your pet
3. The Pet Protector Disc needs 7 to 20 days (depending on the pet’s size) to create a strong enough Scalar Wave field around your pet's whole body, protecting it from fleas and ticks successfully.

This is what I find interesting.  The premise behind the disc is that it actually and literally creates an invisible force-field around your pet.
Stop and say that out loud.  It sounds rather odd, doesn't it?  Somehow the disc creates an invisible bubble that doesn't actually touch the pet.  If it did, it would repel the parasites that already exist on the pet.  How does the disc do that?  Electromagnetic waves are supposed to emanate in a straight line from the origin source, and should spread out in all directions.  Magnets and gravity can change the direction of these waves, but you have to have pretty powerful equipment to make a noticeable difference.  Somehow a disc that looks like an ID tag has the power and ability to not pass through the pet but instead make a sphere around it.  Do you realize how strange that sounds?  And there is nothing on the website that gives details on how this might actually happen, or links to the science behind it.  You basically just have to trust the company that what they say is true. Okay, so now let's assume that a product like this actually works and there are ones on the market who perform exactly as expected.  Does Pet Protector show evidence of actually repelling parasites?  For this we can go to the "Official Product Testing" part of the website. The study was conducted over 4 years in the US, Argentina, Spain, and Australia.  The dogs and cats were selected randomly and were in homes with owners.  There were 22 pets selected in each geographical location, for a total of 88 over the study.  The animals were determined to be "100% free of any external parasites", had the disc attached to their collar, and were isolated for 15 days to give the disc time to fully activate.  On the 16th day they were released back to their normal environment and the owners were told not to do anything different.  The pets were examined weekly for four years, with only an occasional tick found during that entire time. All of that sounds good, and if you look at the study document you'll see "Official" stamped in the corner of every page.  It certainly sounds convincing and scientific. 
 But this is far from being a true study of efficacy.  There are numerous unanswered questions, and this so-called study would be laughed at by any peer-reviewed scientific journal. • How were the pets determined to be parasite-free?  What methods were used and what was the expertise level of those doing the exams? • What were the baseline parasite levels in the various locations?  I don't know about the non-US locations, but in America the study was performed in California, which has one of the lower rates of fleas and mosquitoes in the country.  If they wanted to do a real study they should have come to the southeastern states.  Here in Georgia I never have a month go by where I don't see pets with fleas, even in the dead of winter.   • Did the lifestyles of the pets allow them access to parasites?  A cat that is strictly indoors is never going to have a tick, so making a claim of "see, our product prevented ticks" is rather pointless. Dogs that are hunting or camping are going to have a higher risk of fleas and ticks than a toy breed that only goes outside a few minutes per day to use the potty.  A pet owner who is doing routine treatment of the yard against insects is going to have a lower risk of fleas and ticks than one who isn't. • Did any of the pets chosen have a history of fleas or ticks being seen?  Even here in Georgia I have dog owners who aren't using any form of flea or tick control and yet we never see those parasites on their pets.  I routinely have clients who say "Oh, I've never seen any fleas so I don't need prevention", and despite my skepticism I can't find a single flea on the pet.  If one of these clients was using a Pet Protector the company would say "see, no parasites!"  Yet the pet never had them in the past, so why would they have them now? • Who was doing the weekly exams?  If it was the owners, I don't believe them.  
I've had many, many situations opposite to the one I just mentioned, where they insist there are no fleas at all, yet I glance at the pet and find a half dozen very easily.  Pet owners may not know how to examine the pet, may miss something, or may not easily recognize a parasite.

• Where are the controls?  Here is one of the biggest problems with the Pet Protector data.  There are no controls.  If we wanted to test true efficacy we would have dogs and cats of similar breed in the same environment who used just a metal tag rather than the Pet Protector disc, and the owners didn't know which was which.  Having this kind of "blind" study with controls removes bias from the people doing the routine exams.  You also have more validity in the data because if the control animals had fleas but the study ones didn't you could say that it was protective.  But if the control animals also didn't have any fleas then the lack of parasites had nothing to do with the product.  Pet Protector simply doesn't have this information.

Do you know how most flea and tick products are tested?  It is generally in a laboratory with research animals.  They are certified parasite-free by the researchers, who are usually specialists in parasitology.  A specific number of fleas or ticks are placed on the pet (usually 100), and the same number are placed on every animal.  Counts are regularly made to see how many of those parasites placed are remaining, as well as the numbers on the control animals (who get the same parasites but not the product).  In some studies a new set of parasites is placed on the pet periodically to determine the duration of efficacy.  Can you see how this method is much more precise and valid than the one used by Pet Protector?
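To make the point about controls concrete, here is a small hypothetical simulation (all numbers invented for illustration). In a low-parasite area, pets wearing a disc that does nothing and untreated control pets both come out nearly flea-free, so a study with no control arm cannot distinguish a working product from a useless one.

```python
import random

# Hypothetical illustration -- all numbers are invented.  Suppose each
# pet in a low-parasite area has a 0.5% chance per month of picking up
# fleas, whether or not it wears a disc that does nothing at all.
P_MONTHLY = 0.005
MONTHS = 12
N_PETS = 22  # same group size as the Pet Protector "study"

random.seed(1)

def ever_infested():
    """True if a pet shows fleas at least once during the study."""
    return any(random.random() < P_MONTHLY for _ in range(MONTHS))

treated = sum(ever_infested() for _ in range(N_PETS))   # wearing the disc
control = sum(ever_infested() for _ in range(N_PETS))   # no disc at all

# The expected fraction of pets ever infested is identical in both
# groups, because the disc changes nothing.
p_ever = 1 - (1 - P_MONTHLY) ** MONTHS   # roughly 6%
print(f"treated with fleas: {treated}/{N_PETS}, control: {control}/{N_PETS}")
print(f"expected rate in BOTH groups: {p_ever:.1%}")
```

Without the control column, the near-zero count in the "treated" group looks like efficacy; with it, the identical near-zero count in the controls shows the background flea rate was simply low to begin with.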
There are many statements made by the company and their "study", none of which have solid science behind them.  While this product is almost certainly harmless, I can't believe that it would have any real efficacy, and it would be a waste of the consumer's money.  I would not recommend buying it.

Monday, April 18, 2016

Parent, Guardian, Or Owner?

I recently read an article on the Veterinary Information Network questioning current terminology such as "pet parent" for those who have animals in their home.  It was something I hadn't given much thought about, but Dr. Chiara Switzer made some interesting points.  VIN is a subscription-only service so I can't link to the full article, but here are some quotes from it.

The terms “pet parent” and “fur baby” that are so in vogue these days bring the division to the fore. Some people love the terms, referring to themselves as the mom or dad of their pet and rejecting the concept of being owners or even caregivers of their beloved animals. Other people find the term offensive because of its implication that animals would have equal status to human beings, or the suggestion that they are unemotional if they don’t consider their dog or cat to be like their child. The division can intensify if one side tries to impose its philosophy on the other; for example, if people who consider their pets as children criticize as uncaring those who don’t treat their animals as family.

There’s also something that strikes me as rather manipulative about it — when someone tells me that I became a “pet parent” when I got my puppy, it seems to me as if they are trying to define the relationship they think I should have with my dog, rather than the relationship I want to have with my dog (let alone the relationship my dog will choose to have with me, which unfortunately doesn’t always match our plans).
I also wonder if those who call themselves “pet parents” are just using a trendy term, or whether they truly have the same relationship with their pet as they do (or did, or will) with their children. Or do they imagine that’s the relationship they would have had with their children, had they had any? I hope not — I think it does a disservice to animals to treat them like children, and it does a disservice to children to treat them like pets. Personally, I like the term “guardian.” It implies looking after something living and sentient, specifying my responsibility without specifying an emotional relationship. I do know that I’m not my pet's parent, even though I care for my pup and want to help her to grow up well, happy and safe. My relationship with my pet might change as we each age and grow, but she’ll never be my fur baby and I’ll never be her mom. I'm old-fashioned enough that I still refer to my clients as "owners".  This is the term that has seen the most use over my lifetime and what I've become accustomed to.  I think that most of my clients are used to that term and don't think about it otherwise.  The term stems from the fact that in the US animals are considered a special form of property, just like if a couch, TV, or car were alive.  For better or worse most laws are based around this issue of pets as property, hence the tendency to say "owners". But does that really properly classify or define the relationship?  Probably not.  A century ago people looked at dogs and cats more like they did livestock, though there has been a long tradition of keeping them as pets rather than as working animals.  Nowadays people have much different relationships with their pets, letting them sleep in their beds, buying clothing for them, taking them to "day care" and play dates, and otherwise treating them like a special kind of child or part of the family.  I'll admit that I do that in my own home to some degree.  
There are really no limits on where our pets sleep, and we snuggle with them every day.  However, I don't think I'd consider myself a "parent" as I absolutely look at them differently than I do my own children.  As much as I love my pets, I would choose my children over them without hesitation if the need called for it.

I don't know that I personally like the term "guardian", as my relationship with them is more than just that of a caretaker.  I have a truly emotional relationship with my dogs and cats, and being simply a guardian seems to take that out of the equation.  While Dr. Switzer likes the term because it doesn't define any kind of emotions, I think that those emotions play an important part of having a pet.

But "owner" seems somewhat cold and unemotional as well.  I'm used to the term and will likely continue to use it, but having a pet is more than merely owning them.  I feel a much closer bond to my pets than I do to my laptop, yet I "own" both.  As I've been writing this I realize that to me none of these terms really properly defines what most people, myself included, feel about their pets.  While I've had some clients that really do treat their pets like children, I have others that tell me "it's just a dog".  I don't think that one term properly encompasses everyone who has a pet, and I don't think the ones we've been using are adequate to describe what is happening.

I'm curious as to what you think.  For the first time in years I'm putting up a poll, and will leave it up for the next month.  I'm interested in how my readers define themselves.  I would also love to see comments on this topic, as it's one that hits right in the heart.

Friday, April 15, 2016

Canine Influenza Moves To Cats

By now most dog owners in the US know about the risks of canine influenza, especially people in the Chicago and Atlanta areas.  The most recent strain of "dog flu", H3N2, has proven to be a highly contagious disease.
Thankfully it is rare for a dog to die from it, though some can become quite sick.  But now it may not be just dogs that are at risk.

Here's a copy of a recent article about some cats who contracted H3N2 in Indiana.  It's short, so I'm going to copy it here, but the original article is at this link.

Sandra Newbury, a clinical assistant professor and director of the Shelter Medicine Program at the University of Wisconsin School of Veterinary Medicine, tested multiple cats at an animal shelter in northwest Indiana, according to the release. The cats tested positive for the H3N2 canine influenza virus. "Suspicions of an outbreak in the cats were initially raised when a group of them displayed unusual signs of respiratory disease," Newbury said in the release. "While this first confirmed report of multiple cats testing positive for canine influenza in the U.S. shows the virus can affect cats, we hope that infections and illness in felines will continue to be quite rare."

Right now that's pretty much all we have.  From what I can see this is rare right now.  However it shows that cross-species transmission is possible, and that worries me.  Last year the US veterinary community was taken by surprise when H3N2 influenza made such a rapid spread across the country.  I live and practice near Atlanta and I absolutely didn't expect what happened here.  So when we say that cats catching this virus is rare, I take that with a grain of salt.  H3N2 was unheard of in the US until a little over a year ago, and it made quite the impact.  Could that happen in cats?

I know that I'll be watching this development with interest, and will be watching out for any signs like this in cats in my area.  I don't want to be blindsided like we were last year.

Tuesday, April 12, 2016

Pollen Season

Here in the southeastern US we see a LOT of pollen during early Spring.  Those who haven't lived in this area don't really appreciate just how much pollen we get.  It covers cars.
It runs in the gutters.  It fills the air.

To give you an idea of what we see every year, here are some pictures of my car.  And I've seen it worse than this!

Even the house isn't safe.  Here's a picture of my front porch.  I moved the welcome mat so you can see the accumulation of pollen.  We have footprints in the pollen on the porch!

This is one of the big reasons why this part of the country has so many allergic pets.  I've had several clients who moved from the western or northeastern parts of the US and never had any allergy issues with their pets until they moved here.  A big part of my business this time of the year is dealing with the suddenly itchy dogs and cases of allergic dermatitis.  Considering that I actually can't stand dermatology, this isn't a fun part of my job.  But as long as I work and practice here I can't avoid it.

Saturday, April 9, 2016

Why Superman Is Super.....Poignancy In Comic Books

It is a great time to be a comic book fan.  Movies and TV shows with our favorite characters are breaking records and gaining ratings.  But it still goes back to the printed comics, the artists and writers.  Some writers are better than others, and we die-hard fans can talk about the stories of various people like David, Claremont, McFarlane, Miller, and Johns (if you're a true fan you'll immediately recognize those names).  It seems that some writers "get" a character better than others.

For example, let's look at Superman.  He is arguably the single most recognized superhero on the planet.  While he's certainly not my favorite character, I admire some things about him when he is written correctly.  Many people think that the power of Superman comes from his invulnerability, strength, vision, speed, or any of his other powers.  But that's not the point of his character.  Though he is an orphaned alien and one of the most powerful beings in comics, his true power comes from his humble upbringing in Smallville, Kansas.
It's the humanity and morality instilled in him by the Kents that make him such an amazing character.  This background makes him a true hero, and gives him the drive to keep control over his powers.

I want to share some panels from a Superman comic that are really powerful and illustrate this point clearly.  This is the kind of hero we need, not the gritty, dark tone set in recent movies.  (click here for a larger version)

Wednesday, April 6, 2016

The Truly Important Things In Life

Last month I blogged about some thoughts regarding how I wanted to be remembered at the end of my life.  At my current age and stage of my career, I am realizing that a pursuit of advancement as a vet isn't really that important to me.  It only matters as it relates to how I can support my family and our preferred lifestyle.  My life is really defined by my wife, children, friends, and how I relate to them.

One of my favorite comic strips of all time is Calvin & Hobbes.  Recently I came across this wonderful strip by the creator, Bill Watterson.  It resonated with me as it quite accurately sums up my recent feelings.  I hope you enjoy it as much as I have.  (if you have a hard time reading it below, here is a link to the post where I found it)

Sunday, April 3, 2016

Kidney Failure And Cancer. Treat or Not?

I recently got this email from Amanda.....

I would greatly appreciate it if you would be willing to give me your opinion on my situation. Now the funds are not the main reason, I will find the money if needed but my husband says our beloved Sammy is his best friend and wouldn't want him to suffer for one more day than necessary. My 10 year old cat has a huge (orange size) mass in his stomach about where the left kidney would be.  I have only had blood tests done and he is in renal failure.  Doctors want to take x-rays and do an ultrasound to see if the mass can be operated on or to see if he has cancer and if it has spread to his lungs or other organs.
Then we would see if he has cancer or if the mass could even be operated on because of how large it is.  Mind you my cat is a medium size 12lb cat with a  orange size tumor in his stomach, which cannot be comfortable for him. I noticed a decline in his play around 6 months ago but I contributed that to us just moving to a new place.  He is thirsty all the time and practically lives in the bathroom now cause he always wants water from the faucet.  Every time we go to the bathroom he's in there asking for us to turn on the water.  He plays less and less with me and doesn't like to be touched at all around his stomach area. So I'm at a crossroads.  Do I do the x-rays and ultrasound to see if its cancer or to see if it can be operated on?  But either way my husband doesn't want to put our cat through any surgery.  The mass is so large I couldn't even imagine how much they would have to cut him open just to remove it.  This is horrible to think about putting him through this just so he could live a few more years for my own selfish reasons...  Or am I'm being stupid and not going through with it cause he could have a few more years? Any advise would be greatly appreciated.  I don't believe he is suffering too bad yet but I do believe that he is not his regular self, he is always drinking water, he is always peeing, he has some good days and some bad ones.  But if I could stop his suffering I will, I just don't know when the best time would be.  My heart is breaking just the thought of having to put him down or even having to have him operated on and him dying on the operating table. There is never an easy decision in situations like this.  Even if you know it's the best thing for your pet you still feel bad about ending their life (no matter how peaceful it may be).  I'll help as best as I can, but ultimately this is a decision between you and your own vet. The first thing that concerns me about your story is the fact that he is in renal failure.  
An animal has to lose 2/3 of its kidney function before you will see any abnormalities and 3/4 of kidney function before the pet acts sick.  The kidneys don't regenerate, which is why there is so much redundancy.  Once part of the kidney is damaged, it stays that way.  We can do great with only 50% of our kidneys, which is why we can donate one and not need any treatment.  But we can't function with much less than that.

If a cat is showing signs of renal failure this essentially means that one kidney is completely non-functional and the other one is only half working.  If you remove the non-functioning kidney you still have the problems with the remaining one.  If only one kidney was damaged due to cancer and the other one was normal, you actually wouldn't see blood abnormalities, and you could remove the bad kidney without problems.

So just from the lab tests alone this sounds like a bad situation that wouldn't respond well to therapy and would carry a high surgical risk.

If you wanted to pursue diagnostics you could probably start with just either chest x-rays or an ultrasound, not necessarily both.  If tumors show up on chest radiographs the cancer has spread and surgery isn't going to help.  If an abdominal ultrasound shows both kidneys affected (which I would suspect) or tumors throughout the abdomen, surgery isn't going to be a solution.  Sometimes it's worthwhile doing additional diagnostic tests for the peace of mind of absolutely knowing that there aren't any other options.

Based on what you've shared, Amanda, I would not hold out hope that surgery would help.  However, definitely talk to your own vet about that, as I don't know other details of the case.  This is a situation where I wouldn't rely only on my advice since I'm not close to the case.  Your vet may know some aspects that I don't and that may change the decision.
If surgery isn't an option due to cost, inability to help, or simply not wanting to do it, then you have to look at overall quality of life.  I've often believed that it's better to euthanize one day too early rather than one day too late.  If you know what the eventual outcome will be in a terminal case, I don't like waiting until the pet is actually starting to suffer.  If I know that they will be suffering at some point, I want to help them pass on before the suffering starts.  But that's not a clear-cut point in many cases, and it's often a very subjective decision.

This past week I saw a cat who was being treated for hyperthyroidism.  We had her on medication for several months, even increasing the dosage beyond the typically recommended amount, but we still couldn't get her thyroid levels down to normal.  She wasn't herself, was continuing to slowly lose weight, and overall was unregulated.  The only other option was to refer her to a specialist for radioactive iodine therapy.  While this is a great treatment it is very expensive and therefore out of reach for many cat owners.  These clients simply couldn't afford this option, so we were left in a tough situation.  Medical therapy wasn't working, the cat was slowly worsening, and they couldn't go to the next step.  As hard as it was for them, they decided that the best thing for their beloved cat was to euthanize her.

Amanda, I don't know if any of this helped, but it may give you some different thoughts.  Go over this with your own vet and look at what is going to be best for your family and pet.

Friday, April 1, 2016

A New Adventure...Nuevo Atlantis

I have an exciting announcement to make.  It's one that I've kept quiet for a long time until everything was finalized and set in stone.  But it's a pretty life-changing one and I'm glad to be able to share it with the world here (my family has already been informed).

I'm going to be a colonist.  An undersea colonist.
A few years ago an amazing project was announced, Nuevo Atlantis.  This is a new, international expedition into settling the sea floor in order to help with land-based overpopulation and to make farming and resource mining easier.  The ocean is a biome rich in minerals and biological resources, ripe for colonization.  And yes, it's actually become technologically possible.

Articles have been written on this issue over the last few years, and it's pretty exciting (here's one from the BBC, and one from Discover Magazine).  Heck, this has been a dream in science fiction for a long, long time!  Now the time has come to make this dream a reality.

Nuevo Atlantis is being spearheaded by Dr. Gabe Farraige, a very smart man with PhDs in both engineering and oceanography.  He has gathered a team of 20 experts, including biologists, astrophysicists (there are a lot of similarities between survival in space and underwater), psychologists, and others.  I had a chance to meet him last year and he blew me away with his charisma and intelligence.  I couldn't help but want to join the project.

Why me?  Why a veterinarian?

Personally I've always been fascinated by ocean animals and their modifications for living in that environment.  That's strange because I actually don't like the beach and would rather be in the mountains.  But animals such as sharks, whales, octopuses, and even starfish have always interested me.  I briefly toyed around with an education in oceanology, but decided against it.  I've wondered if I made a mistake, though I'm generally happy with my career choice.  Now is a chance to rectify that.

A veterinarian actually makes a lot of sense!  We are trained scientists and have special knowledge of animal biology.  Did you know that there have actually been veterinarian astronauts?  Both Dr. Richard Linnehan and Dr. Alex Dunlap have been part of NASA (Dr. Dunlap is the Chief Veterinarian) and have been on shuttle missions.
Besides needing a scientist on an underwater expedition, Dr. Farraige is bringing families into the colony to make it as livable as possible.  And that means family pets!  Somebody is going to need to take care of those pets, and that somebody is me.

And above all, I am a huge fan of the superhero Aquaman, so how could I pass this up?

Nuevo Atlantis is currently being constructed off the western coast of northern Africa.  A recent topographic study shows some of the early traces of the project.

Here are some concept pictures, showing what the colony will eventually look like.

Here is an early photo of workers laying the framework.

Part of the colony has already been built.  Here are some pictures of those already starting to live there.

Doesn't that look incredibly cool???  Can you see why I'm excited to be moving???  My wife and children are also pretty wound up, though my son is nervous about having hundreds of feet of water pushing down on us.  I think he'll adjust.

Nuevo Atlantis is projected to be completed in late 2017.  By then there will be schools, businesses, living quarters, restaurants, and everything needed for a sustainable life underwater.  I will be moving in early 2017, likely March.  Before that I will be going through more education and training, learning the risks and techniques of living in a hostile environment.

Nuevo Atlantis is still accepting applications, though the remaining spaces are very limited.  If you want to apply, or simply to learn about this amazing project, click on this link.

As time gets closer I'll give more updates.  And once we move I will keep up this blog, as I'm sure everyone would want.
Quaternion and Pauli matrix

1. Sep 2, 2005 #1

I am learning quaternions now for my EM course. Can someone enlighten me on the correspondence between quaternion and Pauli matrix algebra?

2. Sep 2, 2005 #2

Not so easy to explain: the metric tensor of Minkowski space <=> introduction of the quaternions; a proposition from Dirac to discuss the Schrödinger equation => introduction of (4x4) matrices built from the (2x2) Pauli matrices. Let us call m(a), for a = 0, 1, 2, 3, the different (4x4) matrices; the discussion shows that the following relation must hold:

m(a) m(b) + m(b) m(a) = 2 g(ab)

where g(ab) is the metric tensor for a Minkowski space. So: not a really good explanation (sorry), but a short exposé of the connections between the actors.

3. Sep 2, 2005 #3

From this site: http://home.pcisys.net/~bestwork.1/HamiltonQ/hamilton.htm [Broken]

Last edited by a moderator: May 2, 2017

4. Sep 3, 2005 #4

Still yet to figure it out, but the web link looks pretty informative. Thanks. Will see if I can make some sense out of it.
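For the original question, the correspondence can be stated concretely: the quaternion units i, j, k can be represented by the 2x2 matrices -iσ1, -iσ2, -iσ3 built from the Pauli matrices. The following sketch checks Hamilton's relations numerically under that identification (this is one common sign and ordering convention; other textbooks permute or flip signs):

```python
import numpy as np

# Pauli matrices in the standard convention
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Candidate quaternion units: 1 -> identity, i -> -i*s1, j -> -i*s2, k -> -i*s3
one = np.eye(2, dtype=complex)
I, J, K = -1j * s1, -1j * s2, -1j * s3

# Hamilton's relations: i^2 = j^2 = k^2 = ijk = -1
assert np.allclose(I @ I, -one)
assert np.allclose(J @ J, -one)
assert np.allclose(K @ K, -one)
assert np.allclose(I @ J @ K, -one)

# ...and the cyclic products ij = k, jk = i, ki = j
assert np.allclose(I @ J, K)
assert np.allclose(J @ K, I)
assert np.allclose(K @ I, J)

# Pauli anticommutation, the Euclidean analogue of the
# m(a)m(b) + m(b)m(a) = 2 g(ab) relation quoted in post #2:
for a, sa in enumerate((s1, s2, s3)):
    for b, sb in enumerate((s1, s2, s3)):
        assert np.allclose(sa @ sb + sb @ sa, 2 * (a == b) * one)

print("quaternion relations verified")
```

So the quaternions form a subalgebra of the algebra generated by the Pauli matrices, which is one precise way to phrase the correspondence asked about in post #1.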
Machine learning may improve molecular design for nanotechnology

At various points along the path toward productive nanosystems for molecular manufacturing it would be useful to be able to calculate the properties and reactions of assemblies of atoms of various sizes. Within the domain of non-relativistic quantum mechanics, such information is supplied by the Schrödinger equation, but this can only be solved analytically for the hydrogen atom and ions with only one electron. For larger atoms and molecules, numerical solutions require compromises between computational feasibility and accuracy. Recent work from researchers at Argonne National Laboratory suggests that machine learning can be an efficient alternative to numerical computations. A hat tip for pointing to this New Scientist article by Lisa Grossman, “Molecules from scratch without the fiendish physics”:

A SUITE of artificial intelligence algorithms may become the ultimate chemistry set. Software can now quickly predict a property of molecules from their theoretical structure. Similar advances should allow chemists to design new molecules on computers instead of by lengthy trial-and-error.

Our physical understanding of the macroscopic world is so good that everything from bridges to aircraft can be designed and tested on a computer. There’s no need to make every possible design to figure out which ones work. Microscopic molecules are a different story. “Basically, we are still doing chemistry like Thomas Edison,” says Anatole von Lilienfeld of Argonne National Laboratory in Lemont, Illinois.

The chief enemy of computer-aided chemical design is the Schrödinger equation. In theory, this mathematical beast can be solved to give the probability that electrons in an atom or molecule will be in certain positions, giving rise to chemical and physical properties.
The researchers developed a machine learning model to calculate the atomization energy (the energy of all the bonds holding a molecule together) and applied it to a database of 7165 small organic molecules of known structure and atomization energy, each containing up to seven atoms of carbon, nitrogen, oxygen, or sulfur, plus the number of hydrogen atoms necessary to saturate the bonds. These molecules had atomization energies ranging from 800 to 2000 kcal/mol. The model was trained on a subset of 1000 compounds and then used to calculate the energies of the remaining molecules in the database. The results showed a mean error of only 9.9 kcal/mol, comparable to the accuracy of methods based upon the Schrödinger equation, but the computations were done in milliseconds rather than hours. The authors suggest that extensions of their approach might permit rational molecule design or molecular dynamics calculations of systems of atoms undergoing chemical reactions.

The research was published in Physical Review Letters [abstract]. A free full text preprint is available.

—James Lewis
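For readers curious what "trained on a subset and used to calculate the energies of the rest" means operationally: the paper's approach pairs a molecular descriptor (a "Coulomb matrix" built from nuclear charges and positions) with kernel ridge regression. Below is a minimal sketch of only the kernel-ridge step, on synthetic data; the descriptors, target function, kernel width, and regularization here are placeholder assumptions for illustration, not the authors' actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 5-dimensional descriptor vectors and a smooth
# scalar "energy" target. (The real work uses Coulomb-matrix descriptors
# of 7165 molecules and their atomization energies.)
X = rng.uniform(-1, 1, size=(200, 5))
y = np.sin(X.sum(axis=1)) + 0.01 * rng.normal(size=200)

def gaussian_kernel(A, B, sigma=1.0):
    # K[i, j] = exp(-||A_i - B_j||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Kernel ridge regression: solve (K + lam*I) alpha = y on a training split,
# then prediction for a new point is a kernel-weighted sum over training points.
train, test = slice(0, 150), slice(150, 200)
lam = 1e-3                                    # regularization strength (assumed)
K = gaussian_kernel(X[train], X[train])
alpha = np.linalg.solve(K + lam * np.eye(150), y[train])
y_pred = gaussian_kernel(X[test], X[train]) @ alpha

mae = np.abs(y_pred - y[test]).mean()
print(f"mean absolute error on held-out set: {mae:.4f}")
```

The appeal mirrors the result quoted above: once the weights `alpha` are fitted, predicting a new "energy" is a single matrix-vector product, i.e., milliseconds instead of an hours-long quantum chemistry calculation.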
Consider standard quantum mechanics, but forget about the collapse of the wavefunction. Instead, use decoherence through interaction with the environment to bring the evolving quantum state into an eigenstate (respectively, arbitrarily close to one).

Question: Can this theory be fundamentally deterministic?

If one takes into account that the variables of the environment are not known, then the evolution is of course 'undetermined' in a probabilistic sense, but that isn't the question. The question is whether quantum mechanics with environmentally induced decoherence can fundamentally be deterministic. Note that I'm not saying it has to be. I might be mistaken, but it seems to me decoherence could still be followed by an actual non-deterministic process, so decoherence alone doesn't settle the question of determinism or non-determinism. The question is whether one still needs a non-deterministic ingredient.

Update: Please note that I asked whether the evolution can fundamentally be deterministic, or whether it has to be non-deterministic. It is clear to me that for all practical purposes it will appear non-deterministic. Note also that my question does not refer to the prepared state after tracing out the environmental degrees of freedom, but to the full evolution of system and environment. Does one need a non-deterministic ingredient to reproduce quantum mechanics, or can it, with the help of decoherence, be only apparently non-deterministic yet fundamentally deterministic?

share|improve this question

What definition of determinism are you using? –  user1708 Jan 21 '11 at 14:45

If you know the state at time t_1, you can in principle calculate everything that's going to happen, or did happen, at time t_2. –  WIMP Jan 23 '11 at 10:32

You need to rephrase your question if both myself and Matt have misunderstood your intention. Are you now asking if you can deterministically collapse to a particular eigenstate? The answer to that is no, because it violates linearity.
–  Joe Fitzsimons Jan 23 '11 at 10:56

@ Joe: I just updated the question, hope it's clearer now? You don't need to collapse exactly to a particular eigenstate, just arbitrarily close by (as I already stated in my original question). –  WIMP Jan 23 '11 at 11:15

I've posted an updated answer to answer the question as I now understand it. –  Joe Fitzsimons Jan 23 '11 at 12:22

5 Answers

Short answer to "Can this theory be fundamentally deterministic?": No.

Decoherence is the diagonalization of the density matrix in a preferred basis, with the off-diagonals vanishing at late times. Since you can get the same final diagonal matrix from several possible initial pure states of the system under consideration, there's a necessary loss of information and irreversibility. (I'm guessing this is what you meant by non-deterministic)

A bit more detail: Decoherence proceeds by the rapid establishment of entanglement-induced correlations between the system and the infinite degrees of freedom of the decohering environment. The second law prevents this process from being reversible (since S has to always increase, and S is zero for the pure state, while it is greater than zero for the decohered mixed state). If you take the second law to be fundamental, then the non-determinism here is fundamental too.

UPDATE: The updated question now refers to the full evolution of system + environment, in other words, the entire universe. Since there's nothing else for the universe to entangle with, it will remain in a pure state and evolve deterministically for ever if it always was in a pure state. I however don't know if the universe is in a pure state or a mixed state. Anyone?

share|improve this answer

I would answer the same thing, so plus one point. ;-) –  Luboš Motl Jan 21 '11 at 19:29

Another way of saying this is the process is not unitary. –  Lawrence B. Crowell Jan 22 '11 at 3:34

This isn't technically correct.
Decoherence can be the result of an entirely deterministic (and unitary) process, since you only care about the reduced density matrix for the system in question. Open quantum systems do decohere, even though the entire process is unitary. The larger wavefunction of the entire system is still pure, but the state of the local system becomes mixed. –  Joe Fitzsimons Jan 22 '11 at 6:26

My understanding is that the evolution turns non-unitary once you've traced out the environmental degrees of freedom. That's not what I'm asking for. Also, time evolution doesn't need to be unitary to be deterministic. –  WIMP Jan 23 '11 at 10:34

I concur with the criticisms of this answer: you are making the mistake of only considering the subsystem in question, not the total system. The subsystem loses information, yes, but it's lost to the total system, which may well be reversible as a whole. –  Greg Graviton Jan 23 '11 at 14:19

I don't disagree with the other answers, but I want to try to use different words: Evolution of a quantum state is deterministic in the sense that it is given by the Hamiltonian. Quantum mechanics doesn't need anything beyond unitary evolution. So in that sense, the answer is deterministic.

However, decoherence means that eventually a quantum state may evolve into a superposition of very nearly orthogonal states, which for large enough systems will resemble to arbitrarily high precision the answer you might get from assuming that there is a nondeterministic, nonunitary process of "wavefunction collapse." For all practical purposes, since we are large classical observers, we can observe only one such nearly-orthogonal combination, and the interference with other possible outcomes will be unmeasurably small. So, in this sense the outcome is "nondeterministic."

If this seems counterintuitive to you, consider something analogous about classical statistical mechanics and thermodynamics.
If I start with a collection of gas molecules all bunched up in one corner of the room, it is a very atypical (low-entropy) state under any natural coarse-graining of phase space. Now, by entirely reversible interactions, it can become a typical (high-entropy) state with molecules scattered all over the room. This process appears to have lost information, in the sense that I would have to do many, many difficult measurements to ascertain that a short time before this was a very special state indeed. But really, the underlying physics is deterministic, so in principle the final state remembers where it came from, although for all practical purposes if I tried to evolve it backwards I would never discover the right answer. (To be clear, I'm not claiming a very sharp analogy here. But I'm saying that the notion that microscopically deterministic evolution can be consistent with apparent or "for all practical purposes" loss of determinism in real observations is something that might be more intuitive in this context.) share|improve this answer Good answer! –  Joe Fitzsimons Jan 22 '11 at 6:28 It doesn't seem counterintuitive to me, but that wasn't my question. I'm not asking "for all practical purposes." I know that for all practical purposes it will appear non-deterministic in the sense that we won't be able to predict what's going to happen -- too many variables in the environment. I'm asking if the time evolution of the system is fundamentally 'in principle' deterministic. That of course can only be before you've averaged over the variables of the environment. –  WIMP Jan 23 '11 at 10:41 Then I would say yes, it is "in principle" deterministic; the Hamiltonian specifies how everything evolves. Most of my answer was just trying to explain how to reconcile this with the observation that quantum effects look nondeterministic. –  Matt Reece Jan 23 '11 at 15:41 Thanks. Yes, that was also my understanding. 
–  WIMP Jan 24 '11 at 7:40

+1 for thermodynamics mention, I think some day decoherence and the 2nd law will shake hands –  HDE Jan 19 '12 at 0:16

UPDATE: It seems that we have not been answering the question WIMP intended. Here is an updated answer to deal with what I now understand to be the question: Given any unknown quantum state $|\psi\rangle$, can there be any deterministic process which will make it collapse onto a particular state $|\phi \rangle$, if $\langle \phi|\psi\rangle \neq 0$?

The answer to this question is no, because it violates the linearity of quantum mechanics, allowing us to distinguish between non-orthogonal states. This is trivial, because states orthogonal to $|\phi \rangle$ will have zero probability of collapsing onto it. This may not seem like a big deal, but it turns out that linearity is fundamental to quantum mechanics on many levels. If we remove this constraint, then entanglement can be used to signal, and hence create problems with causality. No signalling seems to be one of the most fundamental features of physics, showing up in many independent theories (electromagnetism, quantum mechanics, relativity, etc.).

To see how this can be done, consider an entangled state $\frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$. This is the anti-symmetric state: for any basis $\sigma$ a measurement resulting in outcome $m$ will leave the other qubit in the opposite eigenstate of $\sigma$. Thus, if you could deterministically collapse onto the state $|0\rangle$ then you can be sure that your half of the EPR pair was not left in state $|1\rangle$ after the measurement on the other half. So, for Alice to communicate with Bob, she need only choose to measure in the $X$ or $Z$ basis. Measuring in $X$ will mean Bob receives the output $|0\rangle$ with probability 1, whereas measuring in $Z$ will return result $|1\rangle$ with probability $\frac{1}{2}$.
Although this is probabilistic, you can repeat the process arbitrarily many times to get exponentially close to perfect communication. This instantaneous communication breaks causality. If you allow all states to collapse to the target state, then the only solution is a channel which swaps the state with another ancilla system. Systems which can perform such deterministic collapse can always be used to signal, as well as allowing all sorts of additional weirdness like efficient solutions to PSPACE-complete problems in computation and time travel. As a result, this is totally impossible within the current framework of physical theories, and there are very substantial reasons to believe that it is a feature of any physical theory that is valid in our world.

The answer is no, if by deterministic you mean possessing a local hidden variable interpretation. This follows directly from the observed violations of Bell's inequality, whichever interpretation of quantum mechanics you choose (what you are referring to is known as the Everett interpretation of quantum mechanics).

Bell's inequality works as follows: Given two possible local measurement operators ($A_i$ and $B_i$) at each of two locations $i \in \{1,2\}$, what is the maximum value of the expectation value of $\langle A_1 B_1 + A_1 B_2 + A_2 B_1 - A_2 B_2\rangle$? What Bell showed was that this can take on a value of at most 2 for any local hidden variables theory. However quantum mechanics allows it to take on values up to $2\sqrt{2}$, and many experiments have recorded violations of this inequality, showing values in the range $2 < v \leq 2\sqrt{2}$. This essentially rules out a local hidden variable model.

If, however, you mean can the unitary interaction of two particles give rise to decoherence, then the answer is yes, as follows: Imagine two particles, each initially in the state $1/\sqrt{2}(|0\rangle + |1\rangle)$. Now imagine they interact via an Ising interaction.
After a certain time, the two particles will be in the joint state $\frac{1}{2}(|00\rangle + |01\rangle + |10\rangle - |11\rangle)$. This is still a pure state, and so no decoherence has occurred. However, imagine one of these particles moves off far away (into the environment). If we only have access to the remaining particle, then its reduced density matrix will be $\frac{1}{2}(|0\rangle \langle 0|+|1\rangle \langle 1|)$, which is simply a classical random distribution over the two orthogonal states, the same as would occur due to a collapse of the wavefunction.
share|improve this answer
Thanks for the reply. Even though you've correctly interpreted the meaning of deterministic, you haven't answered my question. You're saying the theory can't be deterministic because that would be in conflict with experimental tests of Bell's inequality. That's not correct. The theory could also be non-local instead; you just need to violate one of the assumptions used to prove the theorem. –  WIMP Jan 23 '11 at 10:38
@WIMP: The first line of my answer reads: "if by deterministic you mean possessing a local hidden variable interpretation". Certainly global hidden variables can be made to work, but then they always can. –  Joe Fitzsimons Jan 23 '11 at 10:42
Yes, I know. But that non-local hidden variable theories are not excluded by Bell's theorem isn't an answer to my question. If you want to pursue that line of thought, the question would then be whether decoherence, resp. the entanglement with the environment, is a sort of non-locality that spoils the assumptions of Bell's inequality and thus isn't excluded by experiment. Also, I was wondering about the possibility of deterministic evolution from a theoretical rather than an experimental point of view. –  WIMP Jan 23 '11 at 10:53
@WIMP: Can you please revise the question to make it clear exactly what you are asking? It's currently not clear either from the question or subsequent comments. –  Joe Fitzsimons Jan 23 '11 at 10:59
@ Joe: Thanks for the update.
Two things: First, you don't need to collapse exactly into |0>, you just need to get close enough that it would 'for all practical purposes' appear to be an eigenstate, see question & update. Second, that the evolution is fundamentally deterministic doesn't mean that you could in practice deterministically collapse. (Besides this, I didn't ask if it leads to instantaneous messaging, as you seem to think. Also, instantaneous messaging doesn't necessarily cause problems with causality, but that's a different point.) –  WIMP Jan 24 '11 at 9:13

Not sure if you've ever heard of the de Broglie-Bohm pilot wave interpretation of QM, but it is a fundamentally deterministic interpretation.
share|improve this answer
Yes, I've heard of it. But my question wasn't whether there is any deterministic interpretation of QM, but whether the standard interpretation with decoherence instead of collapse can be deterministic. –  WIMP Jan 24 '11 at 7:45

My answer is NO. There is a problem in many QM considerations: we have an isolated system obeying the (deterministic!) Schrödinger equation that is then subjected to mystical "measurements" which introduce non-deterministic behaviour. But one can't make a measurement and leave the system isolated (indeed, saying that a system is isolated is always an approximation in QM, and a much heavier one than in classical mechanics). In fact a measurement is an act of introducing interactions with the measuring apparatus, the measurer, the coffee the measurer drinks, ... -- so the measurement result can in theory be calculated, but this would involve enormous amounts of inaccessible information. This makes it practically non-deterministic, but fundamentally it is no better than classical chaos.
share|improve this answer
Actually, you are arguing that the answer is yes, rather than no. I haven't asked whether it is practically non-deterministic, but whether it can be fundamentally deterministic while appearing non-deterministic (much like chaos, indeed).
–  WIMP Jan 24 '11 at 7:48
My answer to "Does decoherence need non-determinism?" is No (-; –  mbq Jan 24 '11 at 10:19