MadSci Network: Chemistry
Re: Why do electron shells have set limits?
Date: Wed Mar 17 16:18:14 1999
Area of science: Chemistry
ID: 921203878.Ch
Why do electron shells have set limits?
As we know, electron shells or energy levels can only contain a certain number of electrons, i.e. 2, 8, 18, 32, 32, 18, 8, which match nicely with twice the first four quadratic numbers. We know what the individual shell configuration for each atom is. We use this information to explain the bonding between atoms and why gases are inert, and it all fits very nicely into groups in the periodic table. What I would like to know is: why does each particular shell have a set number of electrons that matches the first four quadratic numbers? Doesn't this hint that there is a mathematical law that denotes why this is so, and therefore describes the nature of all matter?
It is true that electron shells can hold 2, 8, 18, 32, 50, ... electrons. While 2, 8, 18, 32, 50, ... are each twice a quadratic number (1, 4, 9, 16, 25, ..., n²), this is an accident, as we will see.
The more complex mathematics behind what I am going to tell you can be found in textbooks (such as McQuarrie's Quantum Chemistry), or you might check out my modest collection of computational chemistry links. Some of the sites have extensive lectures available on molecular and atomic quantum mechanics.
The quantum mechanical description of many-electron atoms is based on the Schrödinger equation for the hydrogen atom, which is exactly soluble.[*] The solution tells us that an electron in an atom is fully described by four quantum numbers, labeled n, l, m and s. These quantum numbers have particular allowed values: n = 1, 2, 3, ...; l = 0, 1, ..., n−1; m = −l, ..., 0, ..., +l; and s = ±1/2. The quantum numbers l and m can be understood in terms of subshells. Each shell (defined by n) contains n types of subshell, one for each allowed value of l; the number of orbitals in each subshell is governed by the number of possible values of m, namely 2l+1. You should get the picture.
Thus, we see that
The first shell contains one orbital.
The second shell contains four (1+3) orbitals.
The third shell contains nine (1+3+5) orbitals.
The fourth shell contains sixteen (1+3+5+7) orbitals.
The nth shell contains n² (1+3+5+7+...+(2n−1)) orbitals.
This is a quadratic sequence, but "by accident." There is no underlying mathematics which forces the sequence to be quadratic; it is merely an accident of the properties of the first three quantum numbers associated with an electron in the hydrogen atom.
All we have left is the Pauli Exclusion Principle, which says that no two electrons in the same atom can share all four quantum numbers; it is often stated as "an atomic orbital can hold a maximum of two electrons."
It is the maximum occupancy of two electrons per orbital which leads to the nice quadratic sequence: each shell 1, 2, 3, 4, ..., n can hold 2, 8, 18, 32, ..., 2n² electrons.
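A quick way to see this counting argument in action is to enumerate the allowed quantum numbers directly. The following small sketch (Python, written for this answer rather than taken from any textbook) reproduces the orbital and electron counts above:

import itertools  # not strictly needed; plain loops suffice

# Enumerate the allowed quantum numbers described above:
# for shell n, l runs 0..n-1 and m runs -l..+l, with 2 spin states each.
for n in range(1, 6):
    orbitals = sum(2 * l + 1 for l in range(n))  # one orbital per (l, m) pair
    electrons = 2 * orbitals                     # two spin states per orbital (Pauli)
    print(f"shell n={n}: {orbitals} orbitals, {electrons} electrons")
# Output: 1/2, 4/8, 9/18, 16/32, 25/50, i.e. n^2 orbitals and 2n^2 electrons.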
The obvious next question is, "Why don't the orbitals fill in a logical manner, one shell at a time, instead of skipping around?" But that's another question and another answer!
Dan Berger
Bluffton College
[*] Schrödinger equations for many-electron atoms involve more than two interacting bodies (the nucleus counts as one, but the electrons must be considered individually). The general many-body problem has not been solved, and so we must use approximate methods, based on the solution for the hydrogen atom, to describe all other atoms.
How does it work?
The Quantum Garden is an interactive art installation that demonstrates key concepts of quantum physics, such as quantum superposition, interference, and the Schrödinger equation.
Touching the springs generates, in the numerical simulation, a particle in a quantum superposition, which then moves through the Garden like a wave. Try to touch where you think the particle will be!
The Quantum Garden simulation starts when you touch the piece; from the second touch onward, the lights reveal the position of the walker.
This installation simulates a continuous time quantum walk on a quantum network. The initial quantum superposition evolves according to the Schrödinger equation, with brightness representing the probabilities and illustrating quantum interference. Observing the particle’s position causes the wave function to collapse onto one spring according to these probabilities. This is a key model for quantum computation and quantum biology.
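For readers curious about the mechanics, here is a minimal sketch of a continuous-time quantum walk of this kind (Python; the 5-node ring graph and the evolution time are hypothetical choices for illustration, not the installation's actual network):

import numpy as np
from scipy.linalg import expm

N = 5
A = np.zeros((N, N))                    # adjacency matrix of a small ring network
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0

psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                            # a "touch" places the walker on node 0

t = 1.5                                 # arbitrary evolution time
psi_t = expm(-1j * A * t) @ psi         # Schrödinger evolution: psi(t) = exp(-iAt) psi(0)
prob = np.abs(psi_t) ** 2               # brightness ~ probability, showing interference

node = np.random.choice(N, p=prob / prob.sum())  # observation collapses the walk
print(prob, "-> walker observed at node", node)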
SEMITIP V6, program UniInt3
This program computes the electrostatic potential and the resulting tunnel current between a metallic tip and a uniform (homogeneous) semiconducting sample, for a fully 3-dimensional geometry. The tunnel current is computed by integrating the Schrödinger equation along the central axis of the problem (i.e. as appropriate for a planar geometry, but an approximation for a nonplanar geometry).
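As a rough illustration only (this is not SEMITIP's actual algorithm, and all numbers below are made up), the following Python sketch shows the kind of one-dimensional barrier integration that the planar approximation implies, here using a simple WKB estimate of the transmission probability:

import numpy as np

# Hypothetical 1-D tunneling barrier along the central axis z,
# evaluated with the WKB approximation T ~ exp(-2 * integral of kappa dz).
hbar = 1.0545718e-34   # J*s
m_e = 9.109e-31        # electron mass, kg
eV = 1.602e-19         # J

z = np.linspace(0.0, 1e-9, 1000)          # 1 nm tip-sample gap (assumed)
V = 4.5 * eV * np.ones_like(z)            # ~4.5 eV barrier height (assumed)
E = 0.5 * eV                              # electron energy below the barrier

kappa = np.sqrt(2.0 * m_e * np.clip(V - E, 0.0, None)) / hbar
integral = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(z))  # trapezoid rule
T = np.exp(-2.0 * integral)               # transmission probability
print(f"WKB transmission ~ {T:.3e}")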
Version information
Version 6.3; see top of UniInt3-6.3.f source code for prior version information.
A compiled version of the code, which should run on any Windows PC, is available in the file UniInt3.exe. Input for the executable code comes from the file FORT.9. Download these two files, saving them as "UniInt3.exe" and "fort.9". Then run the code just by double-clicking on it. Using a text editor, the input parameters in FORT.9 can be changed to whatever values are desired. In addition to the parameter values, this file also contains comments on the meaning of each parameter. See the SEMITIP V6 Technical Manual for additional comments on the meaning of the parameters.
Output from the program is contained in the following files (output depends on the value of the output parameter IWRIT as specified in the input file FORT.9):
All of the parameters in the program can be varied using the input file FORT.9, with the exception of the array sizes, the specification of a surface state density other than a uniform or Gaussian shaped one, and the specification of spatial arrangement of bulk or surface charge density. See SEMITIP V6 Technical Manual for additional information on these user-defined functions. Modification of those functions can be accomplished by changing the source code of the program. The source code is available, in the following files (version numbers follow the dash in the names):
All routines are written in Fortran. The source code can be downloaded directly from the above locations, and it can be compiled and linked on any platform. Sample input and output from the program are shown in the examples below.
Illustrative Examples of Running the Code
1. undoped GaAs(110), with no surface states.
The Truth About "Homeopathic" Medicine (#23)
[Audio version]
The Truth About "Homeopathic" Medicine (#23)
[Text version]
There are a few potential explanations…
For more material like this article, check out:
The 4-Hour Body
Gout: The Missing Chapter and Explanation
774 Replies to “The Truth About "Homeopathic" Medicine (#23)”
1. Go ahead. Tell my cat her skin disease cleared up and her fur grew back because of the placebo effect. Then take away her remedies. Oh and bring some bandaids for your face. Kitty has something for you. 😀
2. The explanation need not exclude options; it is very possible that options 2, 3, and 4 are simultaneously involved. I’ve noticed a tendency to name or identify phenomena for comfort rather than deeper understanding, such as when we say “it is the placebo effect.” Naming it does not tell us much more than we knew before the naming; only the sense of mystery is reduced from having named it. I think the rush to eliminate mystery often discourages exploration. Quite often when we declare something ridiculous, mundane, or irrelevant we stop looking for answers. Haven’t we seen past instances when “how the world works” turned out to be a good approximation but totally missed huge collections of phenomena that were dismissed as unrelated or irrelevant?
3. I would answer your question with another question. Homeopathy has been practiced by name for over 200 years and otherwise for much longer. All without scientific evidence.
Tim, you have studied and speak Chinese (Mandarin at least). What are your feelings about eastern medicine, specifically the chakra and meridian systems? Science cannot explain these either, but I work with acupressure and meridian tapping daily, with results beyond anything that drugs have ever produced for me.
You say the homeopathic remedy was working. Either study it some more (a lot more) or embrace it.
andrew foss
4. Tim, Nothing in this post sheds any light on the “Truth” of Homeopathy. I suggest you expend a little – perhaps a lot – more effort researching this topic. Sorry, but there is no 4 hour path to understanding what sits behind the apparent veil of this discipline and other forms of Energy Medicine. Glad to see you are nevertheless reaping the benefits!
5. Hi Tim, before I migrated to Bolivia in 1993, I worked for 3 years with French homeopathic products for farm animals. They were all homeopathic complexes (5 to 10 ingredients, mostly C30) and did the following. Cows: cure and prevent mastitis, strengthen cows’ livers, … result: a much better price for the milk because of fewer leukocytes and better quality. Pigs and cattle for meat: better growth through more efficient metabolism… and less interference from parasites. I think it works on the energetic level.
6. I was going to tell you that in addition to personal positive results with homeopathic remedies myself from a well educated homeopath, I first used them for my Siberian Husky through a vet clinic, with immediate success for a serious ailment.
Why would most of the royal households in Europe use homeopathy when they could afford any treatment?
I think your premise will hold true: some things that are not understood for periods of time are later explained through science. Quantum physics seems unbelievable but is not.
7. Many of today's medicines are based on natural remedies. It is proven that there are even foods, such as turmeric, blackberries, coffee, and kale, among many others, that are beneficial to health.
The catch is what you buy in a bottle vs. what is prepared for you on the spot.
For example, in China and Hong Kong there are apothecaries on every corner that sell homeopathic remedies, which you can watch while they are prepared. You know what you are being served; when you buy commercially prepared homeopathic remedies you don't know 100% what you are buying. It is not a formula regulated by any agency.
Dr. Rosita Arvigo, in Belize, wrote a book about her experience with the homeopathic remedies of the Maya culture, guided by one of the last Maya shamans. The book is called Sastun, and it recounts the way a typical doctor in Maya homeopathy is trained. Very interesting.
Homeopathy is an ancient art, but, like food, it can be commercialized to a level at which it loses its nutritive value or natural integrity... For now: let your food be your medicine...
8. Dear Tim, you have recycled this content from The 4-Hour Body, and I could tell when I read the book that you really struggle with this. Lynne McTaggart, a London-based science journalist, wrote a book called “The Field” which has a chapter about homeopathy I found extremely compelling, and I suspect you will too. It is not the placebo effect. It will shift the way you think about molecules, and will bring an important element into your examination of all things health and wellness. I dare you.
Best Regards from your Cornell friend from the Berkshire Hathaway meeting,
Lisa Wolf
9. We buy many products from a homeopathic range at the pharmacy. We mainly use drops for nausea and vertigo, as well as a prophylactic for colds and influenza. When you feel you are getting sick, you take two tabs with each meal, and I promise you you are good again the next day. The drops for nausea also work; I’m sure you all know car sickness, or nausea due to many other factors, is the worst feeling in the world and does not go away just from drinking water etc. It works within 5 to 10 mins. We have given these remedies to many people who were very sceptical, but it worked.
10. Hi Tim,
I first came across homeopathy about 25 years ago when I used to suffer horrendous migraines. My business partner at the time recommended I try his homeopath, so I did. After 1 visit I never had a recurrence again; I have had a couple of minor ones in recent years, but nothing like the curl-up-and-want-to-die versions I used to get 2-3 times a year.
Since then we have used it to cure my wife of some health issues, prevent our son from getting grommets put in his ears when he was about 3 (he hasn’t had an earache since seeing the homeopath, yet prior to this point they were so bad that a conventional doctor felt the grommet operation was necessary), and cure our daughter of molluscum contagiosum (or whatever it is called).
I don’t care that conventional research can’t prove its efficacy, I only care that it works. I don’t really care if it was only a placebo effect; the thing that matters in the end is the result, not the process…
It seems modern medicine is more about boosting profits than improving health care anyway, so give me a $100 consultation and a few sugar pills over expensive and often intrusive conventional solutions any day…..
11. I’m a chiropractor and have used homeopathy in many forms, on and off, for over 25 years. I have seen it work well in some instances and not so much in others. Like you, Tim, I want to know how it works and remain curiously skeptical, but use it anyway.
I think there may be some as-yet-unexplained mechanism, more along the lines of energy medicine. I don’t discount placebo or the natural healing of things, but I have seen this work in resistant cases. As a doctor, I just want to do good and help people, so my intention may even play a role.
Love the 4 Hour series.
Great podcast.
12. I am a big fan. You’re awesome.
I am a biochemist, and have a Doctorate of Pharmacy from USC.
When I spent the time to learn the real theory behind homeopathic medicine, it made perfect sense. It was so simple, and so explained by basic chemistry, it was poetic.
It’s all about Gibbs free energy, and the idea that chaos, or entropy, costs energy. Thus, the more organized a structure, the more favorable its energy state.
The second part is the Schrödinger equation of quantum mechanics, which states that everything is both mass and wave-like at the same time.
Which means that just because you cannot see it doesn’t mean it doesn’t have mass properties. Or more importantly, that wave and mass are equal and thus interconvertible. When you have waves, you have some form or indirect form of mass.
Thus, when you “succuss” the remedy with the original herb, you are “energizing the herb” via the force your muscles put in. This external force effectively undergoes what is comparable to X-ray crystallographic diffraction (how they determine atomic structures): the energy goes from your hand, into the container, into the water, into the herb, which has mass- and wave-like properties, and when the energy comes back out, reflecting elastic collision, the wave is “set to a pattern”, that of the herb.
This diffracted wave, with external energy, is then able to interact with the water, and organizes it based on the “frequency” set by the herb, via the succussion.
Organized water molecules hold the energy form of the mass component of the herb, or whatever you want to make the remedy from.
Then, in conclusion, this is the pathophysiology. The basic idea is that of epigenetics, or cell-environment interaction via protein expression.
When the homeopathic remedy enters the human body, it alters the “frequency” in a way that alters the body. It can alter the environment such that the cell will respond.
The principle of homeopathy, by definition in Latin, is “like treats like”. Create the symptom you wish to treat, and the body will become active in the way it needs to, to fix the problem we ourselves didn’t cause.
Got watery eyes? Take something that causes that, and your body will now know the eyes are watering, and will mount its own response to equalize.
Lastly, if you aren’t familiar with the class of herbs called “adaptogens”, they function in a somewhat similar way. An adaptogenic herb like licorice can lower blood pressure when the patient has high BP, and can also raise the blood pressure in another patient who has low blood pressure.
Homeopathy is different, but similar in that a little can go a long way, and that the body already has its own cure; we just need to remind the body.
13. There is no way for a baby to benefit from the placebo effect, and all three of my children responded positively to the Boiron formulation for teething. Additionally, and far more impressively, was the effect arnica and calendula gels had on my youngest child’s thumbs. “M” has cerebral palsy. When he was still a baby he would sometimes get his thumb in his mouth (very difficult for him to do) and this would set off a tonic bite (his jaw would clamp down for as much as 30 seconds, HARD). By the time his jaw could relax again, and I could remove his thumb, it would be purple, mashed down with deep teeth impressions, and would then swell instantly. If there was broken skin (blessedly rare) I would apply a generous amount of calendula, then surround his thumb in arnica. NO ICE ever touched his thumb, just an almost-instant application of one or both gels. By the one-hour mark it was usually safe to wipe away any excess gels and get a look at the damage. Other than some swelling it would be recovered. Every. Single. Time. If you hadn’t been witness to the anguished screaming and frightful sight after evacuation from his mouth you’d never believe the injury occurred at all, never mind a mere hour before. I have more stories, but I always start people off (in homeopathy) with arnica and calendula gels, though if I can talk them into arnica pills, too, that’s even better (take them for a couple of days before your next surgery, then during recovery, and see how differently you feel in the first 48 hours). I even heard arnica being referred to (though not as a homeopathic, I concede) as a remedy for a badly beaten character in a John Wayne movie.
14. Do I get a free trip around the world for solving Tim’s mystery?
I can’t fully explain it, but Kevin Trudeau can (when he gets out of jail). He screwed up on the weight loss book, but all else is spot on. Study the “Law of Attraction”. Everything in the universe vibrates, including the molecules in a piece of steel, bricks, and of course water. That remedy has a different vibration than plain water; that’s about it. There was no placebo when I walked out of a homeopathic office, where I went after 4 years of anger and depression. I had zero expectations; I thought maybe something would happen if I came back 20 times or so. Less than 15 minutes into walking to Subway I was feeling better. I didn’t know about vibrations then either. Kevin opened my eyes to so many things; I verified it all over a number of years from many sources and fixed a few minor health problems over and over again. Modern medical science says there is no cure for the common cold; I have knocked it out at least 6 or 7 times in 2.5 years. Shouldn’t I get a Nobel prize or a million dollars or something? I am going to write a book on this when I retire. Thank you, Kevin.
15. My wife is in her fifth year of battling breast cancer (it recurred at stage 4 in Nov. 2011). In my search to find science-based nutritional answers to help manage her illness (and just maybe put it into late-stage remission), one website that has helped is NutritionFacts.org. Its founder, Dr. Michael Greger, scours the over six thousand nutritional studies done around the world each year and condenses his findings into short (usually 2 to 4 minute) videos which he distributes for free. (The site exists through donations.)
Here’s a link to a very short (43 second) video he produced on a meta analysis done looking at several studies of the effectiveness of homeopathy. Given that his conclusions are based on a meta analysis and not a single study I trust his opinion on this. Dr. Greger is a vegan but I’ve found him to be even handed when the conclusions of various studies go against the vegan-vegetarian grain.
16. I’m betting that it’s between explanations #2 and #3, with #3 triggering #2.
I have lots of thoughts about modern homeopathy. I don’t want to say that people are being ripped off, because even though it’s been proven that those homeopathic remedies are just sugar pellets, those who believe in it will find value and healing. Keywords being “those who believe” as studies have documented that people heal faster when they believe that they are going to heal. That being said, people do spend lots and lots of money not necessarily on homeopathic remedies (though they do) but in the form of both time and money going to school, classes and setting up practices to be homeopathic practitioners and prescribing placebo remedies. But of course there is value in having someone trained in medicine (even homeopathic) to sit and discuss your ailments with – as talking through pain and illness is extremely healing and homeopaths tend to spend more time with patients than Western MDs.
Being both a Mom and a San Francisco resident, I do find it upsetting when people (and I’ve had this experience several times) say they will be opting for homeopath alternatives to vaccinating their children. As, of course vaccinations are the original homeopathic theory. That’s probably my biggest gripe with homeopathic remedies.
As for the positive testimonials I see sprinkled throughout these responses: if X thousand people take any supplement, a certain portion (Y) are going to have positive results for whatever reason. Who you often don’t hear from are the portion (X - Y) who had negative or no results. Supplement companies and producers have made a good living off the testimonials of the positive responders for centuries. But science is based on replicable results seen in double-blind studies (and especially meta-analysis studies based on the results of several studies). In such studies the supplement must beat the results of the placebo group or it is worthless.
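To make the statistical point in the comment above concrete, here is a small illustrative sketch (Python, with invented counts) of how such a blinded comparison is scored: the remedy only counts as effective if its group beats the placebo group by more than chance.

from scipy.stats import fisher_exact

# Hypothetical trial outcomes: improved / not improved per group.
improved_remedy, n_remedy = 62, 100
improved_placebo, n_placebo = 58, 100

table = [[improved_remedy, n_remedy - improved_remedy],
         [improved_placebo, n_placebo - improved_placebo]]
odds_ratio, p = fisher_exact(table, alternative="greater")
print(f"p-value = {p:.3f}")  # a large p: no evidence of benefit beyond placebo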
18. I am not a researcher or scientist, but I would like to share some info that I have come across that might shed some light on why homeopathic remedies might work as medicine or a placebo. It might be more about the water. Technically water is not a crystal, but ice is. So, for argument’s sake, water may share some properties that crystals have. One of those properties is that crystals can hold INFORMATION. If you don’t believe me, you are using one right now: computer chips use silicon crystals to store and transfer information. So for homeopathic medicine it could be that the information from the initial medicinal molecules that were put in the water is now stored in the water, even though the medicinal molecule is no longer present.
Tim, or anyone else, you should check out the documentaries “What the Bleep Do We Know” and “What the Bleep: Down the Rabbit Hole”; the latter is the extended version. In them they talk about the work of a Japanese man named Masaru Emoto, who had done some studies with water. He would expose water in a glass container to either positive or negative energies in the form of music, pictures, or words, and then take samples from the water and freeze them to examine the ice crystals. What he found was that the water exposed to positive music, pictures and words formed beautiful geometric designs. The negative music, pictures and words produced ice crystals that were distorted, asymmetrical and randomly formed. He also compared ice crystals from natural clean mountain streams to polluted bodies of water and found the same results. The documentary also talks about the mind-blowing topic of quantum physics and how thoughts have energy and can affect and change your internal AND external environment.
If this information is actually true, then what kind of information are we absorbing from drinking something like tap water? Tap water may be piped in from a natural source, but is it polluted? It’s pumped and pushed through man-made pipes to a treatment plant that filters it, uses all kinds of harsh chemicals to clean and strip the water of contaminants, and then chlorinates and fluoridates it and pushes it at high pressure through copper, or possibly plastic or lead, pipes to your faucet. I have come across this line of thinking from a guy named Daniel Vitalis.
He is a so-called health guru who has researched water. One of his recommendations is to drink spring water: not buying a plastic bottle of it, but going out to find a natural spring and collecting it in a big glass bottle. The reason is that the water from a deep aquifer is literally thousands of years old. It was on the earth’s surface when nature was truly organic, well before pollution. It has been filtered as it traveled down through the earth’s rocks and minerals, picking up INFORMATION from a pristine time on earth, and is now coming back up to the surface. It is wild water that would have positive, life-giving information within it.
One other anecdote. This is second-hand info, so the accuracy is not there. I was speaking with a registered massage therapist who had attended a conference. One of the speakers (I don’t know what the topic was) brought up a story of a scientist who had done some research on water back in, let’s say, the 70’s, maybe earlier. I don’t know exactly what he was testing, but fast forward to the late 90’s/2000’s: some researchers were trying to reproduce his experiments with water. They could not get the same outcomes as the data that had been produced by the original scientist. Somehow they were either told, or figured out for themselves, that the original experiments were done during a time before easy access to computers and laptops. So they removed all electronics from their lab and tried the experiments again. This time they were able to reproduce the data. The electromagnetic radiation from the computers they were using had affected how the water acted in these tests. So this brings up many questions. In our highly electronic wifi world, what is that doing to us humans, who are 75% water? How does that affect science, period? Can we trust science without a doubt?
Sorry for the long post. Are these people crazy or is there more to this world than what science and hard facts can explain?
19. Hey Tim,
I’m surprised after being a 1%’r yourself you haven’t looked into frequency healing.
Apparently, as I’ve understood it, it works by the fact that the “frequency” of the homeopathic remedy is the same frequency as the problem it is solving (poison ivy, for example); the understanding that “like” cures “like”.
Seeing that even the most elite individuals on the planet, and some of the oldest scriptures and true philosophers, talked about “frequency” and “vibration” being the root of our entire existence, I have always given homeopathic remedies respect in that regard.
Sorry if it’s not a “scientific answer”, but often the masses have dismissed solutions like homeopathic remedies as “pseudo-science” when, in fact, there are many things science has not been able to explain… like the placebo effect itself.
My mentality…if something is controversial, take the opposite stance of what the masses have been conditioned to believe…and generally you’ll be right.
Looking at the stranglehold the pharmaceutical industry has had on our “objective scientific research” when it comes to health… I’d say that anything that deviates from the norm is generally something worth looking into.
Cheers! I’ve been following you for some time.
20. As a pharmacist I think about homeopathy often. I often recommend certain homeopathic remedies, but am often reminded of one of my professor’s thoughts on homeopathy: don’t recommend it, it’s simply bad science. He would always go back to Avogadro’s number, and say that if you dilute it as much as the formula requires, then it is impossible to have any active substance in the homeopathic product, due to Avogadro’s constant. So why do I recommend some homeopathic products? Think about what general issues they are developed for; almost none of them are serious. I believe in the amazing power of the human mind and the placebo effect. It is your conscious mind telling your subconscious mind what it would like to be fixed; generally the body will heal itself, but when you select a product and believe it will work, it generally does. Just my thoughts.
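The arithmetic behind the pharmacist’s point is easy to check. A quick sketch (Python; starting from a single mole of active substance is an arbitrary assumption for illustration):

# Each "C" step is a 1:100 dilution, so a 30C remedy is diluted by 100**30 = 10**60.
N_A = 6.022e23                   # Avogadro's number: molecules in one mole
molecules_left = N_A / 100**30   # start from one mole of active substance
print(molecules_left)            # ~6e-37: effectively zero molecules remain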
21. Hi Tim,
I have a hiatus hernia, and the acid reflux is so painful that anyone suffering will try almost anything. I went the usual allopathy route, was popping 15-odd pills/capsules a day, and all I got was temporary relief. The day I skipped, the pain was back. Then I turned to homeopathy. It has been nearly a year that I have been ‘almost’ symptom free. There have been 2-3 minor episodes lasting 2-3 days each, but nothing like what I had gone through earlier. The intensity of the pain, when it comes, is the same (I think), but homeopathy helps me recover much, much faster. Also, there are days when I forget to take my medications, and skip them for days together, but get no symptoms. My money is on homeopathy. Sure, it cannot help an accident victim or a heart attack patient, but it can help with the rehabilitation process. If you want, I will give you the name of my doctor and her husband. They routinely visit the US. You could probably sit across the table and voice your concerns.
22. I use homeopathic medicine on my 17-month-old daughter before she gets too sick, and it seems to work. I think it’s because her little body isn’t polluted from taking previous medicines, which I believe can leave traces in the bloodstream, making the homeopathic medicine less effective. I also use it at the start of an illness, but I have no measure of its effect, as I don’t have a clone of myself with the same illness taking conventional drugs to compare to.
23. Yes and no… you’re right and you’re wrong… or… maybe we should just give up being right about anything! We do seem to want to know the truth, or the answer to everything. In reading Bruce Lipton or Lynne McTaggart, the fields of quantum mechanics and cell biology suggest that there are “things” going on that we may not understand for many years. Science is certainly not the answer, as long as scientists forget that they should always be reaching, searching for a different way to explain the unexplainable. We just have to think outside the box (which Tim is famous for!). Have a nice day.
24. Homeopathy helped my son very much, though I didn’t believe, so it was not a placebo effect. Tim, it wasn’t a placebo effect for you either. I believe that the mechanism of homeopathy is based on water memory; that’s why it is not necessary to have the active substance in healing concentrations in each dose. I’m not going to explain what it is (water’s memory as a world element, and its other properties), because there’s a lot of information on the internet to read about it.
25. When I was in my early 20’s we were selling at an arts & crafts show in Los Angeles. I had a boil on my wrist at the time and had suffered several (maybe 6 or 8) very large and painful ones over the years, starting when I was 16 or so. Anyway, there was another vendor at the show who noticed the bandage on my wrist and inquired as to my problem. I showed him the boil, which I had placed a piece of bacon over to try and “pull” the pus (yuck, sorry!). He said he practiced homeopathy and looked up the remedy for boils in this little black book. He had a case of some sort where he kept these little vials which supposedly contained whatever “cure” was listed for boils and other ailments. He gave me two or three drops of grain alcohol with the homeopathic remedy on my tongue and told me to inhale. I thought I was going to choke to death, but by the end of the day the boil drained completely, and I never had another one again. It’s been 30 years. If I remember right he said it was Silicea? Not sure what else, if anything, was with it. I have been a believer since that day.
26. This is a great piece Tim.
The biggest confusion that’s out there, which I have a hard time getting across to people, is the misuse of different terms, or perhaps the consumer lumping homeopathic, natural, herbal, etc. all together.
Even worse is the lay public mixing up a homeopath and a naturopath. Very different.
Homeopathy, and especially mass-dilution techniques, just cannot produce benefit beyond placebo. BUT… maybe that’s not a bad thing. Placebo is one of the most powerful treatments we have available!
27. I came off the contraceptive pill a year or so ago and after a few months developed an intense pain in my right ovary. After repeated trips to the doctor and scans that told me nothing, I decided to try an alternative route. I found an amazing alternative medicine expert (who looked a bit like Merlin the wizard) who gave me a homeopathic remedy that was tailored to my specific hormonal imbalance (identified through a Vega Test machine) and followed this up with several courses of acupuncture. Not only has the pain completely gone away, but subsequent treatments have also resolved an irregular heartbeat that I was experiencing and completely cured my severe allergy to horses! I am sold on the concept of an integrated natural approach to my health and have continued to go back for regular acupuncture since. I am also now riding horses every week 🙂
1. I’ve also seen acupuncture, on its own, cure heart arrhythmia, as well as successfully treat narcolepsy and early prostate problems. Powerful stuff!
28. Anyone familiar with the work and theories of Dr. Masaru Emoto? When water really does remember ‘structures’, could it be that homeopathic medicines work in a similar way?
29. Dr. Fritz-Albert Popp’s work on biophotons has shown that water holds a memory of a substance that has been shaken in the water. Biophoton work would be a plausible reason why homeopathy works. Photons carry information… there are lots of papers on PubMed on this… maybe it’s just a wait for the technology/science to prove it, and then science will be shown to have been very late to the game, as in the case of acupuncture. (Again.)
30. Hi, I am a German pharmacist. I totally, 100% agree with the placebo effect. Animals also fall for the placebo, especially because there is someone treating them and caring about the problem to be cured. Awareness that awakens in the body and boosts the body’s own immune system is very powerful. For example, take the treatment of warts with just alternating baths: the higher blood circulation in the treated area will boost the immune response.
But I still think it is a good kind of medicine! I myself call it a directed placebo effect. Many people, like kids or pregnant women, come into my pharmacy and seek help in situations where I don’t have anything “real” that is allowed for the patient. In these cases the homeopathic treatment is a great opportunity to at least do something.
Even if I don’t agree with the mechanism of the postulated healing principle, I tell people: if it works for you, use it! It is like an unusual basketball throwing style; if you make the points, it doesn’t matter how.
So my advice for you is to just stop thinking about it. Draw your personal conclusion and let everybody find their own. It is a question that will never be clearly answered.
Kind regards from Frankfurt, Germany
31. Hey Tim, interesting conversation here.
It might as well be a placebo, but if there is a drug that can trigger a similar placebo every time, that is awesome, isn’t it? I do not have a strong inclination for or against the argument, but I use homeopathics, and for whatever reason it works for me. Every single time.
I have two examples. One is of myself: I had a motorbike accident some 10 years back, and after a few days I developed vertigo. It was ‘extremely’ bad. I wasn’t able to sit or stand; I had to lie down in one posture for days, and none of the regular medicines helped. All sorts of medical tests (CT scans, MRIs and X-rays) were normal, and doctors weren’t able to detect any anomaly. I tried a homeopath and he tried three different drugs on me. Within 3 days it was gone. What worked was Conium 200. I have had a few recurrent instances after that, and after several experiments, going through the ordeal a couple of times, I found that this thing always works.
The second case is my dad’s: he developed a slipped disc, and it was recurrent. Every time, he was required to take a month off, be put in traction and wear a broad waist band. Someone suggested the following combination. He was healed without regular medicine, and it never recurred in the ten remaining years of his life.
Here’s his prescription:
Rhus tox 1M
Arnica 1M
Hypericum 1M
Ruta g. 1M
Symphytum 1M
Go figure!
32. Homeopathy works; don’t ask me why, but I have years of experience with it, personally and with my son. If you want to KNOW: the next time you throw your back out, get “Traumeel” tablets. After a car crash with whiplash etc., this remedy sorted me out after the hospital had failed to do so for many months. Best of luck and good health!
33. Very interesting. My mom used homeopathy prescribed by a homeopathic “doctor” for the treatment of a hyperactive thyroid… and it was the only thing that worked!!! I believe it was purely placebo, however. Unfortunately, when she stopped, even when she started taking real medicine, things eventually got worse and she had to have surgery. So I do recommend homeopathy; it’s harmless… but only if you believe in it and you’re not aware of the fact that it’s a placebo.
34. I have used classical homeopathy for 50 years for myself, my children, and my animals. Those who talk about the placebo effect fail to explain how it can calm a baby screaming with colic within 5 minutes when the child has no idea that it has been medicated, or how my animals respond even though it is in their food and they likewise don’t know. My present husband was really sceptical until an ear complaint he had had for years cleared up and I confessed that for a week I had been dosing him in his breakfast juice.
I refused an operation on ‘dangerous’ nodules on my thyroid, took the appropriate remedy, and 3 months later they had shrunk. Another 3 months down the line, they had vanished. It is not always a quick fix, depending on the complaint.
As we are all different, so our constitutional remedy varies. I am a typical ‘sulfur’ and, at the first sign of a cold or other complaint, this is what I take, and within 24 hours all the symptoms have vanished. I don’t care that people rubbish it; none of us have had antibiotics, apart from my son, who had to take them when he was in the Army or he would be put on a charge; he secretly took his remedy alongside them, though.
35. Perhaps take a look at the work of Dr. Emoto from Japan and countless others who have documented the structuring and programming of water. The work of Marcel Vogel is particularly striking, describing powerful real-world effects from charging water with quartz technology. Fascinating stuff 😉
36. Unfair to troll this many “woo” folk… and yes, I use that term dismissively. To date there is no peer-reviewed, credible evidence that homeopathy works at all. At best it is fraud; at worst it is reckless endangerment.
37. A well-balanced article. My only experience of homeopathy was when I contracted tonsillitis in Germany back in 1999. The doctor there (just a run-of-the-mill general practitioner) prescribed me a homeopathic remedy; clearly it’s more readily accepted by the German mainstream, or at least it was back then. The remedy I was given was Apis Belladonna. At the time, I too was unaware that it was a homeopathic remedy I’d been given; I was just sent on my way with “here’s some medicine, it will make you better”. I took the dose as specified by the physician, but my recovery was no quicker or slower than from my usual bouts of tonsillitis. It didn’t even have a placebo effect on me; my N=1 proves very little.
38. Your article reminded me of something I read in “The Field: The Quest for the Secret Force of the Universe” by Lynne McTaggart. There is a chapter about homeopathy where she wrote that water that was in contact with an active ingredient somehow copies and retains its properties at the molecular level. So I will go with there being an additional mysterious force working in 2 ways: on the water itself and its molecules, and then through your brain believing it and changing the molecules in your body. It’s all quite mysterious and complex; I recommend that book, though.
39. Homeopathy is used a lot over here in Germany.
I personally have not seen any results with it myself.
But what makes me write this very first comment is your approach to the topic.
You take 30C, although you don’t know anything about it.
And of course, you take more than you are advised to do.
Tim, with that mindset – you will NEVER find out what homeopathic stuff is about.
It’s like saying you do a prayer, but then you opened your eyes and God was not there… so you set up a camera to film him secretly…
If you believe in God or not – this mindset will not help you with that question.
And if you believe in homeopathic medicine or not – discussing molecules in a 30C and testing it the way you can test normal medicine is not the way to gain knowledge.
You talk like a guy that likes to touch things.
Who must see, smell, analyze.
That works with material things.
But using this approach with immaterial things… well, it just won’t help.
40. It’s incredible how people are willing to believe in anything, even when they are doctors!
All of those who believe in homeopathy say we dismiss it because we are very closed-minded, thinking that because there is no molecule of the active ingredient it doesn’t work.
NO! It’s dismissed because there is not a single serious study showing that, compared to placebo, it has a considerable effect. Plain and simple: you give sugar pills to someone (or an animal), telling him that it is homeopathy, and you give homeopathy pills to another person or animal, and you won’t tell the difference between the two (and this is done with many, many people or animals).
There are effects in science whose mechanism doctors don’t know and which don’t make sense, BUT they know they work because the statistics prove they work.
So please, homeopathy lovers, when saying it works, don’t say it worked for me or my dog because the dog got better when I gave it a pill, and don’t say we are closed-minded and that science dismissed many things in the past; just cite a study proving that it works statistically, and if the study is well designed the entire world will believe you! As simple as that!
Homeopathy is run by companies, and it is very profitable. Really, if they wanted to prove it they have the money to do so, and it would be in their interest. So why isn’t there a single validated STATISTICAL study showing that it works?
41. I spent 2 years at a birth center in Indonesia where they use homeopathy extensively. One could still argue the placebo effect, even with a language barrier – if a patient is handed a little white pill, they will assume it’s going to help them. But this doesn’t explain NEWBORN babies who revive, stop crying, calm down or go to sleep right after being given homeopathy. Or constipated babies, who haven’t pooped in a week, pooping minutes after homeopathy is administered. Or animals. I occasionally use homeopathy on my cats, which is extremely effective. I’ve had a cough that dragged on for 3 weeks clear up in 2 days with homeopathy – and I was an unbeliever at the time, so wonder how this speaks to the placebo effect. No one has been able to prove how acupuncture ‘works’ either, yet many people accept that acupuncture is effective against chronic pain, depression and a host of other ailments, not to mention greatly reducing recovery time from injury.
42. You Tim Ferriss,
are among the teachers that have sent me off on my own route when I was sailing as a navigational officer in the merchant marine.
Thank you!
Anthroposophical medicine uses homeopathic remedies. Rudolf Steiner, the founder of Anthroposophy (human wisdom and spiritual science), was quite the polymath, just like you, Tim, and many other great personalities in history.
As a Waldorf student, a human, a patient who has dealt with many different doctors, a friend, an entrepreneur, a songwriter, a guitarist, a parkour practitioner, a soon-to-be author, and a student of the occult (read: a student of the hidden, what everyone can know if they study it, just as being an entrepreneur is very hidden for many until they decide to study it), new worlds open which are hidden to the 5 senses.
Rudolf Steiner gave 6 specific exercises for this development: the development of the higher self, which every human has the capacity to develop if they want to. And if developed, it gives you the ability to see the world very differently. As with you, Tim: I doubt you see the world as the “normal” person does.
These are the six exercises.
– Thought. Developing the ability for absolutely clear thinking. No thought flickering or randomness. No running off into associations with random stuff.
– Will. The ability to choose your every movement, which is then defined with a verb as an action, which results in where you go in life. Deciding to go to college, or to hire college students to work for your company, results in two very different life paths, a.k.a. destinations, a.k.a. destinies.
– Equanimity. The Stoic quality. To let the world be, and not be tossed around from top to bottom. You’re mad that I didn’t take you out to dinner? Well, you are the one experiencing madness, not I. I am simply observing your experience, not going with you on the ride.
– The ability to see beauty. Looking for beauty in even the ugliest thing. The rotting dog with beautiful teeth, for example. Or another person’s ability to express madness can be quite beautiful.
– Open-mindedness. That there is a possibility. That something can be possible that you didn’t yet know. Letting it be open to further investigation for the truth. Not to be confused with being misled. When this is awakened, you see the world again as it is, and not as you think it is with abstract thought. For the antonym of abstract is concrete. And the abstract must come from something concrete.
– Mix them in different combinations.
Each exercise gives rise to a subtle feeling.
Basically spiritual stuff, but done with a scientific method. But how can spiritual things be done scientifically? Well, how can anything be done scientifically? We are the observers. And the whole matter of discussing art versus science is pointless. For art without science, or science without art, is neither science nor art, but simply random acts of reckless movement. Just look at the world, and reckless human movement is seen in many places.
Erkendung-Zoe-Bevegung-Iorta is a word I have made up as a representation of this, as neither the Danish, Spanish, American, German, Latin nor Greek languages have words that point toward that which is there.
Art and Science. Form, function and purpose. Math, Chemistry, Physics, Language. Look into it and how it all connects and enjoy your discoveries.
Jake Baerentsen, half American and Faroese, living in Denmark.
Good day.
I hope one day to meet you for a “whatever appears.”
43. I raised six children who never had to have antibiotics growing up, no tubes in their ears, etc. They were very healthy. I attribute this to the homeopathic remedies I used. They work really well for children. I suggest finding a Homeopathy 101 class to learn more about using remedies… you will be glad you did!
44. I am a veterinarian and a firm believer in homeopathy. I treated numerous horses with homeopathic remedies during a 30-year career as a racehorse vet. I could measure the effect of the medicine not only in performance but also in numerous tests of blood samples pre- and post-application.
Here is an interesting link from a Nobel prize winner on this subject:
45. I have found homeopathic remedies helpful. I also use Bach’s flower remedies, which, from what I have experienced, work on your subtle energy body. We are multidimensional beings. Science has not caught up with vibrational medicine. If something works… keep it. I watch TV, and frankly I do not know “how” a TV really works. Skirting close to “faith” makes me uncomfortable too, but I try to keep an open mind.
46. Dear Tim,
I am no expert in homeopathy; however, I usually try homeopathic (or also herbal) remedies first before I go the conventional route, and indeed it often works. I have also had cases where conventional medicine didn’t work and I went for homeopathic remedies… and it worked. Placebo surely works to a certain, or in fact high, degree; our mind is more powerful than we are often aware of. And I wish to add another phenomenon: psychosomatic illnesses. It is absolutely amazing to me to observe how often an illness of my daughter coincides with a stressful situation, or even an argument I am having with someone to which she became exposed. And the moment she is out of it, she is so quickly back to normal, much quicker than the doctor said the illness would need to go away. Thought I would add this…
47. Love your blog; however, you should think about getting info from a source other than Wikipedia. Any Joe Schmo can add info to Wikipedia. I have a chronic illness, and almost every “homeopathic” remedy I’ve tried has failed or shown little substantive result.
48. Bad Science by Ben Goldacre has good insights on this (essentially concluding it is mostly placebo at play here). It’d be interesting for you to interview him, especially to get his down-to-earth views on all the experimentation you’ve been doing in recent years.
49. My family uses homeopathic meds (in the true sense) and we have found they work. My son is on a regular course for some speech development issues, and we have seen a marked improvement in him. The cold meds the same homeopath prescribed didn’t work for him, but those that another gave did. So that shows it is not placebo. Plus, a 2-year-old won’t know what a placebo effect is.
50. Tim, I have your answer, within two articles from which I pasted the most important parts.
There is a subtle bio-energy that flows through all organic life. This energy is expressed as an electromagnetic vibrational frequency, and pure essential oils have the highest frequencies of any measured natural substance. Every atom in the universe has a specific vibratory or periodic motion. Each periodic motion has a frequency (the number of oscillations per second) that can be measured in hertz.
The following clips (that may be paraphrased) are from:
If parts of the body become imbalanced, they may be healed through projecting the proper and correct frequencies back into the body…This is why there are different forms of energy/vibrational healing including homeopathy.
The following clip is from:
What is interesting about homeopathy is that each different remedy and potency has a particular frequency. In the same way that we tune a radio to a particular frequency and listen to our favorite radio station, homeopathic remedies are attuned to a specific frequency. The frequency obtained depends on the substance diluted, the scale of dilution, and the number of dilutions. Different dilutions are used for different things. One other point about homeopathy that I want to stress is its healing and vibrational nature and the dilution of the original substance. With each successive dilution the substance becomes infinitesimal and unrecognizable in the water. Only a vibrational energy is left, carried by the water and alcohol. This vibrational energy stimulates our bodies to heal.
More forms of vibrational healing:
• Homeopathic remedies
• Acupuncture
• Energy healing
• Bach flower remedies
• Gem and crystal elixirs
• Chromotherapy
• Sound healing
• Bio-Electric or Physioacoustic devices
• Magnetic
• Radionics
Haley L
51. Funny that people do not see placebo as the best type of cure but rather seem to dismiss it as a non-cure, as though only chemicals are real cures. A placebo is the best kind of cure. And yes, placebo effects can be had without any pills or creams. Even if the placebo effect is simply the speeding up of the body’s own normal healing mechanisms, if it works, it works. I can induce in my body the effects of most remedies just by energetically inducing the effect with my will. Call it placebo and dismiss it if you want, but it is real.
1. JJ, this is not the problem; indeed, the placebo effect has a very well-founded place in modern medicine. The problem is when people who are not medically trained decide when something needs a placebo and when it needs something with a true biochemical effect. People have suffered greatly because they have been given sugar pills when in fact they needed real medicine. That is unacceptable. If it were simply a matter of some people making money from the placebo effect, that would be one thing; that people suffer unnecessarily because of it is another.
52. I don’t know.
There so many opinions about that.
Of course, I realize that it’s kinda unreliable stuff, but still…
If it helps people why not?
53. In my opinion, illness is our system’s own way to cure itself and get balanced. Normal muscle pain is just a sign that you are getting stronger and need rest; sometimes warm and cold exposures can speed up the healing process. Attention-driven healing can, over time, heal an injury. Concerning the use of homeopathy, I think that our adaptive cells will create a solution to exposure to a low dose of medicine or the like. In some cases this adaptive behaviour will cure the illness. If you look at your body as a muscle, some stress is needed to grow; this stress could also come from taking medicine, forcing the cells to find a response, as we all want to live. Muscle pain in itself will nearly always heal over time with rest and the right digestion. It is always a good idea to consult a doctor.
54. This post made me smile! Thank you! I remember growing up in Brazil, spending holidays at the farm, and the beautiful green glass bottle filled with arnica leaves and rubbing alcohol. Whenever we would hurt ourselves someone would shout: “passa arnica!” (“apply arnica!”). Beautiful memories. They have been studying that plant in Brazil for as long as I can recall. They now use the gel to help burn victims heal faster. They also use it a lot in plastic surgery post-ops.
I love homeopathics. Even though my dad (a doctor) and I argue about it far too often, I see his point of view that not everything can be treated with this sort of medicine.
55. Tim, I am a little disappointed that Wikipedia is your source. Anyone can edit an entry, so that is definitely NOT the place to seek credible information. Like so many of the other posters, I also use homeopathic medicine and have also experienced its positive effects. I am fortunate to have one of the best homeopaths in the US very close to me and can definitely attest to the safety and credibility of this field. I had many maladies when I started seeing her, and she was in fact able to treat a lot of emotional issues as well as physical ones. She uses a combination of acupuncture and remedies with astounding results. I got to the point that I allowed her to treat my body without even telling her what my problem was. She could pretty much figure out what was going on just by reading my energies. I would walk into her office with a nasty sinus issue and walk out an hour later with almost complete relief. She treated my 7-year-old son, who had developed stomach ulcers. I have complete trust and faith in her and never really asked what she was treating him with. However, the following month when we saw her, I mentioned that my son had become very verbal in the past month and was really talking back a lot. She kind of laughed and advised she had treated him (emotionally) to speak out rather than hold things in, which had caused the ulcers. I did NOT know her treatment until after I noticed its effects, but the ulcers were gone.
I can also attest that after 4 years of birth control, I ended up in the hospital and on the block to have a full hysterectomy at 24 years old due to completely cystic/destroyed ovaries. I was given a medical diagnosis of never being able to get pregnant again, and of having a super high chance of miscarriage if I did; I had no expectations of ever having another child. Apparently my homeopath began treating my ovaries along with my other maladies at my visits, completely unknown to me. And a year later I was pregnant with no complications, went full term and had a completely healthy baby. I thought it was a miracle until my husband revealed that my doctor had been treating my ovaries at my visits. I have seen my doctor completely wipe out autism, cancer and other major illnesses with my own eyes. I am a firm believer in homeopathic medicine, and this is one area that I do not need to fully understand to KNOW that it works. From my experience, your search for answers in this field should encompass a thorough study of eastern medicine, not just the typical experience of walking into a doctor’s office, telling them your problem, and walking out with a prescription/remedy.
56. Homeopathy worked for me and it worked for my children.
My little boy had a kidney stone when he was 3, probably due to his inability to properly assimilate calcium. We could not identify any allopathic treatment available for him, so for more than half a year we just watched his hematuria happening from time to time, with the stone size unchanged on ultrasound. The doctor gave him something that was supposed to dissolve it, but it was designed for adults. After a while we dropped that treatment and went on homeopathy, a bit skeptical but also considering there was nothing else to try except to wait for him to grow up until surgery would be safe. After 4 months the stone fully disappeared. I can testify he did not grow so much during that year as to allow the stone to be released naturally, and because of the same calcium problem, no supplements were allowed.
Later on, homeopathy cured his serous otitis, which had significantly reduced his ability to hear with that ear, in spite of several courses of antibiotics and inhaling lots of chemicals for a few months.
I have had chronic rhinosinusitis for a very long time. After a few years of homeopathy the quality of my life improved significantly: now I'm able to scuba dive and sing lead vocals in a heavy metal band with my friends.
I have an engineering background, so I am also fully puzzled about how such small dilutions could produce these effects. On the other hand, I also believe scientists are far from fully understanding the complexities of the human body, and somehow we should try things (especially since they're safe), measure results and interpret them statistically.
57. I practice as an Osteopath and Homeopath in the UK and Barbados, and I use homeopathy extensively on my patients, including children. In combination with Functional Diagnostic Medicine I have treated many patients, pop stars, top athletes, Olympians, and 11-time World Surf Champ Kelly Slater using the EMF homeopathic approach. I use a homeopathic scanner whereby patients are hooked up to a computer scanner to zero in accurately on the correct dilution or frequency. It's a great tool in the toolbox.
I wrote about this in my first book as an aid to helping cancer patients inform their immune systems as to what to do about the layers of rot going on inside their bodies caused by chronic fungal overload. Clue: chemotherapy is anti-fungal therapy, and homeopathy allows us to discover how many fungal layers there are hiding in patients' bodies.
The most important thing to remember about homeopathy is that it works on an EMF level. There is no trace of the original substance beyond a couple of dilutions. What's actually happening is the transference of the EMF signal into the water's crystalline memory. Dr. Hahnemann, originally a first-class medical doctor who spoke many languages (English, French, Greek, Latin), stumbled onto a way to treat patients using dilutions, which really are variations in the EMF frequency bandwidths.
That's how I look at it. It's actually more to do with physics than with anything physical. It's a subtle energy technique that kick-starts the physiology in a nudging kind of way.
Top scientist Jacques Benveniste said something along the same lines too, and it was widely discussed in Lynne McTaggart's "The Field" (by the way, an exceptionally good book).
In my opinion, if cell phones work on an EMF level, then homeopathy is a cell phone call to our cells and organs, informing them how to fix things and bring them into balance.
Balance is the key… and very few physicians really know how to do that. Sometimes less is really more, but that starts after being in the trenches for more than 20 years.
Best for now LM
58. Hi Tim, what a cool article! I have an Acupressure Mobile App Brand I would like to expand. However, I am not sure how. Would you like to chat about that? Dr. Bargak
59. Hi Tim!
Psora is nothing to joke about. Please always see a specialist doctor.
Psora often occurs together with syphilis and sycosis. It is contagious and can spread.
Please have everything carefully examined!
No homeopathic experiments! It is dangerous and very contagious!
60. “Most readers and even many homeopaths will be surprised to learn that that has already happened! During the Third Reich the (mostly pro-homeopathy) Nazi leadership wanted to solve the homeopathy question once and for all. The research programme was carefully planned and rigorously executed. A report was written and it even survived the war. But it disappeared nevertheless – apparently in the hands of German homeopaths. Why? According to a very detailed eye-witness report [9 – 12], they were wholly and devastatingly negative.”
61. My mind says 'no' but my body says 'yes'. I've used them on myself and my children with good results – particularly with eye infections, for some reason. I've had babies in France and Germany, where homoeopathic medicine is prescribed alongside standard medicines. The efficacy of homoeopathy continually stuns me, yet I remain quite cynical! I think there is something we don't yet understand.
62. Tim, first of all, thank you for bringing such important issues up for discussion.
It is really a tragedy that homeopathy is supported by most using a "believe it, because it works" model rather than science.
It is not Samuel Hahnemann who should be blamed. He invented this system at a period when western medicine looked more blasphemous than homeopathy does now. His solution was to let time heal and to avoid all the painful and unscientific things practiced by western medicine in general, like draining blood to let out diseases!
When chided by the US Supreme Court that "if homeopathy works then biology, chemistry and physics are wrong", leading homeopathy practitioners retorted that water has memory!
Now tell me who is the real culprit: the innocent user? Poor Hahnemann, who just wanted a placebo effect? Or the lobby behind this farce?
63. There's a book called The Field by Lynne McTaggart that talks about this. Scientific explanation/evidence for homeopathy is in chapter 6. But the whole book is kind of about scientific explanations for all things similar to homeopathy. Listening to the audiobook now.
64. Tim, as you know, the BEST scientists are the most humble, because they know that life and nature are a lot more complex than most of us realize. Second, did you know that many hormones in our bodies operate at such a LOW dose that most skeptics of homeopathy would insist they could not have any physiological effects? (Yes, most skeptics of homeopathy are embarrassingly uninformed and misinformed about reality.)
As for scientific evidence for homeopathy, it is much more substantial than you may realize:
Below is but a very small # of studies published in LEADING conventional medical journals:
Chronic obstructive pulmonary disease: Frass, M, Dielacher, C, Linkesch, M, et al. Influence of potassium dichromate on tracheal secretions in critically ill patients, Chest, March, 2005;127:936-941. The journal, Chest, is the official publication of the American College of Chest Physicians.
Hayfever: Reilly D, Taylor M, McSharry C, et al., Is homoeopathy a placebo response? controlled trial of homoeopathic potency, with pollen in hayfever as model,” Lancet, October 18, 1986, ii: 881-6.
Asthma: Reilly, D, Taylor, M, Beattie, N, et al., “Is Evidence for Homoeopathy Reproducible?” Lancet, December 10, 1994, 344:1601-6.
Fibromyalgia: Bell IR, Lewis II DA, Brooks AJ, et al. Improved clinical status in fibromyalgia patients treated with individualized homeopathic remedies versus placebo, Rheumatology. 2004:1111-5. This journal is the official journal of the British Society of Rheumatology.
I could also list numerous meta-analyses that also show the efficacy of homeopathic medicines (once again, ALL published in leading medical journals):
Linde L, Clausius N, Ramirez G, Jonas W, “Are the Clinical Effects of Homoeopathy Placebo Effects? A Meta-analysis of Placebo-Controlled Trials,” Lancet, September 20, 1997, 350:834-843.
Kleijnen J, Knipschild P, ter Riet G. Clinical trials of homoeopathy. BMJ 1991, 302, 316-23. Of the 22 best studies, 15 showed positive results from homeopathic treatment. The researchers concluded, "there is a legitimate case for further evaluation of homeopathy."
65. In addition to the body of scientific clinical evidence, you and your readers would benefit from knowing that significant amounts of “nanodoses” remain in homeopathic solutions…and this has been verified by high-quality research published in a LEADING scientific journal actually published by the American Chemical Society, called “Langmuir.”
First, just because homeopathic drug manufacturers dilute their medicine by 1:10 or 1:100 does NOT mean that the active medicine “disappears.” In fact, your calculations are COMPLETE FANTASY and have NO basis in fact!
To clarify, homeopathic medicines are made with double-distilled water in glass containers. Glass is used because it was assumed that glass is inert, as compared with metal. However, conventional research testing glass has discovered that the vigorous shaking in between each dilution leads to silica fragments falling off the glass walls at a level of 6 parts per million.
Then, the vigorous shaking creates bubbles and "nanobubbles" that bring oxygen into the water and increase the water pressure to 10,000 atmospheres (according to Bill Tiller, PhD, former head of the materials science department, Stanford University). Therefore, whatever medicine is placed in the water is forced into the silica fragments, and each substance will interact with the silica fragments in its own idiosyncratic fashion.
The point here is that when the homeopathic drug manufacturer pours out the water from the test tube, the silica fragments cling to the glass walls… and therefore they persist, and in significant numbers! Further, there is solid scientific evidence for this:
This study was replicated with SIX different medicinal agents and tested with THREE different spectroscopies, and it found nanodoses of the original substances at doses at which our hormones are KNOWN to operate and to have significant physiological actions. I sincerely hope that you, Tim, do not think that hormones are placebos.
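For reference, the dilution arithmetic being argued over in this thread is straightforward to check. A minimal Python sketch, assuming one mole of starting substance and ideal 1:100 serial dilutions (one per "C" step); the helper name is illustrative only:

```python
# Expected molecules of the original substance remaining after serial 1:100
# dilutions (one "C" potency step = one 1:100 dilution), assuming one mole
# of starting material and perfect mixing.
AVOGADRO = 6.022e23  # molecules per mole

def molecules_remaining(c_potency, moles_start=1.0):
    return moles_start * AVOGADRO * (0.01 ** c_potency)

print(f"{molecules_remaining(6):.2e}")   # 6C:  ~6.0e+11 molecules still present
print(f"{molecules_remaining(30):.2e}")  # 30C: ~6.0e-37, effectively none
```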
66. Tim,
I am a chiropractor who incorporates homeopathy, meridian therapy (TCM), nutritional therapy and applied kinesiology (AK) into my practice protocols. I have utilized homeopathic remedies in my practice for longer than you have been alive. Like you, I am also very intuitive and skeptical about just about everything until I have either personally experienced its benefits or observed my patients do the same, but I always keep an open mind to learning new things, as I realized long ago that, unlike my teenage children, I do not yet know everything! When it comes to homeopathy, I have witnessed amazing results and also cases where no apparent benefit was realized. I find that choosing remedies based on symptoms alone is hit or miss at best. I always look for a remedy to produce some type of observable change in the patient's body before I will prescribe it, such as a weak muscle getting stronger, a particular organ meridian balancing, or even leg-length discrepancies balancing. This seems to increase the odds of a remedy having a positive therapeutic effect dramatically. I tell my patients that homeopathic remedies are like the keys on a janitor's belt. If he tries to open a door with the wrong key, the door won't open, but at least he won't damage the door. On the other hand, if he utilizes the right key, the door opens. Conventional medicine may open the door as well, but they often use grenades to accomplish this simple task! Going to a health food store and buying a homeopathic remedy for migraine headaches may work one in a hundred times, but the results increase dramatically if you first determine what is causing the headaches, such as gallbladder inflammation, colon toxicity, low blood sugar, emotional stress, adrenal overload, etc. By first identifying the cause, the practitioner can then choose a more appropriate remedy that will increase the chances that homeopathy will achieve the desired effect. The bottom line, my friend, is that healing is an art. I can't explain how the Chinese masters came up with the concept of meridians, but I do know that when I suffered from severe migraine headaches years ago, certain acupuncture points could relieve my headache within minutes. As St. Anselm said in the 12th century regarding faith, "I do not seek to understand so as to believe, but rather I believe in the hope that one day I will understand". I believe that applies to healing as well. We know so much less about this amazing body of ours than we think we do, so keep an open mind and be humble in your approach to learning. Will you also try to debunk prayer as well? Enuf' said. Best wishes to all! Dr. John
1. Hello "Dr". Tell us how a homeopath/chiropractor/applied kinesiologist determines that a patient has gallbladder inflammation, colon toxicity, low blood sugar, or adrenal overload when the practitioner is not even qualified to do a simple blood test?
If there’s one thing more bs than homeopathy it’s applied kinesiology.
1. Actually, "Mr. Magoo", chiropractors ARE qualified to do a simple blood test and much more… but why confuse you with the facts when your simple mind is already made up. I learned long ago not to debate anything with geniuses like yourself who know everything about nothing. Best wishes. "Dr." John
2. Not offended, been called Mr Magoo since I was 6. Laughable that you know everything about me after a minor blog exchange.
The origins of chiropractic medicine (ha ha) are pretty dubious. Germ theory denialism, every ailment caused by subluxation (subluxation theory, not accepted by orthopedic surgeons), and of course applied kinesiology (not to be confused with kinesiology). In Australia chiros can massage your back only, no blood tests at all. Just wondering what a chiro in the US would do if you tested positive for Chlamydia. That would be an L4-L5 adjustment, no doubt.
I’m just going off to fill my car with homeopathic fuel now.
Cheers “Dr”.
67. “There are more things under heaven and on earth, Horatio [Tim], than are dreamt of in your philosophy [science].”
From the snarky tone of this post, Tim, it appears that your ego-driven brain is mocking and belittling the empirical evidence of your body, in a desperate attempt to validate itself by proclaiming the infallibility of its thought process.
Fortunately, Tim’s body is still takin’ care of bizness; even if Tim’s brain hasn’t a clue how it happens, Tim still feels better. And that, no doubt, is why there’s no mention in your post of you tossing your arnica gel (which IS in fact just as much a homeopathic medicine as the 30C pills are) in the trash.
Just because something — anything — cannot be proven using currently understood tools, knowledge, and methods does NOT automatically prove that it is useless, invalid, or fraudulent. Can you see with your ears? Smell with your eyes? Prove the color blue exists with your taste buds?
All that said, I nonetheless appreciate your writing this post, because it certainly is stimulating some lively exchanges on homeopathy…and open-mindedness.
68. I noticed that most of the comments seem either anti-homeopathy or favor the "placebo" argument. One commenter even attributed these conclusions to the above-average intelligence of Tim's readers. I think we can dismiss the placebo explanation since homeopathy has treated children, animals and plants successfully, a group not likely to be susceptible to the placebo effect. If homeopathy is a hoax, then it has been a hoax for a very long time indeed. Developed at the turn of the 19th century by Samuel Hahnemann (Hahnemann was a remarkable individual and it is worth reading his life story), it gained widespread acceptance in Europe with the successful treatment of cholera patients. Reported mortality rates of those treated conventionally ranged from 40% to 80%, while for those treated under homeopathy reported mortality rates were 7% to 10%. During the Influenza Pandemic of 1918, an Ohio doctor who kept careful statistics in his region reported that 24,000 cases of flu treated allopathically had a mortality rate of 28.2%, while 26,000 cases of flu treated homeopathically had a mortality rate of 1.05%.
One case from the present time: Dr. Luc Montagnier’s (a Nobel Prize winning French virologist) current research is investigating the electromagnetic waves that he says emanate from the highly diluted DNA of various pathogens. A quote from Dr. Montagnier: “I can’t say that homeopathy is right in everything. What I can say now is that the high dilutions (used in homeopathy) are right. High dilutions of something are not nothing. They are water structures which mimic the original molecules.”
There are too many examples of homeopathy being taken seriously by highly intelligent people and institutions to classify it as a hoax. In India, entire hospitals are devoted to homeopathy. The Swiss government recently released a report recognizing homeopathy's efficacy.
Although homeopathy does not conform to the “scientific method”, it is the essence of empirical science, having been developed based entirely on observations. So if one is serious about knowing about homeopathy, one needs to do more than read the wiki article that google found.
I believe that one day the use of drugs to suppress symptoms will be looked on as primitive and as more sophisticated analytical tools are developed, the “science” of homeopathy will become evident.
69. Ever heard of prana, chi, or ki? These are all words for "universal energy" (in Sanskrit, Chinese, and Japanese). I don't know the answer to how homeopathy works for sure, but I do know about energy, and my personal guess is that the liquid absorbs the prana from the substance. Different plants, including healing plants, have different types of prana, which have various healing effects. (You can heal with these different energies directly using MCKS Pranic Healing, my energy healing modality of choice. Others include Reiki and acupuncture.)
Always stay open-minded, folks: we don't know everything! The miracles or even quackery of yesterday are the hard (and obvious) science of today.
"Miracles do not violate the laws of nature but only what we know about nature." –MCKS
70. Tim, I am glad you spoke on this topic. The placebo effect is very real. In fact, it can be great given its effectiveness without side effects. It very likely contributed to the observed outcomes. Of course, this is not to say "regression toward the mean" or another unexplained factor did not contribute as well. As a researcher (I am a biological science postdoc at a university), we often try to control all other variables with the exception of the independent variable being tested, which can be extremely difficult, especially in humans. Unfortunately, humans do not control the many other variables in their own lives (even if they think they do) that will impact their biology (e.g. diet, activity, sleep, etc., etc., etc.). Too bad for you that if you know it is a placebo, it loses its effectiveness. I honestly did not have time to read many responses despite my interest, but I did see one about an animal being treated. That too could very well be attributed to the placebo effect, as you mention with your "sham" surgery. If the pet owner is administering/tending to the animal daily in a repeated behavior, this could parallel a human being treated. The care, the attention, and such can influence the animal mentally and physically. Again, the problem was that all variables were not controlled, thus the effect cannot be attributed to a specific, biological action of the treatment. People often assume a placebo is a "sugar pill," but that is just not the case. Coincidences absolutely inflate the profitability of the supplement industry!
71. Wow, I really enjoyed this essay. Just listening to you makes me that much smarter than I was 10 minutes ago. Please, I'd like to hear your thoughts on St. John's Wort and melatonin.
1. There have been proper clinical trials done on St. John's Wort. It is one of the few herbal remedies that actually works. The study showed a measurable improvement in people suffering from mild depression.
72. Is the Arnica cream legit or BS? If the pills are crap, what is the logic behind the cream, which also has some weird homeopathic dosage? Very confused; I have liked the cream a lot.
73. Never found any meaningful improvement from homeopathy. MMJ however has done wonders for me. No wonder the pharmaceutical industry has done so much since 1937 to keep it illegal.
74. I'm a logical person, so I get the "the dilution can't possibly work" type of arguments, and if only humans were involved I would favour the placebo explanation.
However… our dog had a wart under its eye that the vet couldn't cure; they tried a number of things, so we took her to a homeopath and had a dilution prescribed, and it cured it in 2 weeks.
I can't believe this could ever be considered a placebo effect, as other pills had been given to the dog and hadn't worked. All I can say is I can't explain why, but there was causation.
75. I find it fascinating that the intensity of your skepticism allows you to override the fundamental reality of your experience, which is to say that, by your own admission, the 30C actually worked better. Add to that countless other similar experiences by many others, in contrast to the resistance of those with no experience who nevertheless find a need to refute such experiences based on theory, not reality. All the mental gymnastics in the world does not negate the collective experiential evidence behind homeopathy.
76. This might help explain how homeopathy works.
What Nobel Laureates say about homeopathy……on-homeopathy
1. She's the author of the article referencing all the findings of past Nobel laureates.
2. She’s not a doctor and all her qualifications are from some bogus homeopathic university. So she writes an article that I’m sure all those past Nobel Laureates would find misrepresents all their findings.
77. If you burn yourself and you ask me for help, I can, from far away, with the help of a prayer, make your pain go away. There is no way to explain why with science, and no placebo effect, because you will not believe it. If you need it, just tell me where you burned yourself and how big it is. Homeopathic remedies also work really well with animals. Science is great but is limited, because the doors and windows are closed very tight. Time to open your mind if you want to be credible. Hi from Switzerland. Brigitte
78. My only disproof is that my wife uses a homeopathic [something vomica] for vertigo, a real condition that has sent her to the emergency room, and it has absolutely worked.
79. “Ultrafast memory loss and energy redistribution in the hydrogen bond network of liquid H2O”
(I’m including a link but if it’s removed, just Google the title above.)
“Many of the unusual properties of liquid water are attributed to its unique structure, comprised of a random and fluctuating three-dimensional network of hydrogen bonds that link the highly polar water molecules. One of the most direct probes of the dynamics of this network is the infrared spectrum of the OH stretching vibration, which reflects the distribution of hydrogen-bonded structures and the intermolecular forces controlling the structural dynamics of the liquid.
“… Our results highlight the efficiency of energy redistribution within the hydrogen-bonded network, and that liquid water essentially loses the memory of persistent correlations in its structure within 50fs.”
As restated by Wikipedia, that means that liquid water essentially loses the memory of persistent correlations in its structure within fifty millionths of a nanosecond.
80. One cannot use Newtonian physics and the laws of thermodynamics to explain quantum events. The post synaptic receptor is 10 to the minus 29 angstroms in size. This is very small (quantic) and requires quantum physics to understand what is taking place at this level. And there are many studies showing that indeed a homeopathic dilution has been changed from the original medium and base substance.
There is imprinting of a message into the polymorphic structure of the water/alcohol mixture.
There is storage of information in the quantic states of the electrons, atoms, and molecules of the carrier fluid.
Analyzing the spectral reaction of the homeopathic to conductance, inductance, and capacitance gives us a trivector analysis of the electrical signature of the homeopathic.
One company studies the phenomenon of memory in water and alcohol through photon scattering tests, nuclear magnetic resonance and freezing.
Another common way to measure energetics is with Kirlian photography. It involves placing the product in a highly charged electrical field containing rare gases. Each homeopathic produces its own unique fingerprint or pattern of colors to identify it.
1. Stop using quantum physics to justify magic. Kirlian photography has been debunked many times. I understand quantum computing, yes there is ‘storage’ of information in entanglement, which can be done by very specific processes, but it wouldn’t happen by diluting and shaking bottles.
2. If water or alcohol could be imprinted with the “vibration” or memory of a homeopathic substance it had been in contact with then wouldn’t that memory extend to every substance/mineral/compound the carrier had been in contact with?
With regard to Kirlian photography, we now know the reactions were due to moisture alone. Coins and other metallic or inanimate objects only produced the typical energy formation after being touched with sweaty fingers.
81. Tim,
Interesting piece. As a surgeon, I became curious about this several years ago after treating a patient with ruptured appendicitis. They were treated with antibiotics initially, with surgery planned for 6 weeks later, after the inflammation had subsided. The patient chose to use several homeopathic remedies in lieu of oral antibiotics at home after discharge from the hospital. I was skeptical, but went along with it. Prior to surgery, she was scheduled for a CT scan to help plan the operation. Pretty much every patient has some scarring, inflammation, or even small abscesses in the area at this point. To my astonishment, that area of her abdomen looked pristine, like nothing had ever happened. I've never seen this occur before. She gave me a book explaining the concepts, which I read with interest.
Although the “science” used to develop the concepts (like which compound to use for what diagnosis) is very sketchy, I believe the effects can be real somehow. Perhaps placebo effect. But what if there is some effect related to quantum mechanics, where the compound molecules are “entangled”, and can appear to be in multiple places at once? That might explain why it’s possible to have an effect despite the usual calculations that there shouldn’t be any present at the given dilution. It still doesn’t account for why a particular substance should work in any given situation.
Curious, and I have to admit I can’t completely chalk it up to placebo.
1. With great respect, I’ll point out (as I’m sure you know) that correlation doesn’t imply causation. Check out the very funny graphs at FastCo:
The question I still have about “entanglement” or “imprinting” or whatever is, if matter or water can retain a “memory” (sorry about the quotes) of previous, er, relationships with other substances, why would the substance originally mixed by the homeopath – arnica or whatever – create a response in the patient, when there are doubtless a kabillion other material relationships experienced by that water?
The universe is old. Hydrogen and oxygen atoms have been around a long time. As another commenter implied, the water’s already been through a thousand bladders. Why wouldn’t other elements the water has mixed with also have some effect on the patient?
I tried homeopathic remedies prescribed for ulcerative colitis by an MD/naturopath nearly 40 years ago, when I was more willing to entertain Woo. The little sugar pills did not help. This has always been the case, sooner or later, when I’ve attempted to test some Woo precept. And I don’t understand the attempts to muddle homeopathy with quantum mechanics except that quantum theory is both new/hip and poorly understood by nearly everyone, so citing quantum effects functions as an ultimate conversation stopper.
82. Tim, I teach pharmacology at Washington University in St. Louis. Your comments outline exactly how I teach this topic! Attributing the longevity of the myth of homeopathy to magic gives it too much credit. It became legal only because of ancient politics, and remains that way.
Placebo is a powerful thing, and has been proposed to work on animals due to diminished owner anxiety and pet owner perception, e.g. “Yes, Grover seems to be walking better.”
83. In reality the major effect with any kind of medicine, be it allopathic, naturopathic or homeopathic, is due to placebo. Placebo is activation of your own internal energy resources via an external focus (gee, I can't really heal myself, can I? It's gotta be something out there!). Can I create change in my own health? Google epigenetics to see the possibilities. Is homeopathy real? I have had positive responses, but then I expect this lol. It's as valid as any other form of medicine, depending upon your belief structure, and direct experiential knowledge is the only way to find out if it's good for you 🙂
84. I have used homeopathy for years and find it to be incredibly powerful. Here's a different way of looking at it that I think might help. If you think of vitamins and minerals, they are only catalysts for enzymes. It's the enzymes that do all the work. An example: zinc is important to take, but that's because it stimulates over 300 activities in the body controlled by enzymes. For homeopathy, it's the same thing: the substance is a catalyst for the immune system (and other systems). So though the substances might be diluted to the nth degree, the immune system gets stimulated even with the smallest doses. There's much more to it, but I think seeing the ingredients as catalysts makes it easier to understand.
85. Hey Tim, Instead of thinking “Now, if I could just forget what I read on the label, I could repeat it next time.”
Why not take the opposite approach? Since your expectation was most likely the cause of the faster healing, you will now heal this much faster every time. Even faster with the gel.
Maybe expect even more next time?
86. Hey Tim,
Regarding options #1 and #4: no reputable independent double-blind study has shown homeopathy to work beyond a placebo. If one did, it would be accepted by the scientific community, and physics would also welcome it, since it would mean new physics.
Some people here are mentioning ‘but it works on animals!’, which in this case is option #3, nothing more than a regression toward the mean…
87. While I do believe in the placebo effect and find it to be a worthwhile intention and/or by-product of some treatments, there is efficacy to homeopathy separate from the placebo effect–even though most do not understand it by conventional norms.
It has to do with vibration and frequency. Do you think there is a molecule of substrate from a tiny pill we take “as conventional medicine” after it has become diluted in our bodies? That too has become diluted beyond recognizable molecules in our bodies. How do you think THAT works?
Dis-ease carries vibrational patterns. Energetic "signatures" are carried by homeopathic remedies, and these cancel out the dis-ease patterns, perhaps long enough for our body's amazing healing system to take over and return us to homeostasis. Read some of James Oschman's energy medicine articles and books. Even today, one can be sent a computer file with a specific frequency that, when applied to the body, can counteract dis-ease.
Frankly I'm surprised by the lack of research that went into your article, considering the rigorous exploration you usually do.
88. Look to the Russian Academy of Sciences, particularly the work done by Stanislav Zenin on electromagnetic potential and clathrates in water; very interesting stuff. Also Masaru Emoto's work on water crystal formation sheds some interesting light on the weird and wonderful world of water. Homeopathy gets confusing in terms of Newtonian physics but appears perfectly plausible in the quantum realm. There is also the problem of it being a cheap, effective, widely available and un-patentable alternative to pharma.
89. I use homeopathic St. John's Wort for neuropathic pain. I find that 6X works and 30C doesn't. I expected the 30C to work fine, so I don't believe it's placebo. I think it depends on the amount of material in it. For a discussion of the different scales see
BTW, Arnica rocks. We use Traumeel pills and lotions for various bruises, aches and pains. |
951df0ef0d8138b3 | Quantum state
In quantum physics, a quantum state is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. A mixture of quantum states is again a quantum state. Quantum states that cannot be written as a mixture of other states are called pure quantum states, while all other states are called mixed quantum states. A pure quantum state can be represented by a ray in a Hilbert space over the complex numbers,[1][2] while mixed states are represented by density matrices, which are positive semidefinite operators that act on Hilbert spaces.[3][4]
Pure states are also known as state vectors or wave functions, the latter term applying particularly when they are represented as functions of position or momentum. For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant state vectors are identified by the principal quantum number n, the angular momentum quantum number l, the magnetic quantum number m, and the spin z-component sz. For another example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. The Hilbert space for the electron's spin is therefore two-dimensional, constituting a qubit. A pure state here is represented by a two-dimensional complex vector \((\alpha, \beta)\), with a length of one; that is, with
\[ |\alpha|^2 + |\beta|^2 = 1, \]
where \(|\alpha|\) and \(|\beta|\) are the absolute values of \(\alpha\) and \(\beta\). A mixed state, in this case, has the structure of a \(2 \times 2\) matrix that is Hermitian and positive semi-definite, and has trace 1.[5] A more complicated case is given (in bra–ket notation) by the singlet state, which exemplifies quantum entanglement:
\[ |\psi\rangle = \frac{1}{\sqrt{2}} \left( |\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle \right), \]
which involves superposition of joint spin states for two particles with spin 1/2. The singlet state satisfies the property that if the particles' spins are measured along the same direction then either the spin of the first particle is observed up and the spin of the second particle is observed down, or the first one is observed down and the second one is observed up, both possibilities occurring with equal probability.
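To make the singlet's anticorrelation concrete, here is a minimal numerical sketch (Python with NumPy; an editorial illustration, not part of the original article) that builds the state in the tensor-product basis and evaluates the Born-rule probabilities for same-direction measurements:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)    # |up>
down = np.array([0, 1], dtype=complex)  # |down>

# Singlet state (|up,down> - |down,up>)/sqrt(2) in the product basis
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Born-rule probabilities for measuring both spins along the same direction
outcomes = {"up,up": np.kron(up, up), "up,down": np.kron(up, down),
            "down,up": np.kron(down, up), "down,down": np.kron(down, down)}
for label, vec in outcomes.items():
    p = abs(np.vdot(vec, singlet)) ** 2   # np.vdot conjugates its first argument
    print(label, round(p, 3))  # opposite outcomes: 0.5 each; equal outcomes: 0
```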
A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. The Schrödinger–HJW theorem classifies the multitude of ways to write a given mixed state as a convex combination of pure states.[6] Before a particular measurement is performed on a quantum system, the theory gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the linear operators describing the measurement. Probability distributions for different measurements exhibit tradeoffs exemplified by the uncertainty principle: a state that implies a narrow spread of possible outcomes for one experiment necessarily implies a wide spread of possible outcomes for another.
Conceptual description
Pure states
Probability densities for the electron of a hydrogen atom in different quantum states.
In the mathematical formulation of quantum mechanics, pure quantum states correspond to vectors in a Hilbert space, while each observable quantity (such as the energy or momentum of a particle) is associated with a mathematical operator. The operator serves as a linear function which acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable. For example, it is possible to observe a particle with a momentum of 1 kg·m/s if and only if one of the eigenvalues of the momentum operator is 1 kg·m/s. The corresponding eigenvector (which physicists call an eigenstate) with eigenvalue 1 kg·m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg·m/s, with no quantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg·m/s.
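As an illustrative sketch of this correspondence (Python with NumPy; the matrix below is a toy Hermitian observable chosen for brevity, not a momentum operator), the eigenvalues are the possible measurement results and the eigenvectors are the eigenstates with definite values:

```python
import numpy as np

# A toy Hermitian observable (the Pauli-X matrix)
A = np.array([[0, 1],
              [1, 0]], dtype=complex)

eigenvalues, eigenvectors = np.linalg.eigh(A)  # eigh is for Hermitian matrices
print(eigenvalues)  # possible measurement outcomes: [-1.  1.]

# An eigenstate has a definite value: applying A to eigenvectors[:, 1]
# just rescales it by eigenvalues[1], so a measurement is certain to give it.
state = eigenvectors[:, 1]
print(np.allclose(A @ state, eigenvalues[1] * state))  # True
```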
On the other hand, a system in a superposition of multiple different eigenstates does in general have quantum uncertainty for the given observable. We can represent this linear combination of eigenstates as:
\[ |\Psi(t)\rangle = \sum_n c_n(t)\, |\phi_n\rangle . \]
The coefficient \(c_n(t)\) which corresponds to a particular state \(|\phi_n\rangle\) in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum state changes in time is governed by the time evolution operator. The symbols \(|\) and \(\rangle\)[lower-alpha 1] surrounding the \(\phi_n\) are part of bra–ket notation.
Statistical mixtures of states are a different type of linear combination. A statistical mixture of states is a statistical ensemble of independent systems. Statistical mixtures represent the degree of knowledge whilst the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities \(p_n\) of different states \(|\phi_n\rangle\). A number \(p_n\) represents the probability of a randomly selected system being in the state \(|\phi_n\rangle\). Unlike the linear combination case each system is in a definite eigenstate.[7][8]
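The difference between the two kinds of combination can be made explicit with density matrices (treated formally in the Mixed states section below); a short NumPy sketch, with states chosen only for illustration:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Coherent superposition (|up> + |down>)/sqrt(2): complex amplitudes
plus = (up + down) / np.sqrt(2)
rho_superposition = np.outer(plus, plus.conj())

# Statistical mixture: 50% |up>, 50% |down>, real positive probabilities
rho_mixture = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

print(rho_superposition)  # off-diagonal 0.5 entries encode interference
print(rho_mixture)        # diagonal only: no phase relation between states
```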
The expectation value of an observable A is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories.
There is no state which is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement Q(t) and the momentum measurement P(t) (at the same time t) are known exactly; at least one of them will have a range of possible values.[lower-alpha 2] This is the content of the Heisenberg uncertainty relation.
Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state.[9][10][lower-alpha 3] More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure A twice in the same run of the experiment, the measurements being directly consecutive in time,[lower-alpha 4] then they will produce the same results. This has some strange consequences, however, as follows.
Consider two incompatible observables, A and B, where A corresponds to a measurement earlier in time than B.[lower-alpha 5] Suppose that the system is in an eigenstate of B at the experiment's beginning. If we measure only B, all runs of the experiment will yield the same result. If we measure first A and then B in the same run of the experiment, the system will transfer to an eigenstate of A after the first measurement, and we will generally notice that the results of B are statistical. Thus: Quantum mechanical measurements influence one another, and the order in which they are performed is important.
Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models.
Schrödinger picture vs. Heisenberg picture
One can take the observables to be dependent on time, while the state σ was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observables P(t), Q(t).) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. (This approach was taken in the earlier part of the discussion above, with a time-varying state .) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention.
Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture.[12]:65
Formalism in quantum physics
Pure states as rays in a complex Hilbert space
Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space, because the unit sphere is defined as the set of all vectors with norm 1.
Multiplying a pure state by a scalar is physically inconsequential (as long as the state is considered by itself). If a vector in a complex Hilbert space \(H\) can be obtained from another vector by multiplying by some non-zero complex number, the two vectors are said to correspond to the same "ray" in \(H\)[1]:50 and also to the same point in the projective Hilbert space of \(H\).
Bra–ket notation
Calculations in quantum mechanics make frequent use of linear operators, scalar products, dual spaces and Hermitian conjugation. In order to make such calculations flow smoothly, and to make it unnecessary (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra–ket notation. Although the details of this are beyond the scope of this article, some consequences of this are:
• The expression used to denote a state vector (which corresponds to a pure quantum state) takes the form \(|\psi\rangle\) (where the "\(\psi\)" can be replaced by any other symbols, letters, numbers, or even words). This can be contrasted with the usual mathematical notation, where vectors are usually lower-case Latin letters, and it is clear from the context that they are indeed vectors.
• Dirac defined two kinds of vector, bra and ket, dual to each other.[lower-alpha 6]
• Each ket \(|\psi\rangle\) is uniquely associated with a so-called bra, denoted \(\langle\psi|\), which corresponds to the same physical quantum state. Technically, the bra is the adjoint of the ket. It is an element of the dual space, and related to the ket by the Riesz representation theorem. In a finite-dimensional space with a chosen basis, writing \(|\psi\rangle\) as a column vector, \(\langle\psi|\) is a row vector; to obtain it just take the transpose and entry-wise complex conjugate of \(|\psi\rangle\).
• Scalar products[lower-alpha 7][lower-alpha 8] (also called brackets) are written so as to look like a bra and ket next to each other: \(\langle\phi|\psi\rangle\). (The phrase "bra-ket" is supposed to resemble "bracket".) A short numerical sketch of these conventions follows this list.
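In a finite-dimensional space these conventions reduce to ordinary linear algebra; a sketch (Python with NumPy; the vectors are arbitrary examples) treating a ket as a column vector, its bra as the conjugate transpose, and the bracket as their product:

```python
import numpy as np

ket_psi = np.array([[1 + 1j], [2 - 1j]])  # |psi> as a column vector
bra_psi = ket_psi.conj().T                # <psi|: transpose + complex conjugate

ket_phi = np.array([[0.5j], [1.0]])
bracket = (bra_psi @ ket_phi).item()      # <psi|phi>, a complex scalar
print(bracket)                            # (2.5+1.5j)

# <psi|psi> is real and non-negative: the squared norm of the ket
print((bra_psi @ ket_psi).item().real)    # 7.0
```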
Spin
The angular momentum has the same dimension (M·L²·T⁻¹) as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the group representations of the Lie group SU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative number S that, in units of Planck's reduced constant ħ, is either an integer (0, 1, 2, ...) or a half-integer (1/2, 3/2, 5/2, ...). For a massive particle with spin S, its spin quantum number m always assumes one of the 2S + 1 possible values in the set
\[ \{-S, -S+1, \ldots, +S-1, +S\}. \]
As a consequence, the quantum state of a particle with spin is described by a vector-valued wave function with values in \(\mathbb{C}^{2S+1}\). Equivalently, it is represented by a complex-valued function of four variables: one discrete quantum number variable (for the spin) is added to the usual three continuous variables (for the position in space).
Many-body states and particle statistics
The quantum state of a system of N particles, each potentially with spin, is described by a complex-valued function with four variables per particle, corresponding to 3 spatial coordinates and spin, e.g.
\[ |\psi(\mathbf{r}_1, m_1; \ldots; \mathbf{r}_N, m_N)\rangle . \]
Here, the spin variables \(m_\nu\) assume values from the set
\[ \{-S_\nu, -S_\nu + 1, \ldots, +S_\nu - 1, +S_\nu\}, \]
where \(S_\nu\) is the spin of the νth particle. \(S_\nu = 0\) for a particle that does not exhibit spin.
The treatment of identical particles is very different for bosons (particles with integer spin) versus fermions (particles with half-integer spin). The above N-particle function must either be symmetrized (in the bosonic case) or anti-symmetrized (in the fermionic case) with respect to the particle numbers. If not all N particles are identical, but some of them are, then the function must be (anti)symmetrized separately over the variables corresponding to each group of identical particles, according to its statistics (bosonic or fermionic).
Electrons are fermions with S = 1/2, photons (quanta of light) are bosons with S = 1 (although in the vacuum they are massless and can't be described with Schrödinger mechanics).
When symmetrization or anti-symmetrization is unnecessary, N-particle spaces of states can be obtained simply by tensor products of one-particle spaces, to which we will return later.
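A sketch of this construction for two identical particles (Python with NumPy; the single-particle states and the swap helper are illustrative): np.kron builds the tensor-product state, and explicit (anti)symmetrization yields the bosonic and fermionic combinations:

```python
import numpy as np

def swap(psi, d):
    """Exchange the two particles in a two-particle state of local dimension d."""
    return psi.reshape(d, d).T.reshape(d * d)

d = 2
a = np.array([1, 0], dtype=complex)  # single-particle state |a>
b = np.array([0, 1], dtype=complex)  # single-particle state |b>

product = np.kron(a, b)              # distinguishable-particle state |a>|b>
sym = product + swap(product, d)     # bosonic: symmetric under exchange
asym = product - swap(product, d)    # fermionic: antisymmetric under exchange
sym /= np.linalg.norm(sym)
asym /= np.linalg.norm(asym)

print(np.allclose(swap(sym, d), sym))     # True: unchanged by exchange
print(np.allclose(swap(asym, d), -asym))  # True: picks up a minus sign
```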
Basis states of one-particle systems
As with any Hilbert space, if a basis is chosen for the Hilbert space of a system, then any ket can be expanded as a linear combination of those basis elements. Symbolically, given basis kets \(|k_i\rangle\), any ket \(|\psi\rangle\) can be written
\[ |\psi\rangle = \sum_i c_i |k_i\rangle , \]
where ci are complex numbers. In physical terms, this is described by saying that \(|\psi\rangle\) has been expressed as a quantum superposition of the states \(|k_i\rangle\). If the basis kets are chosen to be orthonormal (as is often the case), then \(c_i = \langle k_i | \psi \rangle\).
One property worth noting is that the normalized states \(|\psi\rangle\) are characterized by
\[ \langle \psi | \psi \rangle = 1, \]
and for orthonormal basis this translates to
\[ \sum_i |c_i|^2 = 1. \]
Expansions of this sort play an important role in measurement in quantum mechanics. In particular, if the \(|k_i\rangle\) are eigenstates (with eigenvalues ki) of an observable, and that observable is measured on the normalized state \(|\psi\rangle\), then the probability that the result of the measurement is ki is \(|c_i|^2\). (The normalization condition above mandates that the total sum of probabilities is equal to one.)
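A short sketch of this measurement rule (Python with NumPy; the observable and state are arbitrary examples): expand a normalized state in the orthonormal eigenbasis of a Hermitian matrix and read off \(|c_i|^2\) as outcome probabilities:

```python
import numpy as np

A = np.array([[1, 1j],
              [-1j, 1]], dtype=complex)     # a Hermitian observable
eigvals, eigvecs = np.linalg.eigh(A)        # columns of eigvecs are eigenstates

psi = np.array([0.6, 0.8j], dtype=complex)  # a normalized state
c = eigvecs.conj().T @ psi                  # coefficients c_i = <k_i|psi>

probs = np.abs(c) ** 2                      # Born rule: P(outcome k_i) = |c_i|^2
print(eigvals, probs, probs.sum())          # the probabilities sum to 1
```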
A particularly important example is the position basis, which is the basis consisting of eigenstates \(|\mathbf{r}\rangle\) with eigenvalues \(\mathbf{r}\) of the observable which corresponds to measuring position.[lower-alpha 9] If these eigenstates are nondegenerate (for example, if the system is a single, spinless particle), then any ket \(|\psi\rangle\) is associated with a complex-valued function of three-dimensional space
\[ \psi(\mathbf{r}) \equiv \langle \mathbf{r} | \psi \rangle .\][lower-alpha 11]
This function is called the wave function corresponding to \(|\psi\rangle\). Similarly to the discrete case above, the probability density of the particle being found at position \(\mathbf{r}\) is \(|\psi(\mathbf{r})|^2\), and the normalized states have
\[ \int \mathrm{d}^3\mathbf{r}\, |\psi(\mathbf{r})|^2 = 1 .\]
In terms of the continuous set of position basis kets \(|\mathbf{r}\rangle\), the state \(|\psi\rangle\) is:
\[ |\psi\rangle = \int \mathrm{d}^3\mathbf{r}\, \psi(\mathbf{r})\, |\mathbf{r}\rangle .\]
Superposition of pure states
As mentioned above, quantum states may be superposed. If \(|\alpha\rangle\) and \(|\beta\rangle\) are two kets corresponding to quantum states, the ket
\[ c_\alpha |\alpha\rangle + c_\beta |\beta\rangle \]
is a different quantum state (possibly not normalized). Note that both the amplitudes and phases (arguments) of \(c_\alpha\) and \(c_\beta\) will influence the resulting quantum state. In other words, for example, even though \(|\psi\rangle\) and \(e^{i\theta}|\psi\rangle\) (for real θ) correspond to the same physical quantum state, they are not interchangeable, since \(|\phi\rangle + |\psi\rangle\) and \(|\phi\rangle + e^{i\theta}|\psi\rangle\) will not correspond to the same physical state for all choices of \(e^{i\theta}\). However, \(|\phi\rangle + |\psi\rangle\) and \(e^{i\theta}(|\phi\rangle + |\psi\rangle)\) will correspond to the same physical state. This is sometimes described by saying that "global" phase factors are unphysical, but "relative" phase factors are physical and important.
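A numerical check of the phase statement above (Python with NumPy; the measurement direction is an arbitrary choice): a global phase leaves every outcome probability unchanged, while a relative phase inside a superposition does not:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
theta = 0.7

psi = (up + down) / np.sqrt(2)
with_global_phase = np.exp(1j * theta) * psi                        # e^{i theta}|psi>
with_relative_phase = (up + np.exp(1j * theta) * down) / np.sqrt(2)

# Probability of the outcome along |x+> = (|up> + |down>)/sqrt(2)
x_plus = (up + down) / np.sqrt(2)
for state in (psi, with_global_phase, with_relative_phase):
    print(abs(np.vdot(x_plus, state)) ** 2)
# psi and with_global_phase both give 1.0;
# with_relative_phase gives cos^2(theta/2), about 0.88 here
```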
One practical example of superposition is the double-slit experiment, in which superposition leads to quantum interference. The photon state is a superposition of two different states, one corresponding to the photon travel through the left slit, and the other corresponding to travel through the right slit. The relative phase of those two states depends on the difference of the distances from the two slits. Depending on that phase, the interference is constructive at some locations and destructive in others, creating the interference pattern. We may say that superposed states are in coherent superposition, by analogy with coherence in other wave phenomena.
Another example of the importance of relative phase in quantum superposition is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states.
Mixed states
A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see quantum statistical mechanics). Mixed states inevitably arise from pure states when, for a composite quantum system \(H_1 \otimes H_2\) with an entangled state on it, the part \(H_2\) is inaccessible to the observer. The state of the part \(H_1\) is then expressed as the partial trace over \(H_2\).
A mixed state cannot be described with a single ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted ρ. Note that density matrices can describe both mixed and pure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert space \(H\) can always be represented as the partial trace of a pure quantum state (called a purification) on a larger bipartite system \(H \otimes H'\) for a sufficiently large Hilbert space \(H'\).
The density matrix describing a mixed state is defined to be an operator of the form
\[ \rho = \sum_s p_s |\psi_s\rangle \langle \psi_s| , \]
where \(p_s\) is the fraction of the ensemble in each pure state \(|\psi_s\rangle\). The density matrix can be thought of as a way of using the one-particle formalism to describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in.
A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of ρ² is equal to 1 if the state is pure, and less than 1 if the state is mixed.[lower-alpha 12][13] Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state.
The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable A is given by
\[ \langle A \rangle = \sum_s p_s \langle \psi_s | A | \psi_s \rangle = \sum_s \sum_i p_s\, a_i\, |\langle \alpha_i | \psi_s \rangle|^2 = \operatorname{tr}(\rho A), \]
where \(|\alpha_i\rangle\) and \(a_i\) are eigenkets and eigenvalues, respectively, for the operator A, and "tr" denotes trace. It is important to note that two types of averaging are occurring, one being a weighted quantum superposition over the basis kets of the pure states, and the other being a statistical (said incoherent) average with the probabilities \(p_s\) of those states.
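A sketch tying these density-matrix rules together (Python with NumPy; the ensemble weights and observable are arbitrary examples): normalization tr ρ = 1, the purity test on tr ρ², and the ensemble average tr(ρA):

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# rho = sum_s p_s |psi_s><psi_s| : 70% |up>, 30% |+>
rho = 0.7 * np.outer(up, up.conj()) + 0.3 * np.outer(plus, plus.conj())

print(np.trace(rho).real)        # 1.0: properly normalized
print(np.trace(rho @ rho).real)  # 0.79 < 1: the state is mixed

A = np.array([[1, 0],
              [0, -1]], dtype=complex)  # an observable (Pauli-Z)
print(np.trace(rho @ A).real)    # <A> = tr(rho A) = 0.7*1 + 0.3*0 = 0.7
```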
According to Eugene Wigner,[14] the concept of mixture was put forward by Lev Landau.[15][16]:3841
Mathematical generalizations
States can be formulated in terms of observables, rather than as vectors in a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details.
Notes
1. Sometimes written "|ψ>"; see angle brackets.
2. To avoid misunderstandings: Here we mean that Q(t) and P(t) are measured in the same state, but not in the same run of the experiment.
3. Dirac (1958),[11] p. 4: "If a system is small, we cannot observe it without producing a serious disturbance."
4. i.e. separated by a zero delay. One can think of it as stopping the time, then making the two measurements one after the other, then resuming the time. Thus, the measurements occurred at the same time, but it is still possible to tell which was first.
5. For concreteness' sake, suppose that A = Q(t1) and B = P(t2) in the above example, with t2 > t1 > 0.
6. Dirac (1958),[11] p. 20: "The bra vectors, as they have been here introduced, are quite a different kind of vector from the kets, and so far there is no connexion between them except for the existence of a scalar product of a bra and a ket."
7. Dirac (1958),[11] p. 19: "A scalar product B|A now appears as a complete bracket expression."
8. Gottfried (2013),[12] p. 31: "to define the scalar products as being between bras and kets."
9. Note that a state is a superposition of different basis states , so and are elements of the same Hilbert space. A particle in state is located precisely at position , while a particle in state can be found at different positions with corresponding probabilities.
10. Landau (1965).
11. In the continuous case, the basis kets \(|\mathbf{r}\rangle\) are not unit kets (unlike the state \(|\psi\rangle\)): they are normalized according to \(\langle \mathbf{r} | \mathbf{r}' \rangle = \delta(\mathbf{r} - \mathbf{r}')\), i.e. a Dirac delta function. See Landau (1965),[lower-alpha 10] p. 17: "∫ Ψf′Ψf* dq = δ(f′ − f)" (the left side corresponds to ⟨f|f′⟩), "∫ δ(f′ − f) df′ = 1".
12. Note that this criterion works when the density matrix is normalized so that the trace of ρ is 1, as it is for the standard definition given in this section. Occasionally a density matrix will be normalized differently, in which case the criterion is \(\operatorname{tr}(\rho^2) = (\operatorname{tr}\rho)^2\) for a pure state.
References
1. Weinberg, S. (2002), The Quantum Theory of Fields, I, Cambridge University Press, ISBN 978-0-521-55001-7
2. Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 978-0-13-111892-8
3. Holevo, Alexander S. (2001). Statistical Structure of Quantum Theory. Lecture Notes in Physics. Springer. ISBN 3-540-42082-7. OCLC 318268606.
4. Peres, Asher (1995). Quantum Theory: Concepts and Methods. Kluwer Academic Publishers. ISBN 0-7923-2549-4.
5. Rieffel, Eleanor G.; Polak, Wolfgang H. (2011-03-04). Quantum Computing: A Gentle Introduction. MIT Press. ISBN 978-0-262-01506-6.
6. Kirkpatrick, K. A. (February 2006). "The Schrödinger-HJW Theorem". Foundations of Physics Letters. 19 (1): 95–102. arXiv:quant-ph/0305068. doi:10.1007/s10702-006-1852-1. ISSN 0894-9875. S2CID 15995449.
7. Statistical Mixture of States
8. "The Density Matrix". Archived from the original on January 15, 2012. Retrieved January 24, 2012.
9. Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics'. Also translated as 'The physical content of quantum kinematics and mechanics' at pp. 62–84 by editors John Wheeler and Wojciech Zurek, in Quantum Theory and Measurement (1983), Princeton University Press, Princeton NJ.
10. Bohr, N. (1927/1928). The quantum postulate and the recent development of atomic theory, Nature Supplement April 14 1928, 121: 580–590.
11. Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford UK.
12. Gottfried, Kurt; Yan, Tung-Mow (2003). Quantum Mechanics: Fundamentals (2nd, illustrated ed.). Springer. ISBN 9780387955766.
13. Blum, Density matrix theory and applications, page 39.
14. Eugene Wigner (1962). "Remarks on the mind-body question" (PDF). In I.J. Good (ed.). The Scientist Speculates. London: Heinemann. pp. 284–302. Footnote 13 on p.180
15. Lev Landau (1927). "Das Dämpfungsproblem in der Wellenmechanik (The Damping Problem in Wave Mechanics)". Zeitschrift für Physik. 45 (5–6): 430–441. Bibcode:1927ZPhy...45..430L. doi:10.1007/bf01343064. S2CID 125732617. English translation reprinted in: D. Ter Haar, ed. (1965). Collected papers of L.D. Landau. Oxford: Pergamon Press. p.818
16. Lev Landau; Evgeny Lifshitz (1965). Quantum Mechanics Non-Relativistic Theory (PDF). Course of Theoretical Physics. 3 (2nd ed.). London: Pergamon Press.
Further reading
The concept of quantum states, in particular the content of the section Formalism in quantum physics above, is covered in most standard textbooks on quantum mechanics.
For a discussion of conceptual aspects and a comparison with classical states, see:
For a more detailed coverage of mathematical aspects, see:
• Bratteli, Ola; Robinson, Derek W (1987). Operator Algebras and Quantum Statistical Mechanics 1. Springer. ISBN 978-3-540-17093-8. 2nd edition. In particular, see Sec. 2.3.
For a discussion of purifications of mixed quantum states, see Chapter 2 of John Preskill's lecture notes for Physics 219 at Caltech.
For a discussion of geometric aspects see:
|
0001d7baf1a5daa7 | Chemistry LibreTexts
8.7: Spin-Orbitals and Electron Configurations
The wavefunctions obtained by solving the hydrogen atom Schrödinger equation are associated with orbital angular motion and are often called spatial wavefunctions, to differentiate them from the spin wavefunctions. The complete wavefunction for an electron in a hydrogen atom must contain both the spatial and spin components. We refer to the complete one-electron orbital as a spin-orbital and a general form for this orbital is
\[ | \varphi _{n,l,m_l , m_s} \rangle = | \psi _{n,l,m_l} (r, \theta , \psi ) \rangle | \sigma ^{m_s}_s \rangle \label {8.7.1}\]
A spin-orbital for an electron in the \(2p_z\) orbital with \(m_s = + \frac {1}{2} \), for example, could be written as
\[ | \psi _{2p_z \alpha} \rangle = | \psi _{2,1,0} (r, \theta , \psi ) \rangle \, | \alpha \rangle \label{8.7.2}\]
A common method of depicting electrons in spin-orbitals arranged by energy is shown in Figure \(\PageIndex{1}\), which gives one representation of the ground state electron configuration of the hydrogen atom.
Figure \(\PageIndex{1}\): Electron configuration of a ground-state hydrogen atom depicted on an energy-level diagram. The electron is represented by an arrow in the 1s orbital.
On the energy level diagram in Figure \(\PageIndex{1}\), the horizontal lines labeled 1s, 2s, 2p, etc. denote the spatial parts of the orbitals, and an arrow pointing up for spin \(\alpha\) and down for spin \(\beta \) denotes the spin part of the wavefunction.
An alternative shorthand notation for electron configuration is the familiar form \(1s^1\) to denote an electron in the 1s orbital. Note that this shorthand version contains information only about the spatial wavefunction; information about spin is implied. Two electrons in the same orbital have spins \(\alpha\) and \(\beta\), e.g. \(1s^2\), and a single electron in an orbital is assumed to have spin \(\alpha\). Hydrogen atoms can absorb energy and the electron can be promoted to higher-energy spin-orbitals. Examples of such excited-state configurations are \(2p^1\), \(3d^1\), etc.
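To make the bookkeeping concrete, here is a small Python sketch (an illustration of ours, not part of the original text) that enumerates the spin-orbitals \((n, l, m_l, m_s)\) of a hydrogenic shell; each spatial orbital \((n, l, m_l)\) pairs with the two spin states \(m_s = \pm \frac{1}{2}\):

```python
# Enumerate hydrogenic spin-orbitals (n, l, m_l, m_s) for a given shell n.
def spin_orbitals(n):
    orbs = []
    for l in range(n):                    # l = 0, 1, ..., n-1
        for m_l in range(-l, l + 1):      # 2l + 1 values of m_l
            for m_s in (+0.5, -0.5):      # spin alpha and beta
                orbs.append((n, l, m_l, m_s))
    return orbs

for n in (1, 2, 3):
    print(f"n = {n}: {len(spin_orbitals(n))} spin-orbitals")
# n = 1: 2 spin-orbitals; n = 2: 8; n = 3: 18
```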
Contributors and Attributions |
1f08c24b9a6f05d5 | Spin projected unrestricted Hartree–Fock ground states for harmonic quantum dots.
U. De Giovannini F. Cavaliere Dipartimento di Fisica, Università di Genova, LAMIA CNR–INFM, Via Dodecaneso 33, 16146 Genova, Italy R. Cenni Istituto Nazionale di Fisica Nucleare – Sez. Genova
Dipartimento di Fisica, Università di Genova, Via Dodecaneso 33, 16146 Genova, Italy
M. Sassetti Dipartimento di Fisica, Università di Genova, LAMIA CNR–INFM, Via Dodecaneso 33, 16146 Genova, Italy B. Kramer I. Institut für Theoretische Physik, Universität Hamburg, Jungiusstraße 9 20355 Hamburg,
and Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany
We report results for the ground state energies and wave functions obtained by projecting spatially unrestricted Hartree Fock states to eigenstates of the total spin and the angular momentum for harmonic quantum dots with interacting electrons including a magnetic field. The ground states with the correct spatial and spin symmetries have lower energies than those obtained by the unrestricted method. The chemical potential as a function of a perpendicular magnetic field is obtained. Signature of an intrinsic spin blockade effect is found.
73.23.Hk, 73.63.Kv
I Introduction
Systems like atoms, metal clusters, trapped bosons and quantum dots show several universal features. reimann For example, strongly interacting electrons in quantum dots arrange themselves in a rotating Wigner molecule. reimann Rotating boson molecules have been predicted to exist in ion traps. RBM Furthermore, symmetric potentials can induce a shell structure in atoms, bohrmottelson metal clusters, clusterrmp and quantum dots. kouwenhoven ; reimann In the latter, signatures of shell structure have been experimentally probed, tarucha ; taruchasci leading to Hund’s rules for the total spin of the electron ground state. The spin in quantum dots kouwenrev also affects the electron transport. It can lead to spin blockade effects weinmann ; weinmann2 and negative differential conductance in nonlinear transport, weinmann ; cavaprlb ; RHCSK2006 ; ciorgaapl ; datta and it induces periodic modulations of the positions of the Coulomb peaks in the linear conductance as a function of an applied magnetic field. roggesb ; hawrylak3 ; hawrylak4 ; RHCSK2006 More recently, the effect of the spatial distribution of the spins on the Kondo phenomenon has been probed. kondorog ; roggekondo ; kondokou ; kondokel
Electron and spin states of quantum dots have been theoretically studied with various techniques. reimann For small electron numbers , exact diagonalization (ED), dineykhan ; pfannkuche ; merkt ; mikhailov2 ; mikhailov ; haw1 ; haw2 ; wojs ; szafran ; maksymB ; kyriakidis ; nishi configuration interaction (CI), wensauer ; rontani and stochastic variational methods varga allow for determining ground and excited state energies and their quantum numbers with high accuracy. For larger , the size of the many-body basis set increases exponentially. With presently available computational technology, reliably converged “exact” results can be obtained only for electron numbers up to electrons. rontani
For , Quantum Monte Carlo (QMC) pederiva ; egger ; harju ; bolton ; ghosalnp ; ghosal ; pederiva2 ; esa methods have been used. They can provide accurate estimates for ground and excited states energies. With these techniques, the shell structure, Hund’s rules, Wigner crystallization and the occurrence of “magic” angular momenta have been investigated. mikhailov ; harju ; maksym ; ruan ; filinov ; bao ; ghosalnp ; ghosal Most of the results for higher particle numbers have been restricted to zero magnetic field. It is believed that QMC provides better estimates for the energies of the ground states for larger electron numbers as compared to the “exact” methods.
For larger and/or in the presence of magnetic field , methods like Hartree Fock (HF) landmanprl ; landman3 ; reusch0 ; reusch ; lipparini ; hawrylak1 ; hawrylak2 and the density functional theory manninen ; hirose ; gattobigio ; harju2 ; esa1 ; esa2 ; esa3 have been used. Generally, these seem to provide less accurate estimates for the ground states which also can have unphysical broken symmetries due to incomplete ansatz wave functions. For instance, neglecting correlations, the straightforward HF method starts from a single Slater determinant as a variational many-body wave function which not necessarily is an eigenstate of the total spin. szabo Spatially unrestricted HF methods (UHF)landmanprl ; reusch0 systematically use symmetry breaking in order to obtain better estimates for the ground state energy. This may lead to wrong results for the total angular momentum and the total spin. For instance, UHF calculations sometimes seem to fail predicting the total spin resulting from Hund’s rule, in contradiction to the more accurate methods. Violations of Hund’s rules for relatively weak Coulomb interactions have been reported landmanprl for .
Projection techniques, pioneered in the 1960s, loewdin1 ; ring ; loewdin2 can be applied for introducing the correct spatial symmetries. In quantum dots, they have been used for obtaining wave functions corresponding to specific angular momenta. landman2 ; landman4 ; landman5 ; koonin ; YL2002 Recently, the random phase approximation has been used to restore the rotational symmetry of wave functions obtained by UHF. serrarpa
Restoring the spin symmetry has received much less attention, and has been used only for very few (up to ) electrons. landman2 For , the spin singlet symmetry has been approximately restored with the Lipkin-Nogami approach. serrarpa Larger have seldom been treated with the projection technique. degio In view of the recent discussion of spin effects in the transport spectra of quantum dots, information about the total spin is, however, necessary. Additionally, by restoring the symmetries correlations are introduced into the ground state wave function that are absent in a single UHF Slater determinant. This leads to a better estimate for the ground state energy.
In this paper, we apply a projection technique to the states obtained by UHF for estimating the ground state energy of a circular quantum dot with electrons, including a magnetic field. Starting from an UHF Slater determinant with broken rotational symmetry, a first estimate for the ground state energy and the wavefunction is obtained. Then, both the total spin and the angular momentum of the UHF variational wave function are introduced by projecting on the corresponding subspaces. We show that, after restoring all of the symmetries, the energies and the wave functions are improved and show physical features which are not included in the UHF method.
We discuss the efficiency of the projected HF method (PHF) by comparing our results with those of ED, CI, and QMC. We determine the ground state energies as a function of a magnetic field, and obtain the chemical potential that can be measured in transport experiments. Our main findings are:
(i) By projecting the UHF wave functions on the total angular momentum and on the total spin , the ground state energy is successively lowered. The correction due to the spin projection is generally smaller than the one associated with the angular momentum, but still necessary for determining the correct ground state and its quantum numbers.
(ii) The quantum numbers and are correctly reproduced, if the strength of the interaction is not too large. Especially, for , the first Hund’s rule — namely that is maximized for open shells — is recovered for electrons, except for , discussed below. Hund’s rule has been claimed earlier to be violated on the basis of UHF results landmanprl .
(iii) By comparing the results with CI and QMC, we estimate a correlation energy, defined as the difference between PHF and “exact” energies, of about 2% of the ground state energy.
(iv) With increasing interaction strength the correlation energy decreases. Nevertheless, for stronger interaction, and larger , the PHF ground state tends to be spin polarized in contrast to more exact results. This is consistent with earlier conjectures, namely that UHF tends to overestimate the influence of the exchange. reusch0
(v) As a function of , several crossovers between ground states with different total spins and angular momenta are found that are absent in UHF. These are associated with characteristic changes in the electron densities. The onset of the singlet–triplet transition hawrylak1 occurring for dot filling factor and even is recovered. Features that lead to an intrinsic spin blockade are predicted.
In the next Section, details of the UHF method are outlined. The consequences of the broken symmetries are described and the projection technique is discussed, with special emphasis on the total electron spin. In Sect. III results for zero and non-zero magnetic field are presented and discussed.
Ii Model and method
ii.1 The model
Consider N electrons in a two-dimensional (2D) quantum dot confined by an in-plane harmonic potential and subject to a perpendicular magnetic field B. The Hamiltonian is (ħ = 1)

H = Σ_i [ (p_i + e A(r_i))² / (2m) + (m ω₀² / 2) r_i² + g μ_B B s_{z,i} ] + Σ_{i<j} U(|r_i − r_j|) ,   (1)

with r_i = (r_i, θ_i) the 2D polar coordinates, U(r) = e² / (4π ε₀ ε_r r) the Coulomb interaction potential, m the effective electron mass, ω₀ the confinement frequency, g the effective g-factor and μ_B the Bohr magneton. The z-component of the i-th spin is s_{z,i}, e the electron charge and ε₀ (ε_r) the vacuum (relative) dielectric constant. The single-particle term in (1) yields the Fock–Darwin fd (FD) spectrum

ε_{n,m_z} = Ω (2n + |m_z| + 1) − (ω_c / 2) m_z
with eigenfunctions , where () is the spinor corresponding to () and fd
Here, n and m_z are principal and angular momentum quantum numbers, l₀ the characteristic oscillator length and L_n^{|m_z|} the generalized Laguerre polynomial. The cyclotron frequency ω_c = eB/m, and the effective confinement frequency Ω = (ω₀² + ω_c²/4)^{1/2} are introduced.
At B = 0, expressing energies in units of ω₀ and lengths in units of l₀, the Hamiltonian (1) depends only on the dimensionless parameter λ = l₀ / a_B, with a_B the effective Bohr radius,
which represents the relative strength of the interaction.
ii.2 The unrestricted Hartree-Fock method
In HF the Schrödinger equation for a given value of total is solved by using orbitals
with () denoting spin up (down) and is the number of electrons with spin . They are obtained as the solutions of the coupled integro–differential equations
where is the HF density
For a given , an initial guess for the orbitals with is made. Then, HF densities are evaluated and Eqs. (7) are solved to obtain updated orbitals. This is iterated until self-consistency is achieved. The many body wave function is a single Slater determinant, eigenfunction of ,
that corresponds to a stationary point of the UHF energy szabo ; ring
In order to numerically solve (7) we expand the orbitals in the FD basis (Eq. (4))
where are complex coefficients. The truncation of the basis to states is necessary in order to numerically implement the procedure. We have used the lowest FD states for each value of . This led to fair convergence (see Sec. II.5). Introducing the density matrices
connected to (8) by
it is possible to show that equation (7) is equivalent to the coupled nonlinear Pople–Nesbet eigenvalue problem
Here are the Fock matrices,
and the two-body interaction matrix elements
can be evaluated analytically. rontani The energy (10) is then
We use spatially unrestricted initial conditions landmanprl ; landman2 ; reusch0 with a random distribution of initial . This implies initial orbitals without circular symmetry, and leads to better energy estimates. However, symmetry broken Slater determinants are in general neither eigenfunctions of the total angular momentum nor of (total spin ). szabo
The most general UHF solution is a linear superposition of eigenfunctions of and
For given and , many initial conditions are used. Correspondingly, several stationary points are found. They form a sequence () with energies . For a given , the process is iterated until the lowest is found. The UHF ground state is defined as,
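The self-consistent loop described above can be summarized in code. The following is a schematic toy version (our sketch, not the authors' implementation): a random symmetric one-body matrix h and a small random two-body tensor V stand in for the actual Fock–Darwin matrix elements, and the basis size M and occupations (n_up, n_dn) are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_up, n_dn = 6, 2, 2
h = rng.normal(size=(M, M)); h = 0.5 * (h + h.T)      # toy one-body matrix
V = 0.1 * rng.normal(size=(M, M, M, M))               # toy <pr|v|qs>
V = 0.5 * (V + V.transpose(2, 3, 0, 1))               # minimal symmetries so the
V = 0.5 * (V + V.transpose(1, 0, 3, 2))               # Fock matrices stay symmetric

def density(C, n):                                    # D = C_occ C_occ^T
    return C[:, :n] @ C[:, :n].T

# spatially unrestricted (random) initial orbitals, one set per spin species
D_up = density(np.linalg.qr(rng.normal(size=(M, M)))[0], n_up)
D_dn = density(np.linalg.qr(rng.normal(size=(M, M)))[0], n_dn)

for it in range(500):                                 # Pople-Nesbet-style iteration
    J = np.einsum('prqs,rs->pq', V, D_up + D_dn)      # direct (Hartree) term
    K_up = np.einsum('prsq,rs->pq', V, D_up)          # exchange, spin up
    K_dn = np.einsum('prsq,rs->pq', V, D_dn)          # exchange, spin down
    F_up, F_dn = h + J - K_up, h + J - K_dn           # spin-resolved Fock matrices
    _, C_up = np.linalg.eigh(F_up)
    _, C_dn = np.linalg.eigh(F_dn)
    new_up = 0.5 * D_up + 0.5 * density(C_up, n_up)   # damped density update
    new_dn = 0.5 * D_dn + 0.5 * density(C_dn, n_dn)
    if np.abs(new_up - D_up).max() + np.abs(new_dn - D_dn).max() < 1e-10:
        break                                         # self-consistency reached
    D_up, D_dn = new_up, new_dn

E = np.einsum('pq,pq->', h, D_up + D_dn) + 0.5 * (
    np.einsum('pq,pq->', J - K_up, D_up) + np.einsum('pq,pq->', J - K_dn, D_dn))
print('toy UHF energy:', E)
```

In the actual calculation, many random initial conditions are each iterated to self-consistency, and the lowest stationary point is taken as the UHF ground state.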
ii.3 Spin and angular momentum projection
In order to obtain states with specific and we act on the UHF Slater determinant with operators ring and which project on and , respectively. They satisfy commutation rules . Their simultaneous action yields an eigenfunction of and , . The corresponding energy is (Appendix A)
The spin projector
annihilates all the components of (9) with spin different from loewdin1 Its action is written as loewdin1 ; projS1
where and
are the Sanibel coefficients. projS1 ; ruitz The term is the sum of all
Slater determinants obtained by interchanging, without repetition, all the possible spinor pairs with opposite spins in . By definition .
For example, consider , (),
This state is a linear superposition of all spin eigenstates with . The spin projection selects a specific spin
Summing up equations (24)—(26) results in the original determinant , since .
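The action of the spin projector can be checked numerically on small systems. Below is a minimal sketch (ours) that builds S² for N spin-1/2 particles and applies the equivalent Löwdin product form of the projector, P_S = Π_{S′≠S} [S² − S′(S′+1)] / [S(S+1) − S′(S′+1)], to a symmetry-broken product state with S_z = 0:

```python
import numpy as np
from functools import reduce

def one_site(op, site, N):                 # embed a single-spin operator
    mats = [np.eye(2, dtype=complex)] * N
    mats[site] = op
    return reduce(np.kron, mats)

N = 4
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = sum(one_site(sx, i, N) for i in range(N))
Sy = sum(one_site(sy, i, N) for i in range(N))
Sz = sum(one_site(sz, i, N) for i in range(N))
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

def project(S, psi):                       # Loewdin product projector onto spin S
    out = psi.astype(complex)
    for Sp in [s / 2 for s in range(N % 2, N + 1, 2)]:   # allowed total spins
        if Sp != S:
            out = (S2 @ out - Sp * (Sp + 1) * out) / (S * (S + 1) - Sp * (Sp + 1))
    return out

psi = np.zeros(2**N); psi[0b0101] = 1.0    # "up down up down": S_z = 0, S not sharp
phi = project(0, psi)                      # keep only the S = 0 component
phi /= np.linalg.norm(phi)
print(np.allclose(S2 @ phi, 0 * phi))      # True: the projected state has S = 0
```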
The projector on is given by ring
where acts on rotating by around the axis all spatial parts of the orbitals
We denote this by . Using (20) and (27) we get
The projected state (28) is a sum of many Slater determinants (Appendix A). This indicates that correlation has been introduced by the projection.
The main computational effort is due to the evaluation of two-body matrix elements in (18). Projecting an -particle UHF state with to a state with total spin requires to evaluate terms,
For even (odd), the worst case is ().
For the angular momentum projection we use a fast Fourier transform (FFT) and partition the integration interval into points; is determined by the angular momentum range for which good convergence (relative error ) of the PHF energies is required. We have checked that for is needed. Using FFT, all energy values for given and are simultaneously available, which considerably accelerates the calculation with respect to performing distinct computations for each value of . The total number of two-body matrix elements is .
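A toy version of this FFT trick (our sketch, with made-up angular-momentum weights purely for illustration): sampling the rotated overlap h(γ) = ⟨Φ|e^{iγL̂}|Φ⟩ on a uniform angular grid, a single FFT returns the projected weights of all L components at once.

```python
import numpy as np

n_gamma = 64
gammas = 2 * np.pi * np.arange(n_gamma) / n_gamma

# toy overlap: |Phi> contains L = -1, 0, 2 components with known weights
weights = {-1: 0.2, 0: 0.5, 2: 0.3}
h = sum(w * np.exp(1j * gammas * L) for L, w in weights.items())

proj = np.fft.fft(h) / n_gamma      # <Phi|P_L|Phi> for every L on the grid
for L in (-1, 0, 1, 2):
    print(L, proj[L].real)          # recovers 0.2, 0.5, 0.0, 0.3
```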
N     two-body matrix elements (PHF)     other methods
2     512
4     1536                               19774 szafran
6     5120                               661300 rontani
8     17920
10    64512
12    236544
14    878592
16    3294720
Table 1: Numbers of two-body matrix elements used for PHF, required to evaluate for even , , , and (see text). Last column: some numbers used in other methods.
Table 1 shows for the case of even and in the worst case . This is the value used in the paper. Although quickly increases as a function of , especially because (29) grows exponentially for large , it still compares favorably with respect to exact methods. For example, previously reported ED calculations szafran used a basis of 19774 Slater determinants for , , , and . CI calculations rontani for , , , need 661300 configurational state functions (linear superposition of Slater determinants).
ii.4 Determining the PHF ground state
To determine the ground state, it is generally not sufficient to project only the UHF ground state. If several UHF solutions (Sec. II.2) are almost degenerate, all of them have to be projected. The PHF ground state is then defined by
One can show that projecting arbitrary UHF Slater determinants on and always leads to energies that are not lower than the exact ground state energy, thus satisfying the variational principle.
As an example, we consider for , with confinement energy meV. We assume the standard GaAs parameters, and . The confinement corresponds to . For each , we have used more than 500 UHF initial conditions. We found two solutions with , and one for and . The corresponding energies are given in Tab. 2.
Sz    E_UHF     E_PHF     (L, S)
0     19.612    19.356    (0, 0)
                19.404    (1, 1)
                19.641    (0, 1)
                19.515    (2, 0)
1     19.608    19.342    (0, 1)
                19.394    (1, 1)
2     19.581    19.516    (2, 2)
Table 2: PHF for for a GaAs quantum dot with , meV, , , ; 2nd column: UHF energy (units ); 3rd column: PHF energy (units ); 4th column: PHF quantum numbers. Ground states are indicated by .
The UHF ground state corresponds to . Applying the projection to each UHF state, we obtain with different quantum numbers and : the lowest two are given. The PHF ground state corresponds to and . The latter is obtained by projecting the energetically higher UHF minimum with .
ii.5 Some comments about errors
The major systematic error of the UHF approximation is the neglect of correlations. By projecting the Slater determinant onto fixed angular momentum and spin, PHF attempts to correct for these effects. A second systematic effect is due to the uncertainty whether the self-consistent HF procedure has converged towards the absolute minimum of the energy.
In determining the UHF ground state energies, we have checked that the convergence with respect to the size of the basis set is better than .
For getting insight into the above systematic effects one can start from wave functions with the same but originating from UHF states with different . They should be degenerate at . In the example of Tab. 2, these are the pairs , and , . Their energetic differences are and , respectively. This corresponds to a relative uncertainty of . Similar estimates for the “degeneracy error” are obtained from data for different and . We attribute the degeneracy error mainly to UHF: different UHF states in different sectors approximate the true states with different precision. Therefore, their projection on the same sector does not yield exactly degenerate states.
By comparing our results with other works (see below), the PHF ground state energies for remain about 2% higher than those obtained with ED and QMC. This can be attributed to correlations beyond those introduced by the projection. This is also the limiting factor for the ground state quantum numbers in the regime and where too high polarizations are obtained.
When several PHF energies are almost degenerate, one can further improve the ground state: a linear superposition of the almost degenerate states may result in a further lowering of the energy. Here, we have not systematically investigated this effect.
Iii Results
iii.1 Zero magnetic field
iii.1.1 Ground state energies
N    λ      E_PHF            L      S          E_CI/DMC         L      S
2    1.89   3.817            0      0          3.649            0      0
2    2      3.885            0      0          3.7295           0      0
2    4      4.983            0      0          4.8502           0      0
3    1.89   8.154            1      1/2        7.978            1      1/2
3    2      8.337            1      1/2        8.1671           1      1/2
3    4      11.131           0      3/2        11.043           1      1/2
4    1.89   13.554           0      1          13.266           0      1
4    2      13.899           0      1          13.626           0      1
4    4      19.330           0      1          19.035           0      1
5    1.89   20.264           1      1/2        19.764           1      1/2
5    2      20.811           1      1/2        20.33            1      1/2
5    4      29.501           1      1/2        28.94            1      1/2
6    1.89   27.905           0      0          27.143           0      0
6    2      28.703           0      0          27.98            0      0
6    4      41.187           0      3          40.45            0      0
7    1.89   36.627           2      1/2        35.836           2      1/2
7    2      37.698           2      1/2
7    4      54.497 (54.68)   0 (2)  5/2 (1/2)  53.726           2      1/2
8    1.89   46.260           0      1          45.321           0      1
8    2      47.659           0      1          47.14, 46.679    0      1
8    4      69.479           0      4          70.48            0      1
9    1.89   56.853           0      3/2        55.643           0      3/2
10   1.89   68.245           0      0          66.8785          2      1
10   1.89   (68.283)         2      1          (66.8789)        0      0
11   1.89   80.444           0      1/2        78.835           0      1/2
12   1.89   93.661           0      0          91.556           0      0
Table 3: Ground state energies from PHF for and , , , with corresponding , (, and ) together with results from CI rontani , and DMC (Ref. pederiva, for , Ref. ghosal, for ). All energies are in units .
Table 3 summarizes our results for the ground state energies at , for and , , . Results obtained with Diffusion Monte Carlo (DMC) pederiva ; ghosal and CI rontani are included.
For , angular momenta and total spins of the ground states obtained by PHF agree with DMC and CI. The total spin fulfills Hund’s first rule: a singlet state for the filled shells (), a triplet for and for . Only for , Hund’s rule is not fulfilled since we find instead of . However, here the degeneracy error is , larger than the energy distance between the ground and the first excited state. Also DMC pederiva predicts an extremely small energy gap between the singlet and the triplet, though it yields an ground state.
Increasing the interaction strength (), PHF still produces energies consistent with CI and DMC. However, for , incorrect quantum numbers are predicted, with a tendency towards polarization. Whenever polarization occurs, the ground states have low angular momenta in PHF.
For , preliminary results indicate deviations of PHF with respect to CI, DMC. They are reminiscent of the tendency of HF to predict spin polarized ground states due to overestimating the exchange as compared to correlations.
The relative deviation for pederiva and is shown in Fig. 1; is largest for the closed shells . Except for , %. The inset shows for (squares, with ), (dots, with ), and (triangles, with ) within . A decrease with according to a power law is observed, . By numerically fitting the data, one finds , and .
Figure 1: Deviations between PHF and DMC pederiva for , with (Tab. 3). Inset: double logarithmic plot of from PHF and QMCpederiva (), QMC ghosal and CI rontani () for (squares), (dots) (triangles). Lines: best fits to data.
48.150 0 47.842 0 47.659 1 46.679
0 48.031 2
48.088 2 47.790 0 46.875
2 47.799 1
47.971 1 47.817 2 46.917
1 48.028 1
48.076 4 47.777 0 46.779
48.237 0 47.981 0 47.805 0 46.807
48.025 1 47.910 1
48.131 1 47.796 2 47.742 1 46.756
47.887 1 47.806 2 46.917
1 47.985 1
48.022 0 47.977 2 47.406
0 47.997 1
48.243 2 47.896 1 47.881 2
48.335 3 48.129 3 48.126 3 47.404
Table 4: Comparison of the lowest energies obtained from UHF, followed by projection on angular momentum, , and total spin, for , (, ). Last column: energies from DMC. ghosal The ground state has . Superscripts denote “degenerated” energies with the same quantum numbers but originating from different . All energies are in units .
Table 4 and Fig. 2 illustrate the effect of angular momentum projection alone followed by spin projection for starting with UHF states with . The energy projected on angular momentum is
where . Only the lowest energies are included in the table.
The typical energy gain obtained by angular momentum projection is about . The spin projection induces corrections of the same order of magnitude, which can even change the sequence of energies (Fig. 2).
Figure 2: Influence of the projection procedure on the energy levels (unit ) for and . Only the two lowest UHF states (left) directly involved in the determination of the PHF ground state (right) are shown.
From the UHF state with and , which is not the UHF ground state, projection on yields . After projection on the total spin we obtain the energy of the ground state, and an excited state at . On the other hand, the energetically lowest UHF minimum turns out to yield the first excited PHF state at .
Thus, PHF not only introduces a lowering of the energies but can also restore the correct ordering of energy levels. This can be seen from the last column of Tab. 4, which contains the results obtained by DMC ghosal . Restoring the spin plays a crucial role in obtaining all correct quantum numbers for the ground state including Hund’s rule. degio For example with angular momentum projection alone, one would have predicted for the ground state, in contrast to the correct result.
The degeneracy error for this case is approximately (some example of almost degenerate states are included in Tab. 4). The distance between ground state and the first excited state is . This suggests that the ground state for has , consistent with DMC. Even the quantum numbers of the first three excited states turn out to be reproduced correctly while the 4th and the 5th appear to be interchanged.
iii.1.2 Ground state densities
For the spin-resolved densities
we first consider and (Figs. 3 and 4) for intermediate () and strong () interaction. Increasing the interaction strength leads to a shift of the maximum of the densities towards higher , consistent with earlier findings by ED. mikhailov ; mikhailov1 This is clearly observed in the spin up density for (Fig. 3). For , the ground state (, , ) densities (Fig. 4) agree very well with ED mikhailov for large . Generally, deviations occur near .
Figure 3: Spin resolved densities (thick line: ; thin line: ) for a GaAs quantum dot with , , , and for interaction strengths (solid), (dashed). Density unit: . Data from ED: mikhailov1 squares , circles .
Figure 4: Spin resolved densities for a GaAs quantum dot with , , , and for (solid line), (dashed line). Density unit: . Data from ED: mikhailov squares , circles .
Figure 5(a) shows the total electron density for , , for . For weak interaction, (solid line), we find good agreement with CI rontani5 (squares). For (dashed) small deviations near are found.
Figure 5: Densities for , , and . (a) Total density (units ) for (solid), (dashed), (dashed-dotted). Squares, circles and triangles: data from CI. rontani5 (b) Spin resolved densities (solid: , dashed: , units ) for . Squares, circles: data from CI. rontani5 (c) Same as (b) but .
Figure 5(b) indicates that the spin-down density is responsible for the small deviation from the exact result around for .
iii.2 Finite magnetic field
In this section, we show results for in the presence of a magnetic field, , corresponding to a dot filling factor ( has been discussed in Ref. degio, ). We assume here a confinement meV (corresponding to ) and . For , due to the Zeeman term, the PHF ground state always has . Therefore, we do not specify in the following.
Figure 6: Ground state energy (units ) as a function of magnetic field (units T) for . Solid: UHF; dashed: angular momentum projection; dashed-dotted: PHF. Here, and in the following figures, , , , and meV.
We start with (Fig. 6) and (Fig. 7). We show the UHF ground state energy (solid line), the energy obtained from angular momentum projection (dashed, Eq. (31)), and the PHF energy (dashed-dotted, Eq. (30)). The highest energy gain is here due to the angular momentum projection. Spin projection leads to a further decrease of the ground state energy. Obviously, UHF and PHF results behave completely differently with .
For instance, for (Fig. 6) the UHF ground state shows crossovers at and at . In contrast, the PHF energy has total spin in the entire magnetic field region. The state with , not compatible with the total spin , is certainly an artifact of UHF. The crossover with increasing magnetic field at agrees quantitatively with the earlier results obtained by ED. tavernier
Figure 7: Ground state energy (units ) as a function of magnetic field (units T).
Figure 8: Ground state energy (units ) as a function of magnetic field (units T).
For (Fig. 7), UHF (solid line) displays no transitions. When rotational symmetry is restored, two crossovers, (a) and (b) appear. Performing the spin projection, singlet states corresponding to and are found, and for is obtained. Singlets have the largest energy gain, leading to a shift of the features found with angular momentum projection (Fig. 7). Also here, the PHF quantum numbers agree with the earlier results obtained by ED, tavernier including the magnitudes of the crossover fields at and respectively.
The singlet-triplet crossover occurring for at , corresponds to a filling factor , and is a peculiar feature which is confirmed by several experimental and theoretical studies. hawrylak3 ; hawrylak4 ; hawrylak1 ; tarucha2 Also for preliminary data indicate such a crossover near . These crossovers are completely absent in UHF (Fig. 7).
Most interesting is (Fig. 8): near the ground state has . This can only be obtained including the spin projection and leads eventually to a spin blockade in the transport (see below).
Figure 9: Scheme of the quantum numbers of the PHF ground state as a function of the magnetic field (units T) for .
In Fig. 9 we show the scheme of the ground state quantum numbers for , as obtained by PHF. They qualitatively agree with previous calculations, maksymB ; tavernier performed for . In the region of , where the ground state of has the state with is a singlet. Since between the two ground states a spin blockade in the transition can be expected near the edge of for electrons, for . We note in passing that the lowest excited states for , with and , are at most meV ( mK) higher in energy. Therefore, it may be difficult to experimentally observe this blockade.
The chemical potential traces obtained by PHF when varying are experimentally accessible via Coulomb blockade.
Figure 10: Chemical potentials , , (units ) as a function of (unit T). Arrows: edge of filling factor for (bottom panel), (center), (top). Red line: region of intrinsic spin blockade (see text).
Figure 10 shows , and . Arrows indicate the onset of for the configuration with (bottom panel), (center), (top). The chemical potentials exhibit features related to the above discussed crossovers between ground states. At the onset of , the chemical potentials exhibit a cusp. For even , this corresponds to the above mentioned singlet-triplet transition. hawrylak1 Generally, the chemical potentials show kinks when quantum numbers of the ground states change (Figs. 9 and 10).
Iv Conclusion
We have described a systematic procedure to overcome some of the limitations of the UHF approach. Using angular momentum and total spin projections, we have introduced correlations that provide lower estimates for the ground state energies, besides determining the spin and the angular momentum. Several sources of errors have been discussed. In particular, a degeneracy error has been found to be useful for deciding whether or not the estimate for the ground state is plausible.
The procedure yields results consistent with earlier findings for interaction strengths which correspond to experimentally relevant confinement energies meV. kouwenhoven ; tarucha
For and , we have confirmed Hund’s first rule for the dot total spin, except for . In this case, the ground state is ambiguous, since the energy gap between ground and first excited state is smaller than the degeneracy error, consistent with other results. For stronger interaction, , deviations from Hund’s rules are obtained, accompanied by the well–known exchange induced tendency of HF–based methods to favor ground states with higher spins and zero angular momenta.
We have shown that PHF predicts correctly the features of the ground state energy as a function of . We have found a spin blockade in the transport between and , occurring at a filling factor .
Given the slower increase in computational effort with particle number described in Sec. II.3 (Tab. 1), as compared to other methods, we hope, by parallelizing our code, to obtain in the future results for higher numbers of particles (), varying , for interaction strengths relevant to quantum dot experiments, .
That the densities are correctly reproduced suggests that tunneling rates between the quantum dot and attached leads, needed for electron transport, can be reasonably well estimated using PHF wave functions. This might be useful for providing quantitative results for predicting the heights of the Coulomb blockade peaks as a function of . roggekondo ; hawrylak3 ; RHCSK2006
This work has been supported by the Italian MIUR via PRIN05, by the European Union via MRTN-CT-2003-504574 contract and by the SFB 508 “Quantenmaterialien” of the Universität Hamburg.
Appendix A Some details of the implementation
We provide some technical details about the implementation of the projection technique outlined in Sec. II.3. In order to obtain (18), we have to evaluate the overlaps
and the Hamiltonian matrix elements |
627a0b8d25041a9d | Mapping the Electronic Surface Potential of Nanostructured Surfaces
Pascal Ruffieux, EMPA, nanotech@surfaces Laboratory, Feuerwerkerstr. 39, 3602 Thun
A detailed knowledge of the local electronic properties of solids and their surfaces is crucial for an understanding of a large variety of processes, ranging from electron scattering in transport phenomena to catalytic reactions, which show pronounced site specificity. Recently, site-specific adsorption on inorganic surfaces has been reported for organic molecules on laterally inhomogeneous surfaces consisting mainly of varying stacking sequences of the outermost atomic layers. However, the understanding of the relevant local physical properties responsible for the site-specific substrate-adsorbate interactions remains very poor.
Among the first important parameters investigated for surfaces is the work function, which defines the minimum energy required for removing an electron from a metal to infinity at 0 K. However, a number of important surface-related phenomena, such as catalytic processes and electron emission, cannot be described with this macroscopic work function but require knowledge of local variations of the electrostatic potential close to the surface. This directly implies that local probes have to be used for the surface potential determination relevant for the above-mentioned phenomena. The pioneering work enabling the experimental determination of the local surface potential was based on photoemission of adsorbed xenon (PAX), which probes the surface potential at about 0.2 nm above the surface via the Xe electron states, which are pinned to the local vacuum level due to the weak coupling of the Xe atoms to the surface [1]. Various techniques based on scanning probe microscopy, including Kelvin probe force microscopy and local barrier height measurements, are currently applied in order to overcome the missing imaging capability of PAX, which is crucial for complex nanostructured surfaces. However, both methods are either limited in spatial resolution or in the ability to quantitatively determine the local surface potential.
An alternative way to characterize the local surface potential is the analysis of the field emission resonances (FERs), which are detected with scanning tunnelling microscopy (STM) when applying voltages larger than the tip work function. Their sensitivity to variations of the surface potential was first discussed by Binnig et al. [2] and has since been applied for the qualitative description of surface potential modulations of thin ionic and oxide films grown on metal surfaces. Recently we showed [3] that the combination of the local detection of the FERs with STM and their modelling using a 1D model potential between STM tip and sample allows for a quantitative determination of the surface potential with a spatial resolution of ~1 nm. The method thus uses the marked proximity of the FERs to the surface and their sensitivity to local variations of the surface potential. It is applied for examining the site-specific interactions of C60 molecules deposited on a nanostructured template surface formed by deposition of two monolayers (MLs) of silver on Pt(111).
Experiments have been performed with a low-temperature STM (Omicron) working under ultrahigh vacuum conditions (base pressure 2·10-10 mbar). The Ag/Pt(111) strain relief pattern has been prepared by depositing 2 ML of silver on a clean Pt(111) surface and by subsequent annealing of the sample to 530°C. C60 fullerenes were deposited from a resistively heated quartz crucible where the deposition rate was determined with a quartz microbalance [4,5].
The annealing of two MLs of silver deposited on Pt(111) leads to the formation of a strain relief pattern exhibiting a high degree of long-range order. The apparent depressions in the STM topography image (Fig. 1) separate the three stacking domains labelled hcp1, hcp2, and fcc. They evolve from the relieving of strain built up by the lattice mismatch of ~4 % between Ag and Pt.
Regarding molecular deposition on the strain relief pattern we find a highly inhomogeneous immobilization of the molecules on the different surface domains. Deposition of C60 molecules at ~150 K followed by an annealing step to room temperature leads to the formation of stable molecular clusters, which are preferentially located in the hcp1 region [Fig. 1(b)]. This brings up the question of which local physical property determines the site-selective adsorption of C60 on the Ag/Pt strain relief pattern.
Local surface potential determination
The energy positions of the FERs are determined by locally acquiring z(V) spectra with the feedback loop closed, i.e. under constant current conditions. Accordingly, during the voltage ramp, the tip is further retracted from the surface when new states contribute to the tunnelling current. This results in a stair-shaped z(V) curve, which, by numerical differentiation, directly reveals the energy position of the lowest FERs (Fig. 2). Recording a large number of such spectra along a line crossing the different surface domains reveals a pronounced variation of the energy level of the lowest FERs. For the first state we find a difference of 0.23 V when comparing spectra recorded in the fcc and the hcp1 domain (Fig. 3).
In order to relate these energy shifts to variations in the local surface potential we numerically solve the one-dimensional Schrödinger equation in the direction perpendicular to the surface. The electrostatic potential V(z) between surface and tip takes into account the applied sample bias Vs, the varying tip-sample distance zt, the image plane zi at the sample surface and varying surface potential Δφ [Fig. 2(b)], leading to the expression
The unknown parameters, such as the tip work function and the image plane position zi, are determined by fitting the energy position of the simulated FERs to the four measured ones in the hcp2 region where the smallest lateral variation of the energy position is observed. These parameters are then kept constant and only the local surface potential Δφ is varied in the model potential in order to get the best agreement with the measured FERs.
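To illustrate the procedure, here is a minimal sketch (our construction, not the authors' code): it replaces the full model potential by a bare linear ramp between sample and tip, neglecting the image potential, and shows how a surface-potential offset Δφ enters the computed resonance-like levels.

```python
import numpy as np

def levels(dphi_eV, z_t_nm=3.0, V_bias_eV=6.0, n=800, n_levels=2):
    """Lowest levels of a 1D finite-difference Schroedinger problem between
    sample (z = 0) and tip (z = z_t), with hard walls at both ends."""
    hbar2_2m = 0.0381                        # hbar^2 / (2 m_e) in eV nm^2
    z = np.linspace(0.0, z_t_nm, n)
    dz = z[1] - z[0]
    V = dphi_eV + V_bias_eV * z / z_t_nm     # linear ramp, offset by delta-phi
    H = (np.diag(hbar2_2m * 2.0 / dz**2 + V)
         + np.diag(-hbar2_2m / dz**2 * np.ones(n - 1), 1)
         + np.diag(-hbar2_2m / dz**2 * np.ones(n - 1), -1))
    return np.linalg.eigvalsh(H)[:n_levels]

print(levels(0.0))
print(levels(0.35))   # a 0.35 eV surface-potential offset shifts the levels
```

In this stripped-down model a constant offset simply shifts all levels rigidly; in the real analysis the image potential and the self-consistently varying tip height make the dependence of each resonance on Δφ nontrivial, which is what the fit exploits.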
Due to the increasing spatial extent of the FERs for higher order resonances we limit our analysis to the lowest two states. With this restriction a lateral resolution of the order of 1 nm is achieved for the FER-based surface potential determination. According to this analysis we can determine the surface potential at positions where spectra have been recorded (Fig. 3). We find a surface potential variation of 0.35 eV when comparing the fcc region with the hcp1 region. A similar analysis has been performed for a set of spectra acquired on a dense two-dimensional (2D) grid in the vicinity of the hcp1 region allowing the determination of the 2D surface potential landscape [Fig. 4(a)].
In order to further corroborate the appropriateness of the proposed method for a quantitative determination of the surface potential landscape, we have performed PAX experiments [3]. This method is well established as a local work function probe for heterogeneous surfaces with the limitation of the missing simultaneous imaging of the surface. Figure 4(b) shows a Xe4d PAX spectrum for a 0.7(2) ML Xe coverage. In contrast to PAX spectra recorded on Ag(111), on the Ag/Pt strain relief pattern both spin-orbit split states reveal a broad distribution characterized by two main contributions (FWHM 0.30 eV). Since the binding energies in PAX are directly linked to local variations of the surface potential by ΔEFB(i,j) ≅ -Δφ(i,j) at two dissimilar surface sites i and j [1], this indicates a broad distribution of the local work function within the unit cell with two main contributions separated by 0.30 eV. These results can be compared to the FER-based analysis of the surface potential landscape by performing a histogram analysis of the local surface potential across the different surface domains. This analysis yields a broad distribution with two dominant contributions centred at -0.08 eV (mainly hcp1 and hcp2) and 0.2 eV (fcc). Their separation of 0.28 eV is in excellent agreement with PAX.
The observed variations of the local surface potential are firmly related to local changes of the in-plane lattice parameter a. From a careful analysis of atomic resolution STM images, we find average lattice parameters of 299(5), 289(4) and 180(4) pm for the hcp1, hcp2 and fcc domains, respectively. From an electron (and hence dipole) point of view this directly suggests a lowering of the work function for increasing lattice parameter, in agreement with the FER-based analysis of the local work function.
The local analysis of the FERs based on scanning tunnelling spectroscopy allows the determination of the surface potential landscape with simultaneous imaging of the nanostructures. The lateral resolution of the method depends on the spatial extent of the FERs and can thus be optimised by restricting the analysis to the lowest FERs. When the analysis is restricted to the two lowest FERs, this yields a resolution of the order of ~1 nm. Regarding the understanding of site-selective interactions on nanostructured surfaces, this gives access to an important surface property that is involved in adsorbate-substrate interactions with partial ionic bonding character. Furthermore, it allows the determination of lateral electric fields, which is relevant for the description of site-specific interactions of polarisable adsorbates.
[1] K. Wandelt, Thin Metal Films and Gas Chemisorption (Elsevier, Amsterdam, 1987).
[2] G. Binnig et al., Phys. Rev. Lett. 55, 991 (1985).
[3] P. Ruffieux et al., Phys. Rev. Lett. 102, 086807 (2009).
[4] K. Aït-Mansour et al., Nano Lett. 8, 2035 (2008).
[5] K. Aït-Mansour et al., J. Phys. Chem. C, in press (2009), DOI:10.1021/jp901378v
[Released: May 2009] |
3bb12cfbf4ecf3d8 | Mathematics and Spiritual Philosophy: Exploring the True Nature of Reality
Mathematician Thomas J. McFarlane unpacks conceptual frameworks to understand the deepest nature of reality
Pure mathematics is commonly viewed as the pinnacle of rigorous formal reasoning. It relates abstract concepts to each other, with no necessary connection to any world, whether physical or metaphysical. But it was not always this way. The roots of mathematics reach deep into the foundations of the cosmos. It was only through a modern process of radical abstraction and refinement that mathematics came to be viewed as independent of any connection to the world.
Geometry originated as an empirical science of earth measurement (which is the etymological meaning of the word geo-metry). It was a practical tool for surveying plots of land. When Euclid formalised geometry as a mathematical system, it was understood as an axiomatic science of physical space. The Pythagorean theorem was not merely a result of pure mathematics. It was viewed as a law describing spatial properties of the real world. And this view lasted for over two thousand years. Galileo, writing about natural philosophy, put it this way in The Assayer:

“Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to understand a single word of it.”
It was not until the modern mathematical discovery of non-Euclidean geometry that the meaning of geometry was abstracted from physical space to include mathematical spaces of various kinds. Similarly, modern mathematics also abstracted the meaning of number from natural numbers and rational numbers. These number systems were originally physical sciences of counting. They arose from the practical need to count objects (such as animals, goods, and products) and periods of time (such as days, months, and years). In modern mathematics, however, the notion of number was abstracted to include other types of numbers, such as complex numbers and transfinite numbers which have counter-intuitive properties.
Free from its tether to the physical world, pure mathematics exploded into a marvellous and infinite realm of abstraction. Surprisingly, some of this new mathematics turned out to be remarkably effective in the physical sciences. This suggests that, despite its disconnection from its empirical roots, there remained a profound and mysterious link between pure mathematics and the deep order of the world.
When I began university studies, my ambition was to become a physicist so that I could understand the deep order of things, the nature of reality. So, when I took my first course in quantum mechanics, I was not content merely to solve the Schrödinger equation. I wanted to understand what quantum mechanics was saying about physical reality. After courses in the philosophy of physics, however, I realised that physics could never ultimately satisfy my deep yearning to understand the nature of reality. For example, I realised that there were dozens of different interpretations of quantum mechanics, but no possible way to empirically determine which one was correct. Physics could never provide a definitive answer to the philosophical question of what kind of world quantum mechanics was describing.
Meanwhile, I was also reading more widely in philosophy, both Western and Eastern, philosophers such as Kant, Nagarjuna, Plato, Shankara, Proclus, Eriugena, and Nicholas of Cusa. As a result of these studies, together with contemplative investigations, it became clear to me that the ultimate nature of reality could not be contained in any philosophical system. Just as there are many philosophical interpretations of quantum mechanics, but no way to empirically determine which one is correct, similarly, there are many philosophical descriptions of reality, but no way to know with certainty which one is true. A pure conceptual system can be internally consistent, but there is no way to know for certain whether it is making true statements about the world. This parallels the situation from mathematics, which is why Einstein wrote in his 1921 essay “Geometry and Experience,”

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”
My yearning for certain knowledge of reality could not be fulfilled by any conceptual system at all. Conceptual systems are, in the end, relationships between pure concepts that have no certainty external to themselves. Although there is freedom to choose definitions and assumptions, and truths can be known with certainty within the limited context defined by those choices, any claim that those concepts describe some reality external to themselves can never be certain. Reality, in other words, cannot be known with certainty through concepts. In short, it is ineffable.
This insight, I came to understand, was in harmony with the testimony of many spiritual philosophers. For example, the Upanishads declare, “He comes to the thought of those who know him beyond thought, not to those who imagine he can be attained by thought.” Similarly, Lao Tzu writes, “The Tao that can be told is not the eternal Tao.” On the other hand, the statement ‘reality is non-conceptual’ is itself a conception about reality. So, reality must transcend even the duality between conceptuality and non-conceptuality. As it says in the Heart Sutra, “Form is emptiness, emptiness is not different from form, neither is form different from emptiness, indeed emptiness is form.” Just as Shiva’s dance is not different from Shiva, and waves are not different from water, conceptual form is not ultimately separate from reality.
So, although concepts and thoughts may seem to hide ultimate reality, they are also manifestations of it. Concepts that are confused serve to veil, while concepts that are clear serve to reveal. This is why the rigorously clear concepts of mathematics have been viewed as having spiritual significance. Like a perfect crystal, they refract the light of reality with orderly purity. This was my motivation for embarking on graduate studies in mathematics. It was not a search for some true conceptual understanding, but a practice of harmonising the mind with the most refined and transparent forms of manifestation.
The roots of mathematics extend not only to the earth and its measurement, but also to the principles governing the heavens. The mathematical term logic and the religious word logos share the same etymological root. Both point to a principle fundamental to the creation and order of the world. In the Gospel According to John, for example, it is said that the logos, translated as the Word, is a generative power in creation:

“In the beginning was the Word, and the Word was with God, and the Word was God. … All things were made by him; and without him was not any thing made that was made.”
The remarkable effectiveness of mathematics in the physical sciences suggests that there is, indeed, a deep connection between the manifested world and this logos. Similarly, in the Pythagorean tradition, the ordering principle of this cosmos is the logos, or, more specifically, number. Thus, it is said in the Pythagorean tradition that number is the principle, source, and root of all things. This order is exemplified in the harmonies of musical vibrations, which correspond exactly with the quantitative ratios of numbers, as well as in the ‘music of the spheres’, i.e., a musical harmony of the heavens. In the Pythagorean tradition, the fundamental principles of number are at the root of manifestation: Number in time is music. Number in space is geometry. Number in space and time is astronomy. Legend has it that above the door of Plato’s academy was the inscription “Let no one ignorant of geometry enter.”
The study of mathematics, according to Plato, is a prerequisite to a life of philosophy, whose ultimate purpose is to bring to birth, in the soul, a transcendent vision of the Form of the Good. Because mathematics is a domain of pure intelligible truths that are not contingent upon time, place, or sensory experience, the study of mathematics helps to develop the power of abstract thought and to turn the soul away from the transient world of the senses and toward the transcendent world of eternal forms (Republic 527b). So, for Plato, the study of mathematics is an essential spiritual practice.
This view of the sacred significance of mathematics can be found centuries later in the writings of Nicholas of Cusa. Cusa says in his De Docta Ignorantia that mathematics plays a special role in the power of the mind:
In modern times, the 20th century American mystic and philosopher Franklin Merrell-Wolff also recognised the spiritual significance of mathematics. Echoing Plato, Merrell-Wolff writes in Pathways Through To Space
Thus, the remarkable effectiveness of mathematics in the physical sciences is a sign of an even more profound role in mirroring order at the deepest levels of reality, beyond the physical, where it reflects the fundamental order of creation in all its diverse possibilities.
|
74710f58fdab9cdb | Splitting the Universe | Aeon
Photo courtesy ESA/Hubble/NASA, Fillipenko, Jansen
by Sean Carroll
One of the most radical and important ideas in the history of physics came from an unknown graduate student who wrote only one paper, got into arguments with physicists across the Atlantic as well as his own advisor, and left academia after graduating without even applying for a job as a professor. Hugh Everett’s story is one of many fascinating tales that add up to the astonishing history of quantum mechanics, the most fundamental physical theory we know of.
Everett’s work happened at Princeton in the 1950s, under the mentorship of John Archibald Wheeler, who in turn had been mentored by Niels Bohr, the godfather of quantum mechanics. More than 20 years earlier, Bohr and his compatriots had established what came to be called the ‘Copenhagen Interpretation’ of quantum theory. It was never a satisfying set of ideas, but Bohr’s personal charisma and the desire on the part of scientists to get on with the fun of understanding atoms and particles quickly established Copenhagen as the only way for right-thinking physicists to understand quantum theory.
In the Copenhagen view, we distinguish between microscopic quantum systems and macroscopic observers. Quantum systems exist in superpositions of different possible measurement outcomes, called ‘wave functions’. A spinning electron, for example, has a wave function describing a superposition of ‘spin-up’ and ‘spin-down’. It’s not merely that we don’t know the spin of the electron, but that the value of the spin does not exist until it is measured. An observer, by contrast, obeys all the rules of familiar classical physics. At the moment that an observer measures a quantum system, that system’s wave function suddenly and unpredictably collapses, revealing some definite spin or whatever has been measured.
There are apparently, therefore, two completely different ways in which quantum systems evolve. When we’re not looking at them, wave functions change smoothly according to the Schrödinger equation, written down by Erwin Schrödinger in 1926. But when we do look at them, wave functions act in a totally different way, collapsing onto some particular outcome.
If this seems unsatisfying, you’re not alone. What exactly counts as a measurement? And what makes observers so special? If I’m made up of atoms that obey the rules of quantum mechanics, shouldn’t I obey the rules of quantum mechanics myself? Nevertheless, the Copenhagen approach became enshrined as conventional wisdom, and by the 1950s it was considered somewhat ill-mannered to question it.
That didn’t bother Everett. The seeds of his visionary idea, now known as the Many-Worlds formulation of quantum mechanics, can be traced to a late-night discussion in 1954 with fellow young physicists Charles Misner (also a student of Wheeler’s) and Aage Peterson (an assistant of Bohr’s, visiting from Copenhagen). All parties agree that copious amounts of sherry were consumed on the occasion.
Under Wheeler’s guidance, Everett had begun thinking about quantum cosmology: the study of the entire Universe as a quantum system. Clearly, he reasoned, if we’re going to talk about the Universe in quantum terms, we can’t carve out a separate classical realm. Every part of the Universe will have to be treated according to the rules of quantum mechanics, including the observers within it. There will be only a single quantum state, described by what Everett called the ‘universal wave function’.
If everything is quantum, and the Universe is described by a single wave function, how is measurement supposed to occur? It must be, Everett reasoned, when one part of the Universe interacts with another part of the Universe in some appropriate way. That is something that’s going to happen automatically, he noticed, simply due to the evolution of the universal wave function according to the Schrödinger equation. We don’t need to invoke any special rules for measurement at all; things bump into each other all the time.
Imagine that we have a spinning electron in some superposition of up and down. We also have a measuring apparatus, which according to Everett is a quantum system in its own right. Imagine that it can be in superpositions of three different possibilities: it can have measured the spin to be up, it can have measured the spin to be down, or it might not yet have measured the spin at all, which we call the ‘ready’ state.
The fact that the measurement apparatus does its job tells us how the quantum state of the combined spin + apparatus system evolves according to the Schrödinger equation. Namely, if we start with the apparatus in its ready state and the electron in a purely spin-up state, we are guaranteed that the apparatus evolves to a pure measured-up state, like so:

|up⟩ ⊗ |ready⟩ → |up⟩ ⊗ |measured up⟩
The initial state on the left can be read as ‘the electron is in the up state, and the apparatus is in its ready state’, while the one on the right is ‘the electron is in the up state, and the apparatus has measured it to be up’.
Likewise, the ability to successfully measure a pure-down spin implies that the apparatus must evolve from ‘ready’ to ‘measured down’:

|down⟩ ⊗ |ready⟩ → |down⟩ ⊗ |measured down⟩
What we want, of course, is to understand what happens when the initial spin is not in a pure up or down state, but in some superposition of both. The good news is that we already know everything we need. The rules of quantum mechanics are clear: if you know how the system evolves starting from two different states, the evolution of a superposition of both those states will just be a superposition of the two evolutions. In other words, starting from a spin in some superposition and the measurement device in its ready state, we have:

( a|up⟩ + b|down⟩ ) ⊗ |ready⟩ → a |up⟩ ⊗ |measured up⟩ + b |down⟩ ⊗ |measured down⟩
The final state now is an entangled superposition: the spin is up and it was measured to be up, plus the spin is down and it was measured to be down. This is the clear, unambiguous, definitive final wave function for the combined spin + apparatus system, if all we do is evolve it according to the Schrödinger equation. The world has ‘branched’ into a superposition of these two possibilities.
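To see this linearity at work numerically, here is a minimal sketch (the basis ordering, the amplitudes and the particular ‘measurement’ unitary are all illustrative assumptions): a single unitary that correctly measures each pure spin state automatically produces the entangled branch structure when fed a superposition.

```python
import numpy as np

# Combined basis, spin (x) apparatus:
# 0:(up,ready) 1:(up,saw-up) 2:(up,saw-down) 3:(down,ready) 4:(down,saw-up) 5:(down,saw-down)
U = np.eye(6)
U[[0, 1]] = U[[1, 0]]   # |up, ready>   -> |up, saw-up>
U[[3, 5]] = U[[5, 3]]   # |down, ready> -> |down, saw-down>

a, b = 0.6, 0.8                      # arbitrary amplitudes with a**2 + b**2 = 1
spin = np.array([a, b])              # a|up> + b|down>
ready = np.array([1.0, 0.0, 0.0])    # apparatus has not yet measured anything
state = np.kron(spin, ready)

final = U @ state
print(final)  # amplitude a on (up, saw-up), b on (down, saw-down)
```

The two non-zero amplitudes in the output are exactly the two branches described in the text; nothing beyond the Schrödinger-like unitary evolution was needed to produce them.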
Everett’s insight was as simple as it was brilliant: accept the Schrödinger equation. Both of those parts of the final superposition are actually there. But they can’t interact with each other; what happens in one branch has no effect on what happens in the other. They should be thought of as separate, equally real worlds.
This is the secret to Everettian quantum mechanics. We didn’t put the worlds in; they were always there, and the Schrödinger equation inevitably brings them to life. The problem is that we never seem to come across superpositions involving big macroscopic objects in our experience of the world.
The traditional remedy has been to monkey with the fundamental rules of quantum mechanics in one way or another. The Copenhagen approach is to disallow the treatment of the measurement apparatus as a quantum system in the first place, and to treat wave-function collapse as a separate way the quantum state can evolve. As Everett would later put it: ‘The Copenhagen Interpretation is hopelessly incomplete because of its a priori reliance on classical physics … as well as a philosophic monstrosity with a “reality” concept for the macroscopic world and denial of the same for the microcosm.’
The Many-Worlds formulation of quantum mechanics removes once and for all any mystery about the measurement process and collapse of the wave function. We don’t need special rules about making an observation: all that happens is that the wave function keeps chugging along in accordance with the Schrödinger equation. And there’s nothing special about what constitutes ‘a measurement’ or ‘an observer’ – a measurement is any interaction that causes a quantum system to become entangled with the environment, creating a branching into separate worlds, and an observer is any system that brings about such an interaction. Consciousness, in particular, has nothing to do with it. The ‘observer’ could be an earthworm, a microscope or a rock. There’s not even anything special about macroscopic systems, other than the fact that they can’t help but interact and become entangled with the environment. The price we pay for such a powerful and simple unification of quantum dynamics is a large number of separate worlds.
Even in theoretical physics, people do sometimes get lucky, hitting upon an important idea more because they were in the right place at the right time than because they were particularly brilliant. That’s not the case with Everett; those who knew him testify uniformly to his incredible intellectual gifts, and it’s clear from his writings that he had a thorough understanding of the implications of his ideas. Were he still alive, he would be perfectly at home in modern discussions of the foundations of quantum mechanics.
What was hard was getting others to appreciate those ideas, and that included his advisor. Wheeler was personally very supportive of Everett, but he was also devoted to his own mentor Bohr, and was convinced of the basic soundness of the Copenhagen approach. He simultaneously wanted Everett’s ideas to get a wide hearing, and to ensure that they weren’t interpreted as a direct assault on Bohr’s way of thinking about quantum mechanics.
Yet Everett’s theory was a direct assault on Bohr’s picture. Everett himself knew it, and enjoyed illustrating the nature of this assault in vivid language. In an early draft of his thesis, he used an analogy of an amoeba dividing to illustrate the branching of the wave function.
Wheeler was put off by the blatantness of this (quite accurate) metaphor, scribbling in the margin of the manuscript: ‘Split? Better words needed.’ Advisor and student were constantly tussling over the best way to express the new theory, with Wheeler advocating caution and prudence while Everett favoured bold clarity.
In 1956, as Everett was working on finishing his dissertation, Wheeler visited Copenhagen and presented the new scenario to Bohr and his colleagues, including Petersen. He attempted to present it, anyway; by this time, the wave-functions-collapse-and-don’t-ask-embarrassing-questions-about-exactly-how school of quantum theory had hardened into conventional wisdom, and those who accepted it weren’t interested in revisiting the foundations when there was so much interesting applied work to be done. Letters from Wheeler, Everett and Petersen flew back and forth across the Atlantic, continuing when Wheeler returned to Princeton and helped Everett to craft the final form of his dissertation. It omitted many of the juicier sections Everett had originally composed, including examinations of the foundations of probability and information theory, and an overview of the quantum measurement problem, focusing instead on applications to quantum cosmology. (No amoebas appear in the published paper, but Everett did manage to insert the word ‘splitting’ in a footnote added in proof while Wheeler wasn’t looking.)
But Everett decided not to continue the academic fight. Before finishing his PhD, he accepted a job at the Weapons Systems Evaluation Group for the US Department of Defense, where he studied the effects of nuclear weapons. He would go on to do research on strategy, game theory and optimisation, and played a role in starting several new companies. It’s unclear to what extent Everett’s conscious decision not to apply for professorial positions was motivated by criticism of his upstart new theory, or simply by impatience with academia in general.
He did, however, maintain an interest in quantum mechanics, even if he never published on it again. After he defended his PhD and was already working for the Pentagon, Wheeler persuaded Everett to visit Copenhagen for himself and talk to Bohr and others. The visit didn’t go well; afterward Everett judged that it had been ‘doomed from the beginning’.
Bryce DeWitt, an American physicist who had edited the journal where Everett’s thesis appeared, wrote a letter to him complaining that the real world obviously didn’t ‘branch’, since we never experience such things. Everett replied with a reference to Copernicus’s similarly daring idea that the Earth moves around the Sun, rather than vice-versa: ‘I can’t resist asking: do you feel the motion of the Earth?’ DeWitt had to admit that was a pretty good response.
After mulling over the matter for a while, by 1970 DeWitt had become an enthusiastic Everettian. He put a great deal of effort into pushing the theory, which had languished in obscurity, toward greater public recognition. His strategies included an influential article in Physics Today in 1970, followed by an essay collection in 1973 that included at last the long version of Everett’s dissertation, as well as a number of commentaries. The collection was called simply The Many-Worlds Interpretation of Quantum Mechanics, a vivid name that has stuck ever since.
In 1976, Wheeler retired from Princeton and took up a position at the University of Texas, where DeWitt was also on the faculty. Together they organised a workshop in 1977 on the Many-Worlds theory, and Wheeler coaxed Everett into taking time off from his defence work in order to attend. The conference was a success, and Everett made a significant impression on the assembled physicists in the audience. Wheeler went so far as to propose a new research institute in Santa Barbara where Everett could return to full-time work on quantum mechanics, but ultimately nothing came of it.
Everett died in 1982, aged 51, of a sudden heart attack. He had not lived a healthy lifestyle, over-indulging in eating, smoking and drinking. His son Mark Oliver Everett (who would go on to form the band Eels) has said that he was originally upset with his father for not taking better care of himself. He later changed his mind:
I realise that there is a certain value in my father’s way of life. He ate, smoked and drank as he pleased, and one day he just suddenly and quickly died. Given some of the other choices I’d witnessed, it turns out that enjoying yourself and then dying quickly is not such a hard way to go.
But physics hasn’t forgotten him; if anything, Everett’s ideas are more relevant than ever. His attempts to understand quantum cosmology were ahead of their time, but modern physics has made slow but steady progress on appreciating how to reconcile gravity with quantum theory. And Everett was right; once the whole Universe is your subject of study, it doesn’t make much sense to carve out a special place for a classical observer.
In my own research, I’ve gone even farther, arguing that the quest for quantum gravity is being held back by physicists’ traditional strategy of taking a classical theory (such as Albert Einstein’s general relativity) and ‘quantising’ it. Presumably nature doesn’t work like that; it’s just quantum from the start. What we should do, instead, is start from a purely quantum wave function, and ask whether we can pinpoint individual ‘worlds’ within it that look like the curved spacetime of general relativity. Preliminary results are promising, with emergent geometry being defined by the amount of quantum entanglement between different parts of the wave function. Don’t quantise gravity; find gravity within quantum mechanics.
That approach fits very naturally into the Many-Worlds perspective, while not making much sense in other approaches to quantum foundations. Niels Bohr might have won the public-relations race in the 20th century, but Hugh Everett appears ready to pull ahead in the 21st.
This is an edited extract from ‘Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime’ by Sean Carroll, published by Dutton, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Sean Carroll. |
0aa19582100b087a | We gratefully acknowledge support from
the Simons Foundation and member institutions.
Authors and titles for physics.optics in Jan 2019
[ total of 229 entries; entries 1-25 listed ]
[1] arXiv:1901.00071 [pdf, other]
[2] arXiv:1901.00096 [pdf, ps, other]
Title: Controllable spin-Hall and related effects of light in an atomic medium via coupling fields
Comments: 11 pages, 8 figures
Subjects: Optics (physics.optics)
[3] arXiv:1901.00281 [pdf]
Title: Generation of high-energy clean multicolored ultrashort pulses and their application in single-shot temporal contrast measurement
Comments: 20 pages, 8 figures
Subjects: Optics (physics.optics)
[4] arXiv:1901.00291 [pdf]
Title: Modeling Optical Fiber Space Division Multiplexed Quantum Key Distribution Systems
Comments: 16 pages, 7 figures
Subjects: Optics (physics.optics)
[5] arXiv:1901.00332 [pdf, other]
Title: Controlling Nanoscale Air-Gaps for Critically Coupled Surface Polaritons by Means of Non-Invasive White-Light Interferometry
Subjects: Optics (physics.optics)
[6] arXiv:1901.00334 [pdf, other]
Title: Efficient determination of bespoke optically active nanoparticle distributions
Comments: 6 figures
Journal-ref: Journal of Optics 20(8) p.085003 (2018)
Subjects: Optics (physics.optics)
[7] arXiv:1901.00346 [pdf]
Title: Topological non-Hermitian origin of surface Maxwell waves
Comments: 12 pages, 3 figures
Journal-ref: Nature Commun. 10, 580 (2019)
[8] arXiv:1901.00454 [pdf, other]
Title: Absorption Enhancement for Ultra-Thin Solar Fuel Devices with Plasmonic Gratings
Comments: 4 Figures 18 Pages. Supporting Information Included (7 Figures 11 pages)
Journal-ref: ACS Appl. Energy Mater. 1 p.5810-5815 (2018)
[9] arXiv:1901.00496 [pdf, other]
Title: Three-dimensional-subwavelength field localization, time reversal of sources, and infinite-asymptotic degeneracy in spherical structures
Authors: Asaf Farhi
Journal-ref: Phys. Rev. A 101, 063818 (2020)
Subjects: Optics (physics.optics)
[10] arXiv:1901.00498 [pdf]
Title: Tilt-invariant scanned oblique plane illumination microscopy for large-scale volumetric imaging
Comments: 6 figures, 5 pages
Journal-ref: Optics Letters 44 (7), 1706-1709 (2019)
[11] arXiv:1901.00538 [pdf, other]
Title: What is the maximum differential group delay achievable by a space-time wave packet in free space?
Journal-ref: Opt. Express 27, 12443-12457 (2019)
Subjects: Optics (physics.optics)
[12] arXiv:1901.00681 [pdf, other]
Title: Field Correlations in Surface Plasmon Speckle
Journal-ref: Sci. Rep. 9, 8359 (2019)
Subjects: Optics (physics.optics)
[13] arXiv:1901.00918 [pdf, other]
Title: Analytical Theory of Second Harmonic Generation from a Nanoparticle with a Non-Centrosymmetric Geometry
Journal-ref: Phys. Rev. B 99, 125418 (2019)
Subjects: Optics (physics.optics)
[14] arXiv:1901.01033 [pdf]
Title: Giant optical nonlinearity cancellation in quantum wells
[15] arXiv:1901.01181 [pdf, other]
Title: Spatio-angular fluorescence microscopy II. Paraxial 4$f$ imaging
Comments: 21 pages, 12 figures. Portions of this work previously appeared as arXiv:1812.07093, which was split during refereeing
Subjects: Optics (physics.optics); Quantitative Methods (q-bio.QM)
[16] arXiv:1901.01290 [pdf, other]
Title: Theory of chiral edge state lasing in a two-dimensional topological system
Journal-ref: Phys. Rev. Research 1, 033148 (2019)
Subjects: Optics (physics.optics)
[17] arXiv:1901.01400 [pdf, other]
Title: Synchronization of two DFB lasers using frequency-shifted feedback for microwave photonics
[18] arXiv:1901.01408 [pdf]
Title: Topological theory of non-Hermitian photonic systems
Comments: 44 pages, accepted for publication in Phys. Rev. B
Journal-ref: Phys. Rev. B 99, 125155 (2019)
Subjects: Optics (physics.optics)
[19] arXiv:1901.01482 [pdf]
Title: Extremely broadband topological surface states in a photonic topological metamaterials
Subjects: Optics (physics.optics)
[20] arXiv:1901.01496 [pdf, other]
Title: Excitation of whispering gallery modes with a "point-and-play", fiber-based, optical nano-antenna
Subjects: Optics (physics.optics)
[21] arXiv:1901.02130 [pdf]
Title: Long-range optical pulling force device based on vortex beams and transformation optics
Journal-ref: J. Opt. 21 (2019) 065401
Subjects: Optics (physics.optics)
[22] arXiv:1901.02191 [pdf]
Title: Linear Schrödinger equation with temporal evolution for front induced transitions
Subjects: Optics (physics.optics)
[23] arXiv:1901.02243 [pdf, other]
Title: Topologically non-trivial photonic surface degeneracy in a photonic metamaterial
Journal-ref: Phys. Rev. B 99, 235423 (2019)
Subjects: Optics (physics.optics)
[24] arXiv:1901.02279 [pdf]
Title: All semiconductor enhanced high-harmonic generation from a single nano-structure
Subjects: Optics (physics.optics)
[25] arXiv:1901.02288 [pdf]
Title: Nested-capillary anti-resonant silica fiber with mid-infrared transmission and low bending sensitivity at 4000 nm
Comments: 6 pages, 5 figures
Journal-ref: Optics Letters Vol. 44, Issue 17, pp. 4395-4398 (2019)
Subjects: Optics (physics.optics) |
31f3e11ebe375db4 | Difference between Orbit and Orbital
The difference between an orbit and an orbital is explained here. Although the two terms sound similar, they name two different concepts in both physics and chemistry.
For physics, an orbit is nothing more than the path that a physical object describes around another while under the influence of a gravitational or other central force. Orbits were first analyzed mathematically by Johannes Kepler, whose work resulted in the three laws that govern planetary motion.
The first law established that the orbits of the planets in the solar system are elliptical, not circular, and that the Sun is not at the center of each orbit but at one of its foci. Isaac Newton later succeeded in showing that Johannes Kepler’s laws follow from the theory of gravity. When two objects orbit each other, the periastron is the point at which they are closest to each other and the apoastron the point at which they are farthest apart. An orbit is formed when a force pulls an object into a curved path while it tries to maintain straight-line flight.
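As a small quantitative illustration of what Kepler's laws give us, here is a minimal sketch (our own; constants rounded) using Kepler's third law, which fixes the orbital period once the semi-major axis is known:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # mass of the Sun, kg

def orbital_period(a):
    """Kepler's third law for a small body orbiting the Sun:
    T = 2*pi*sqrt(a^3 / (G*M)), with a the semi-major axis in metres."""
    return 2 * math.pi * math.sqrt(a**3 / (G * M_SUN))

# Earth's orbit: semi-major axis ~1.496e11 m gives a period of ~365 days.
print(orbital_period(1.496e11) / 86400)
```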
An atomic orbital is a region of space defined by a particular spatial solution of the Schrödinger equation for an electron subjected to a Coulomb potential. An orbital can also represent the time-independent distribution of an electron in a specific molecule; this is known as a molecular orbital. A combination of the atomic orbitals gives rise to the electron shell structure, which is represented by the shell model. The shells are filled differently for each chemical element, according to its electron configuration.
Differences between Orbit and Orbital
• An orbit is the path that is created when a body is drawn by a force toward a curved trajectory while it tries to stay in straight-line flight.
• An orbital is the time-independent region where an electron is likely to be found in a specific atom or molecule. A set of orbitals forms an electron shell. |
0e10fa76673b3253 | Notes: Density Functional Theory
Posted by Jingyi on March 11, 2018
Thanks to the notes of the course MSAE6085 by Renata Wentzcovitch, and to the wiki page on DFT.
DFT is a computational quantum mechanical modelling method,
• investigate the electronic structure (principally the ground state)
• obtain an approximate solution of the Schrödinger Equation for many-body systems.
graph TD; A("Quantum many body problem")-->B("Wavefunction"); A-->C("Density"); B-->B1("Hartree-Fock"); B-->B2("QMC"); B-->B3("Coupled Cluster"); C-->C1("Density Functional Theory");
Schrödinger Equation
The time-independent Schrödinger Equation:
\[H \Psi(x_1,x_2,...,x_N,R_1,R_2,...,R_M) = E \Psi(x_1,x_2,...,x_N,R_1,R_2,...,R_M)\]
where $H$ is the Hamiltonian for a system with $M$ nuclei and $N$ electrons.
\[H = -\frac{1}{2m_e}\sum^{N}_{i=1}\nabla^2_i-\frac{1}{2m_n}\sum^{M}_{j=1}\nabla^2_j \\ + \frac{1}{2}\sum^{N}_{i=1} \sum^{N}_{k=1,k\neq i} \frac{e^2}{\mid r_i-r_k \mid} - \sum^{N}_{i=1} \sum^{M}_{j=1} \frac{Z_j e^2}{\mid r_i-R_j \mid} \\ + \frac{1}{2}\sum^{M}_{j=1} \sum^{M}_{w=1,w\neq j} \frac{Z_j Z_w e^2}{\mid R_j-R_w \mid}\]
It can be simplified as
\[H = T_e + T_n + V_{ee} + V_{en} + V_{nn}\]
• $T_e$ is the kinetic energy of electrons,
• $T_n$ is the kinetic energy of nuclei,
• $V_{ee}$ is the electron-electron potential energy,
• $V_{en}$ is the electron-nuclei potential energy,
• $V_{nn}$ is the nuclei-nuclei potential energy.
Born-Oppenheimer (BO) Approximation: fixed nuclei
Born–Oppenheimer (BO) Approximation is the assumption that the motion of atomic nuclei and electrons in a molecule can be separated.
In many-body electronic structure calculations, the nuclei are treated as fixed, generating a static external potential $V$ in which the electrons move. A stationary electronic state is then described by a wavefunction satisfying the many-electron time-independent Schrödinger equation.
Since $m_e \ll m_n$, we set $T_n = 0$ and $V_{nn} = \text{constant}$.
Thus, the new Hamiltonian for many body system is:
\[H = T_e + V_{ee} + V_{en}\]
Kohn-Sham Theory
Solve self-consistent one-electron Schrödinger equations for the orbitals (one-electron wavefunctions) → ground-state density and energy of a many-electron system.
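The self-consistency loop can be sketched as follows. This is a minimal 1D toy model of our own, not production code; the caller supplies the Hartree-plus-exchange-correlation potential `v_hxc`, which is where the choice of functional enters.

```python
import numpy as np

def kohn_sham_scf(v_ext, n_elec, x, v_hxc, max_iter=200, tol=1e-8, mix=0.3):
    """Minimal self-consistent Kohn-Sham loop on a uniform 1D grid (atomic units).

    v_ext  : external potential sampled on the grid x
    n_elec : even number of electrons (closed shell, double occupancy)
    v_hxc  : callable n -> Hartree + exchange-correlation potential on the grid
    """
    dx = x[1] - x[0]
    m = len(x)
    # Kinetic operator: -1/2 d^2/dx^2 via second-order finite differences
    lap = (np.diag(np.full(m, -2.0))
           + np.diag(np.ones(m - 1), 1)
           + np.diag(np.ones(m - 1), -1)) / dx**2
    t = -0.5 * lap
    n = np.full(m, n_elec / (m * dx))          # uniform starting density
    for _ in range(max_iter):
        h = t + np.diag(v_ext + v_hxc(n))      # effective one-electron Hamiltonian
        eps, psi = np.linalg.eigh(h)
        psi /= np.sqrt(dx)                     # grid normalization of orbitals
        n_new = 2.0 * np.sum(psi[:, : n_elec // 2] ** 2, axis=1)
        if np.sum(np.abs(n_new - n)) * dx < tol:
            break
        n = (1.0 - mix) * n + mix * n_new      # linear density mixing
    return eps, psi, n

# Example: 2 non-interacting electrons (v_hxc = 0) in a harmonic well
x = np.linspace(-8, 8, 401)
eps, psi, n = kohn_sham_scf(0.5 * x**2, 2, x, lambda n: np.zeros_like(n))
print(eps[0])  # lowest eigenvalue, close to the exact 0.5 hartree
```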
Approximation methods in early history
Lowest-energy solution of the Schrödinger equation → ground-state energy $E$ and electron density $n(\overrightarrow{r})$ of the N-electron system.
Thomas-Fermi Approximation
Replaces the wavefunction as the variational object by the electron density $n(\overrightarrow{r})$.
Disadvantage: No adequate density-functional approximation for the kinetic energy functional
Hartree-Fock Approximation
Replaces the wavefunction by a single Slater determinant of the best possible orbitals (one-electron wavefunctions) → solutions of a one-electron Schrödinger equation with an effective one-electron potential.
Disadvantage: includes exact exchange, but no correlation.
Hohenberg-Kohn (H-K) Theorems
The Hohenberg-Kohn (H-K) theorems relate to any system consisting of electrons moving under the influence of an external potential $v_{ext}(r)$.
The first H–K theorem demonstrates that the ground state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It set down the groundwork for reducing the many-body problem of N electrons with 3N spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.
The second H–K theorem defines an energy functional for the system and proves that the correct ground state electron density minimizes this energy functional.
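In compact form (our restatement, with $F[n]$ the universal functional of the density), the two theorems say that the ground-state energy is found by minimizing a functional of the density alone:

\[E_{v}[n] = F[n] + \int v_{ext}(\overrightarrow{r}) \, n(\overrightarrow{r}) \, d^3 r \geq E_0,\]

with equality precisely when $n(\overrightarrow{r})$ is the ground-state density.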
Exchange-Correlation Functionals
The exchange-correlation energy is a negative energy that represents the lowering of the energy of the system due to the fact that the electrons avoid each other as they move through the density.
In DFT the key variable is the electron density $n(\overrightarrow{r})$, which for a normalized $\Psi$ is given by
\[n(\overrightarrow{r}) = N \int d^3 r_2 \int d^3 r_3 \ldots \int d^3 r_N \mid \Psi(\overrightarrow{r},\overrightarrow{r_2},\ldots,\overrightarrow{r_N}) \mid ^2\]
graph TD; A("LDA")-->B("GGA"); B-->C("Meta-GGA"); C-->D("Hybrid"); D-->E("RPA-like");
Local Density Approximation (LDA): $n(\overrightarrow{r})$
\[E^{LDA}_{XC}[n] = \int \varepsilon_{XC}(n) n(\overrightarrow{r})d^3r\]
the exchange–correlation energy is typically separated into the exchange part and the correlation part:
\[\varepsilon_{XC} = \varepsilon_{X} + \varepsilon_{C}\]
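For the exchange part, the local approximation has a closed form (the Dirac/Slater expression). A minimal numerical sketch of our own, assuming a real-space grid with uniform volume element `dv` and Hartree atomic units:

```python
import numpy as np

def lda_exchange_energy(n, dv):
    """LDA (Dirac/Slater) exchange energy on a real-space grid:
    E_x = sum_r eps_x(n) * n(r) * dv, with eps_x(n) = -(3/4) * (3*n/pi)**(1/3)."""
    eps_x = -0.75 * (3.0 * n / np.pi) ** (1.0 / 3.0)
    return np.sum(eps_x * n) * dv
```

For the correlation part no closed form exists; in practice one uses parametrizations fitted to quantum Monte Carlo data for the uniform electron gas.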
Generalized Gradient Approximations (GGA): $\nabla n(\overrightarrow{r})$
\[E^{GGA}_{XC}[n\uparrow , n\downarrow] = \int \varepsilon_{XC}(n\uparrow , n\downarrow, \nabla n\uparrow , \nabla n\downarrow) n(\overrightarrow{r})d^3r\]
Software: Quantum ESPRESSO |
c9179e66af76e224 | atomic theory of quantum mechanics
atomic theory of quantum mechanics: Atomic theory is a type of scientific theory that deals with the nature of matter; it says that matter is made up of smallest units called atoms.
Within the atom, there are several theories, one of which is the atomic theory of quantum mechanics, which will be examined today. What precisely does quantum mechanical atomic theory imply? Examine the following details.
According to experts, the atomic theory of quantum mechanics
Here are various explanations provided by professionals and scientists about the atomic theory of quantum mechanics.
Louis-Victor de Broglie
The first expert, Louis Victor de Broglie, offered his thoughts on quantum mechanical atomic theory, arguing that moving particles such as electrons have wave properties (wave-particle duality), with a wavelength obeying the following wave law.
Law of waves: λ = h/p = h/(mv)
Werner Heisenberg
Werner Heisenberg, the second expert, published his thoughts on quantum mechanical atomic theory, claiming that the position and momentum of an electron cannot both be exactly measured at the same time, a statement known as the uncertainty principle.
This is why the electrons surrounding the nucleus cannot be assigned exact positions; their distance from the nucleus can be described only in terms of probabilities.
Erwin Schrödinger’s theory
In his perspective on the atomic theory of quantum mechanics, the third expert, Erwin Schrödinger, argued that electrons may be thought of as waves of matter, with electron motion described by a wave equation. His formulation is known as wave mechanics or quantum mechanics.
Erwin Schrödinger also claimed that the location of electrons in an atom cannot be known with certainty, and that the only thing that can be determined is the region of possibility or likelihood that it exists. An orbital is a region of space where an electron is most likely to be discovered.
advancement of atomic theory
Democritus, a Greek philosopher, originally introduced and developed the notion of atomic theory, according to Hendry Kensari Yeni.
Readers can first trace the development of atomic theory in the material below before examining the most recent atomic theory, also known as the quantum mechanical atomic theory.
Dalton's Atomic Theory
Dalton's atomic theory is the earliest atomic theory. The notion of the present atom began to evolve with the advent of Democritus' philosophy. A physicist called John Dalton was the first to turn the atomic hypothesis into a scientific theory.
His notion eventually became known as Dalton's atomic theory. According to the hypothesis, the atom is the smallest particle of matter and cannot be subdivided.
Thomson’s atomic theory
Thomson's atomic theory is the second atomic theory. Many more atomic theories have evolved since the advent of Dalton's atomic theory, one of which is Thomson's atomic theory, an upgraded variant of Dalton's atomic model.
Thomson also had a notion about the atom, claiming that it is a solid ball of positively charged matter with electrons dispersed through it like raisins in bread.
The Atomic Theory of Rutherford
Rutherford’s atomic theory is the third atomic theory, which Rutherford developed to improve on Thomson’s prior atomic theory. According to Rutherford’s atomic theory, an atom consists of a very tiny atomic nucleus with a positive charge surrounded by negatively charged electrons.
Niels Bohr's Atomic Theory
The fourth atomic theory is Niels Bohr's atomic theory, which Niels Bohr developed because, in his opinion, Rutherford's prior atomic theory still had flaws.
Niels Bohr built upon the prior idea with his atomic model, which suggested that atoms contain energy levels or shells.
Furthermore, the atomic model then evolved into the quantum mechanical atomic model, which describes the concept of orbitals. This came to be recognised as the most recent atomic theory.
Atomic theory of quantum mechanics, often known as modern atomic theory
The contemporary atomic theory, commonly known as the atomic theory of quantum mechanics, is the fifth atomic theory. This contemporary atomic theory addresses the most recent atomic models in relation to other atomic theories.
Erwin Schrödinger, an Austrian physicist, proposed this theory. He stated that atoms contain a positively charged nucleus surrounded by negatively charged electrons.
The orbital field in atomic quantum mechanics is classified into four types of orbitals, namely s, p, d, and f.
What Is the Difference Between the Bohr Atomic Model and the Quantum Mechanics Atomic Model?
The following material explains the various differences between the Bohr atomic model and the quantum mechanical atomic model.
In the Bohr model of the atom, electrons circle the atomic nucleus in orbits with definite energy levels. In the quantum mechanical atomic model, by contrast, electrons occupy specific orbitals that make up the atomic shells.
In the Bohr model of the atom, electrons travel in circular orbits, similar to how the planets orbit the Sun. In the quantum mechanical model, by contrast, electrons move in orbitals and behave as waves.
In the Bohr model of the atom, the position of an electron moving around the nucleus can be estimated. In the quantum mechanical model, by contrast, the position of the electron moving around the atomic nucleus cannot be predicted with certainty.
Bohr's atomic theory was unable to explain the influence of a magnetic field on the hydrogen atom, for example, why extra lines appear in the hydrogen spectrum when it is affected by a magnetic field.
The quantum mechanical atomic model, by contrast, can describe the nature of atoms and molecules with more than one electron. Careful observation shows that the hydrogen spectrum does not consist of single lines but of numerous closely spaced lines pressed together; accordingly, each electron path is made up of sub-paths (subshells) in which the electrons can be located.
Atomic Quantum Mechanical Model
At the atomic level, an electron may be thought of as a wave phenomenon with no fixed position in space. An electron's 'location' is the region where the likelihood of detecting it is highest.
To get a thorough and general description of atomic structure, the theory of wave-particle duality is applied: electron motion is presented as a wave phenomenon.
The Schrödinger equation, which specifies the wave function of the electrons, has superseded Newton's equations of dynamics, which had previously been employed to explain the motion of electrons.
As a result, the atomic model based on this concept is sometimes referred to as the quantum mechanical atomic model.
The Schrödinger equation for an electron in an atom has acceptable solutions only when three integer parameters take particular sets of values; these three integers are the quantum numbers.
The principal, orbital (azimuthal), and magnetic quantum numbers are the three quantum numbers. A set of these quantum numbers can thus represent the state of an electron in an atom.
the quantum number
The quantum numbers contained in the wave equation can be used to identify the location of electrons; for an explanation of these numbers, see the following material.
The Principal Quantum Number, abbreviated as n
The principal quantum number expresses the energy level (shell) in which an electron resides; it also indexes the electron shells and orbits that the atom possesses.
The larger the value of an atom's principal quantum number, the higher the energy level.
The principal quantum number takes the values n = 1, 2, 3, 4, and so on.
Azimuthal Quantum Number, abbreviated as l
The azimuth quantum number can be used to denote the atom’s subshell where the electron is positioned. This is not the same as the primary quantum number used to denote the atomic shell.
The allowed values of the azimuthal quantum number are determined by the principal quantum number: l runs from 0 to n - 1. If an atom has two shells (n = 2), its electrons can occupy the 2s and 2p subshells, corresponding to azimuthal quantum numbers 0 and 1.
Magnetic Quantum Number, abbreviated as M
The magnetic quantum number is used to indicate the orientation of an orbital; it takes its name from the splitting of spectral lines in a magnetic field.
The allowed values of the magnetic quantum number are determined by the azimuthal quantum number: m runs from -l to +l. If the azimuthal quantum number is 1, the magnetic quantum numbers are -1, 0 and 1.
Spin quantum number, abbreviated as s
The spin quantum number is used to represent the direction of rotation of the electron and has nothing to do with the wave equation.
There are two spin quantum numbers: +1/2 (counterclockwise) and -1/2 (clockwise).
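A short sketch of our own makes the bookkeeping of these four quantum numbers explicit, enumerating every allowed (n, l, m, s) combination in a shell and recovering the familiar shell capacities of 2n² electrons:

```python
def shell_states(n):
    """All allowed (n, l, m, s) combinations for principal quantum number n."""
    return [(n, l, m, s)
            for l in range(n)            # l = 0, 1, ..., n-1
            for m in range(-l, l + 1)    # m = -l, ..., +l
            for s in (+0.5, -0.5)]       # two spin orientations

for n in range(1, 5):
    print(n, len(shell_states(n)))       # 2, 8, 18, 32  (= 2 n^2)
```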
The benefits and drawbacks of quantum mechanics’ atomic theory
Erwin Schrödinger’s current quantum mechanical atomic theory and model succeeded in bridging some of the gaps in Bohr’s atomic theory and bringing up new insights into atomic structure and electron mobility in atoms.
Here are some pros and downsides of the quantum mechanical atom’s theory and model.
The Benefits of Quantum Mechanical Atoms
• The first benefit of the quantum mechanical model is that it can describe the probability distribution of the electrons.
• Another advantage of the quantum mechanical model is that it can explain the positions of the electron orbitals.
• A further benefit of the quantum mechanical model is that it can quantify the energy transfer in both excitation and emission.
• A fourth benefit of the quantum mechanical model is that it places protons and neutrons in the nucleus while the electrons occupy orbitals.
Atomic Quantum Mechanical Disadvantages
• The first disadvantage of the quantum mechanical atomic model is that its equations can be solved exactly only for simple systems, such as a particle in a box or atoms containing a single electron.
• A second disadvantage of the quantum mechanical atomic model is that it is difficult to apply to macroscopic systems containing large collections of atoms, such as living organisms.
shape of an atomic orbital
The shape of an atomic orbital is determined by its azimuthal quantum number, l. Orbitals with the same value of the azimuthal quantum number have the same shape.
Orbital s
The first type of atomic orbital is the s orbital, belonging to the sphere-shaped s subshell; the orbital is spherically symmetric, so the probability of finding the electron depends only on its distance from the nucleus, not on direction.
Orbital p
Another type of atomic orbital is the p orbital, in which the electron density is concentrated in two lobes on opposite sides of the atomic nucleus. The nucleus itself sits at a node, where the electron density is zero.
The p orbital has the appearance of a folded balloon or dumbbell. Because this orbital has three possible m values (-1, 0, 1), there are three varieties of p orbital: px, py, and pz.
Orbital d
The d orbital, or orbital with l = 2, is the third kind of atomic orbital. The d orbital has five sorts of orientation, since m has five potential values: -2, -1, 0, 1, and 2.
Four of the five d orbitals (dxy, dxz, dyz, and dx2-y2) contain four lobes and are clover-shaped. The fifth d orbital, dz2, has two major lobes placed on the z-axis and a section in the middle that resembles a doughnut shape.
Orbital f
The f orbital, or orbital with l = 3, is the fourth kind of atomic orbital. As there are seven potential values of m (2l + 1 = 7), there are seven kinds of orientation in this f orbital.
The seven f orbitals have complicated structures with several lobes. These orbitals are occupied only in the heavier elements, such as the lanthanides and actinides.
electron arrangement
We have now seen the link between the electrons in an atom and the orbitals of the quantum mechanical atomic theory. The electron configuration describes how the electrons are arranged among the shell orbitals of a multi-electron atom.
Here are some examples of electron configurations and reasons behind them.
The Aufbau Principle
The first rule of electron configuration, known as the Aufbau rule, stipulates that electrons occupy orbitals in the sequence of subshells, beginning with the lowest energy level and progressing higher. The ordering itself begins 1s, 2s, 2p, and so on (a short sketch generating this order appears after Hund's law below).
Pauli’s Principle of Exclusion
The Pauli exclusion principle, which asserts that no two electrons in an atom have the same four quantum numbers, is the second kind of electron configuration. Each orbital has a maximum that can be filled by just two electrons with opposing spins.
Hund’s Law
Hund’s law asserts that if two orbitals have the same energy level, the electron configuration with the lowest energy has the greatest amount of unpaired electrons with parallel spin.
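The filling order quoted under the Aufbau rule above can be generated from the Madelung (n + l) rule. Here is a small sketch of our own doing so:

```python
def aufbau_order(n_max=5):
    """Subshell filling order from the Madelung rule: increasing n + l,
    with ties broken by increasing n."""
    labels = "spdf"
    subshells = [(n, l) for n in range(1, n_max + 1) for l in range(min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{labels[l]}" for n, l in subshells]

print(aufbau_order())
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', '5p', '4f', '5d', '5f']
```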
This has been an explanation of the atomic theory of quantum mechanics. We appreciate your perseverance in reading all the way to the end, and we hope you now understand how the atomic theory of quantum mechanics evolved. |
23b2d84b7567256f | Schrödinger equation
From Encyclopedia of Mathematics
A fundamental equation in quantum mechanics that determines, together with corresponding additional conditions, a wave function characterizing the state of a quantum system. For a non-relativistic system of spin-less particles it was formulated by E. Schrödinger in 1926. It has the form

\[ i \hbar \frac{\partial \psi}{\partial t} = \hat{H} \psi, \]
where $\hat{H}$ is the Hamilton operator constructed by the following general rule: in the classical Hamilton function $H(p,q)$ the particle momenta $p$ and their coordinates $q$ are replaced by operators which have, respectively, the form $\hat{q} = q$, $\hat{p} = -i\hbar \, \partial / \partial q$ in the coordinate representation, and $\hat{p} = p$, $\hat{q} = i\hbar \, \partial / \partial p$ in the momentum representation.
For charged particles in an electromagnetic field, characterized by a vector potential $A$, the quantity $\hat{p}$ is replaced by $\hat{p} - \frac{e}{c} A$. In these representations the Schrödinger equation is a partial differential equation; for example, for particles in the potential field $V(q)$,

\[ i \hbar \frac{\partial \psi}{\partial t} = - \sum_j \frac{\hbar^2}{2 m_j} \nabla_j^2 \psi + V(q) \, \psi. \]
Discrete representations are possible, in which the function $\psi$ is a multi-component function and the operator $\hat{H}$ has the form of a matrix. If a wave function is defined in the space of occupation numbers, then the operator $\hat{H}$ is represented by some combination of creation and annihilation operators (the second quantization representation, cf. Annihilation operators; Creation operators).
The generalization of the Schrödinger equation to the case of a non-relativistic particle with spin (a two-component function $\psi$) is called the Pauli equation (1927); to the case of a relativistic particle with spin (a four-component function $\psi$) — the Dirac equation (1928); to the case of a relativistic particle without spin — the Klein–Gordon equation (1926); with spin 1 (the function $\psi$ is a vector) — the Proca equation (1936); etc.
The solution of the Schrödinger equation is defined in the class of functions that satisfy the normalization condition $\langle \psi \mid \psi \rangle = 1$ for all $t$ (the brackets denote integration or summation over all values of $q$). To find the solution it is necessary to formulate initial and boundary conditions corresponding to the character of the problem under consideration. The most characteristic among such problems are:
1) The stationary Schrödinger equation and the determination of admissible values of the energy of the system. Assuming that $\psi(q,t) = e^{-iEt/\hbar} \psi_E(q)$ and requiring, in conformity with the normalization condition and the condition of absence of flows at infinity, that the wave function and its gradients vanish when $\mid q \mid \to \infty$, one obtains an equation for the eigenvalues and eigenfunctions of the Hamilton operator:

\[ \hat{H} \psi_E = E \psi_E. \]
Characteristic examples of the exact solution to this problem are: the eigenfunctions and energy levels for a harmonic oscillator, a hydrogen atom, etc.
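For reference, the corresponding eigenvalues in these two examples are well known (quoted here for completeness; Gaussian units for the hydrogen atom):

\[ E_n^{\mathrm{osc}} = \hbar \omega \left( n + \frac{1}{2} \right), \quad n = 0, 1, 2, \ldots; \qquad E_n^{\mathrm{H}} = - \frac{m e^4}{2 \hbar^2 n^2}, \quad n = 1, 2, \ldots \]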
2) The quantum-mechanical scattering problem. The Schrödinger equation is solved under boundary conditions that correspond, at a large distance from the scattering centre (described by the potential $V$), to the plane waves falling on it and the spherical waves arising from it. Taking this boundary condition into consideration, the Schrödinger equation can be written as an integral equation, the first iteration of which with respect to the term containing $V$ corresponds to the so-called Born approximation. This equation is also called the Lippman–Schwinger equation.
3) The case where the Hamiltonian of the system depends on time, $H = H(t)$, is usually considered in the framework of time-dependent perturbation theory. This is the theory of quantum transitions, the determination of the system's reaction to an external perturbation (dynamic susceptibility) and characteristics of relaxation processes.
To solve the Schrödinger equation one usually applies approximate methods, regular methods (different types of perturbation theories), variational methods, etc.
[1] A. Messiah, "Quantum mechanics" , 1 , North-Holland (1961)
[2] L.D. Landau, E.M. Lifshitz, "Quantum mechanics" , Pergamon (1965) (Translated from Russian)
[3] L.I. Schiff, "Quantum mechanics" , McGraw-Hill (1955)
A comprehensive treatise on the mathematics of the Schrödinger equation is [a4].
[a1] R.P. Feynman, R.B. Leighton, M. Sands, "The Feynman lectures on physics" , III , Addison-Wesley (1965)
[a2] S. Gasiorowicz, "Quantum physics" , Wiley (1974)
[a3] J.M. Lévy-Leblond, "Quantics: rudiments of quantum physics" , North-Holland (1990) (Translated from French)
[a4] F.A. Berezin, M.A. Shubin, "The Schrödinger equation" , Kluwer (1991) (Translated from Russian) |
d92ac78c318614eb | Quantum Monte Carlo for Chemistry @ Toulouse
From Qmcchem
This website is devoted to the scientific and software activities of the quantum Monte Carlo (QMC) group of Toulouse, France. The grand objective of our project is to make QMC an alternative and efficient tool for electronic structure in chemistry. Our group -- headed by Michel Caffarel -- is located at the Laboratoire de Chimie et Physique Quantiques, CNRS and Université Paul Sabatier.
QMC in a few words
Quantum Monte Carlo (QMC) is a set of probabilistic approaches for solving the Schrödinger equation. In short, QMC consists in simulating the probabilities of quantum mechanics by using the probabilities of random walks (Brownian motion and its generalizations). During the simulations each electron is moved randomly and quantum averages are computed as ordinary averages.
In practice, the major steps of a QMC simulation are as follows (see the figure):
Input: The molecular geometry, the number of electrons, and an approximate electronic trial wave function, ψT, obtained from a preliminary DFT or ab initio wave function-based calculation.
At each Monte Carlo step: the values of ψT, its gradient, and its Laplacian are calculated at the current spatial configuration (r1, r2, ..., rN).
Output: Quantum averages as ordinary averages along stochastic trajectories.
Key property of QMC: fully parallelizable. This property could be critical in making QMC successful.
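As a concrete illustration of these steps, here is a minimal variational Monte Carlo sketch of our own (not the group's code) for the hydrogen atom, with the trial wavefunction ψT(r) = exp(-α r) in atomic units; the Metropolis walk samples |ψT|², and the quantum average is the ordinary mean of the local energy along the trajectory:

```python
import numpy as np

def vmc_hydrogen(alpha=1.0, n_steps=200_000, step=0.5, seed=0):
    """Variational Monte Carlo for hydrogen with psi_T(r) = exp(-alpha*r).

    Local energy: E_L = -alpha**2/2 + (alpha - 1)/r  (atomic units).
    Returns the mean local energy over the Metropolis trajectory.
    """
    rng = np.random.default_rng(seed)
    r = np.array([1.0, 0.0, 0.0])     # starting electron position
    e_sum = 0.0
    for _ in range(n_steps):
        r_new = r + step * rng.uniform(-1.0, 1.0, 3)
        # Metropolis ratio |psi(r_new)/psi(r)|^2 = exp(-2*alpha*(|r_new| - |r|))
        if rng.random() < np.exp(-2.0 * alpha * (np.linalg.norm(r_new) - np.linalg.norm(r))):
            r = r_new
        e_sum += -0.5 * alpha**2 + (alpha - 1.0) / np.linalg.norm(r)
    return e_sum / n_steps

print(vmc_hydrogen(alpha=1.0))  # exactly -0.5 hartree: for alpha = 1 the
                                # trial function is exact and E_L has zero variance
```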
QMC an alternative to DFT or post-HF methods ?
Two standard approaches in computational chemistry:
-Density Functional Theory (DFT)
-Post-Hartree-Fock (post-HF) methods.
Attractive features of DFT
i.) In DFT the fully-correlated N-body electronic problem is replaced by an effective one-body problem (nuclei attraction + average electrostatic electronic repulsion + exchange-correlation potential). The only approximation made is the choice of the exchange-correlation potential, a point which leads to various levels of accuracy for DFT: local DFT, gradient-corrected DFT, hybrid DFT, etc. Such a one-body framework is particularly attractive at the conceptual level since electronic processes can be interpreted in a simple manner using one-electron pictures, a point which is clearly in sharp contrast with wavefunction-based approaches (post-HF methods), where (very) large determinantal expansions have almost no physical meaning.
ii.) Thanks to this one-body formalism the computational effort of DFT also has a very good scaling, the typical scaling being of order O(N^3), where N is the number of electrons.
iii.) The various exchange-correlation potentials developed in recent years have now reached a point where reasonably quantitative results can be obtained, even for large molecular systems.
Limitation of DFT
However, DFT also has a strong limitation: the error made in such calculations is essentially uncontrolled, and there exists no known procedure for reducing it in a systematic way.
Regarding post-Hartree-Fock methods, they are quite different from DFT and are based on the expansion of the wavefunction over a sum of antisymmetrized products of one-particle orbitals, the various parameters entering the expansion being optimized by using the variational principle. Many variants of these methods exist. Among the most famous are the CCSD(T) approaches, well adapted to systems having a strong mono-configurational character, and the MRCI approaches, used when multi-configurational effects are significant. In contrast with DFT, the error is now much easier to control but, unfortunately, the price to pay for that is in general too high. Indeed, typical scalings [for example, N^7 for CCSD(T)] make it prohibitive to attack systems beyond those of intermediate size (say, more than one hundred active electrons).
In conclusion, it can be legitimately considered that there does not exist a satisfactory electronic approach combining both efficiency and accuracy for (very) large molecular systems.
The project presented here is an attempt to promote an alternative third way: the quantum Monte Carlo approach.
The advantages of QMC are indeed attractive:
i.) Like DFT, the method is simple to implement and has a very favorable scaling (typicallly, O(N3) for a general system).
ii.) Like post-HF methods, the accuracy is in general very good.
iii.) Unlike DFT and post-HF methods, QMC is particularly well adapted to High Performance Computing (HPC): central memory requirements are very modest and bounded (no increase of memory as a function of some parameter like the basis set size in post-HF), the Input/Output flows are very limited, and the codes are perfectly parallelized (QMC codes can be easily implemented on massively parallel machines, on heterogeneous grids, etc.).
Unfortunately, QMC also has some strong limitations:
i.) Besides the usual statistical error inherent in any Monte Carlo scheme, which can be easily controlled (for example, by making longer and longer simulations), there is a systematic error left, known as the fixed-node error. Although this error is small in terms of total energies, it can play a central role when differences of energies are considered. Unfortunately, it is well known that differences of energies are at the very center of chemistry (e.g., electron affinities, ionization potentials, binding energies, reaction barriers, etc.). Numerical experience has shown that the compensations of errors at work in both DFT and post-HF schemes are in general much more important than in fixed-node QMC calculations.
ii.) In contrast with DFT and post-HF, there does not yet exist a general and robust algorithm for computing forces in QMC (gradients of the total energy with respect to nuclear coordinates).
iii.) For large molecular systems, there is no simple and systematic way of constructing trial wavefunctions of good quality without reoptimizing, for each system, a very large number of variational parameters. This prevents QMC from being applied in a "black-box" way, strongly hampering the diffusion of QMC techniques into the general computational chemistry community.
In short, the main objectives of our project are to circumvent the previous limitations and make QMC a popular approach in computational chemistry. |
7106bb97815dd428 | Semiclassical limit for the Schrödinger equation with a short scale periodic potential. (English) Zbl 1052.81039
In this paper, the dynamics generated by the time-dependent Schrödinger equation with a potential consisting of a lattice-periodic potential plus an external potential that varies slowly on the scale of the lattice spacing is investigated. The theorems proved in this paper show that, for very slow variation of the external potential, the time-evolved position operator and, more generally, semiclassical observables converge to a limit given by the semiclassical dynamics. Results are given for isolated bands only (no band crossing is considered).
81Q99 General mathematical topics and methods in quantum theory
35J10 Schrödinger operator, Schrödinger equation
46N50 Applications of functional analysis in quantum physics
47N50 Applications of operator theory in the physical sciences
82B99 Equilibrium statistical mechanics
Full Text: DOI arXiv |
42053ba66bf4627c | Vol. 42 No. 4 - Highlights
Electroweak model without a Higgs particle (Vol. 42, No. 4)
Thanks to the great accuracy in predicting experimental data, the standard model of particle physics is widely considered to be a building block of our current knowledge of the structure of matter. In spite of this success, we are still lacking an essential piece of evidence, namely the detection of the Higgs boson, a hypothetical massive elementary particle whose existence makes it possible to explain how most of the known elementary particles become massive. In this paper, an alternative electroweak model is presented that assumes running coupling constants described by energy-dependent entire functions. Contrary to the conventional formulation the action contains no physical scalar fields and no Higgs particle, even if the foreseen masses for particles are compatible with known experimental values. In addition the vertex couplings possess an energy scale for predicting scattering amplitudes that can be tested in current particle accelerators. As a result the paper provides an essential alternative to the current established knowledge in the field and addresses an issue that might soon be resolved, as the Large Hadron Collider could provide the experimental evidence of the existence or non-existence of the Higgs boson.
Ultraviolet complete electroweak model without a Higgs particle
J.W. Moffat, Eur. Phys. J. Plus, 126, 53 (2011)
Atomic photoionization: When does it actually begin? (Vol. 42, No. 4)
Figure: The crest position of the electron wave packet after the end of the XUV pulse is fitted with a straight line, which corresponds to free propagation. In the inset, the extrapolation of the free propagation inside the atom is shown. The XUV pulse is over-plotted with the black dotted line.
Among other spectacular applications of the attosecond streaking technique, it has become possible to determine the time delay between subjecting an atom to a short XUV pulse and subsequent emission of the photoelectron. This observation opened up a question as to when does atomic photoionization actually begin.
We address this question by solving the time-dependent Schrödinger equation and by carefully examining the time evolution of the photoelectron wave packet. In this way we establish the apparent "time zero" when the photoelectron leaves the atom. At the same time, we provide a stationary treatment of the photoionization process and connect the observed time delay with the quantum phase of the dipole transition matrix element, the energy dependence of which defines the emission timing.
As an illustration of our approach, we consider the valence shell photoionization of Ne and the double photoionization (DPI) of He. In Ne, we relate the opposite signs of the time delays t0(2s) and t0(2p) (see the figure) to the energy dependence of the p and d scattering phases, which is governed by the Levinson-Seaton theorem. In He, we demonstrate that an attosecond time delay measurement can distinguish between the two leading mechanisms of DPI: the fast shake-off (SO) and the slow knockout (KO) processes. The SO mechanism is driven by a fast rearrangement of the atomic core after departure of the primary photoelectron. The KO mechanism involves repeated interaction of the primary photoelectron with the remaining electron bound to the singly charged ion.
Timing analysis of two-electron photoemission
A.S. Kheifets, I.A. Ivanov and Igor Bray, J. Phys. B: At. Mol. Opt. Phys. 44, 101003 (2011)
Practical limits for detection of ferromagnetism (Vol. 42, No. 4)
Figure: Ferromagnetic saturation moment of a ZnO substrate measured in five consecutive stages, exemplifying two of the most common sources of ferromagnetic contamination and showing a type of reversibility upon annealing under different atmospheres, which is often observed in some of the recently discovered nanomagnets mentioned in the text (the detection of ferromagnetism below 5 × 10^-7 emu is hindered by setup-related artefacts).
Over the last ten years, signatures of room-temperature ferromagnetism have been found in thin films and nanoparticles of various materials that are non-ferromagnetic in bulk. The implications of such high temperature ferromagnetism are in some cases so extraordinary, e.g. dilute magnetic semiconductors (DMS) with carrier-mediated ferromagnetism well above room temperature would revolutionize semiconductor-based spintronics, that they triggered an enormous volume of materials research and development. However, the magnetics community soon started realizing the dangers of measuring the very small magnetic moments of these nanomagnets (nanometer sized materials with nano-emu magnetic moments). Pushing state-of-the-art magnetometers to their sensitivity limits, where extrinsic ferromagnetic signals originating from magnetic contamination and measurement artefacts are non-negligible, these new nanomagnets raise a number of challenges to magnetometry techniques and, most of all, to its users' methods and procedures. While new nanomagnets continue being "discovered" based on magnetometry measurements, the general opinion is moving towards the notion that finding a signature of ferromagnetism by means of magnetometry, i.e. a magnetic hysteresis, is only necessary but not sufficient to claim its existence.
Through an extensive analysis of various materials subject to different experimental conditions, the authors aim at re-establishing the reliability limits for detection of ferromagnetism using high sensitivity magnetometry. The paper provides a roadmap describing how extrinsic ferromagnetism can be avoided or otherwise removed, its magnitude when such optimum conditions cannot be guaranteed, and to what extent its characteristics may or may not be used as criteria to distinguish it from intrinsic ferromagnetism.
Practical limits for detection of ferromagnetism using highly sensitive magnetometry techniques
L.M.C. Pereira, J.P. Araújo, M.J. Van Bael, K. Temst and A. Vantomme, J. Phys. D: Appl. Phys. 44, 215001 (2011)
Classical and quantum approaches to the photon mass (Vol. 42, No. 4)
Figure: In new effects of the Aharonov-Bohm type, coherent superpositions of particles possessing opposite electromagnetic properties are used. For the one shown in this figure, charged particles interact with the magnetic vector potential A of a solenoid. If the photon mass is not zero, the electromagnetic interaction is modified. Measuring the corresponding change of quantum phase shift with an interferometer leads to an estimate of mγ.
Since Proca's prediction in 1936 that the rest mass of the photon, mγ, may not be zero, there have been several searches for evidence for a possible finite photon mass. In fact, for even a very small value of mγ, fascinating physical implications arise such as breakdowns of Coulomb's law, wavelength dependence of the speed of light in free space, existence of longitudinal electromagnetic waves, presence of an additional Yukawa potential for magnetic dipole fields, and effects that a photon mass may have during early-universe inflation and the resulting magnetic fields on a cosmological scale.
Traditionally, limits on mγ of < 10^-49 g have been obtained by means of classical approaches, such as searches for departures from Coulomb's law. What happens if we instead exploit quantum approaches? Could better limits be achieved? This is the novel objective of the present work, in which quantum physics is applied to the photon mass question. We first examine the implications that the Aharonov-Bohm class of quantum effects (see the figure) have on searches for mγ, and then move on to explore the quantum electrodynamics scenario with an approach that employs measurements of the electron's g-factor. Within the quantum framework, we show that competitive new lower limits on the photon mass may reach the range 10^-54 g < mγ < 10^-53 g. We provide an assessment of the state of the art in these areas and a prognosis for future work.
A survey of existing and proposed classical and quantum approaches to the photon mass
G. Spavieri, J. Quintero, G.T. Gillies and M. Rodriguez, Eur. Phys. J. D 61, 531 (2011)
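As a rough editorial check (ours, not from the paper): a finite photon mass mγ turns the Coulomb potential into a Yukawa potential with range given by the photon's reduced Compton wavelength, ħ/(mγc). The short Python sketch below translates the mass limits quoted above into that length scale; the constants are standard, everything else is our own illustration.

# Translate a photon rest mass (in grams) into the Yukawa range hbar/(m*c).
HBAR = 1.054571817e-34   # J*s
C = 2.99792458e8         # m/s

def yukawa_range_m(m_gamma_grams: float) -> float:
    """Reduced Compton wavelength hbar/(m*c) for a photon mass given in grams."""
    return HBAR / (m_gamma_grams * 1e-3 * C)

for m in (1e-49, 1e-53, 1e-54):   # the classical limit and the quoted quantum range
    print(f"m_gamma = {m:.0e} g  ->  range ~ {yukawa_range_m(m):.2e} m")

For the classical limit of 10^-49 g this gives a range of a few million kilometres, which is why departures from Coulomb's law are so hard to detect.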
UV absorption spectroscopy to monitor reactive plasma (Vol. 42, No. 4)
Figure: Absorbance of HBr gas at three pressures, as used in silicon gate etching processes.
A new high-sensitivity technique is developed by extending broad-band absorption spectroscopy to the vacuum ultraviolet (VUV) spectral region. It is well adapted to the detection and density measurement of closed-shell molecules that have strong electronic transitions in the 110-200 nm range. Among them, molecules such as Cl2, HBr, BrCl, Br2, HCl, BCl3, SiCl4, SiF4, CCl4, SF6, CH2F2 and O2, used in the microelectronics industry for etching or deposition processes, are of prime interest. In our system, the light of a deuterium lamp crosses a 50 cm diameter industrial etch reactor containing the gas of interest. The transmitted light is recorded with a 20 cm focal length VUV scanning spectrometer backed by a photomultiplier tube (PMT). The attached figure shows the absorbance at three pressures of HBr gas, which is used in silicon gate etching processes. Peaks at 137, 143 and 150 nm, which show a nonlinear but very strong absorbance, correspond to transitions to Rydberg states of the molecule and can be used for the detection of very small HBr densities. In our present experiment, an absorption rate of 2%, corresponding to about 0.03 mTorr of HBr, can easily be detected on the 143 nm absorption peak. Replacing the PMT detector with a VUV-sensitive CCD camera would make it possible to reach the same signal-to-noise ratio with an acquisition time of a few seconds. For HBr pressures in the 1 to 100 mTorr range, the continuum part of the absorption spectrum (160-200 nm), which shows a weak but linear absorbance, can be used. The technique is applied to monitor, in Cl2-HBr mixtures, the dissociation rate of HBr and the amount of Br2 formed under different plasma conditions.
Vacuum UV broad-band absorption spectroscopy: a powerful diagnostic tool for reactive plasma monitoring
G Cunge, M Fouchier, M Brihoum, P Bodart, M Touzeau and N Sadeghi, J. Phys. D: Appl. Phys. 44, 122001 (2011)
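As a rough consistency check of the numbers above (our sketch, not the authors' code): the Beer-Lambert law I/I0 = exp(-σnL) lets one back out the peak absorption cross-section implied by "2% absorption at about 0.03 mTorr" over the 50 cm reactor. The assumed gas temperature of 300 K is our own choice.

import math

K_B = 1.380649e-23        # J/K
T = 300.0                 # K, assumed gas temperature
L = 0.50                  # m, reactor diameter given in the abstract
P_PA = 0.03e-3 * 133.322  # 0.03 mTorr converted to Pa

n = P_PA / (K_B * T)                      # number density, m^-3
sigma = -math.log(1.0 - 0.02) / (n * L)   # implied cross-section, m^2

print(f"n     = {n:.2e} m^-3")
print(f"sigma = {sigma * 1e4:.1e} cm^2")  # a few 1e-16 cm^2, plausible for strong VUV bands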
Flexibility and phase transitions in zeolite frameworks (Vol. 42, No. 4)
Figure: Detail of a zeolite structure built from corner-sharing tetrahedral units.
The zeolites are a group of minerals whose complex and beautiful atomic structures are formed by different arrangements of a very simple building block: a group of four oxygen atoms forming a tetrahedron, with a silicon or aluminium atom at the centre. Each oxygen atom belongs to two tetrahedra, so the structure can be viewed as a network of tetrahedra linked at the corners.
Zeolites have found widespread applications in the chemical industry, particularly as catalysts. Their chemical properties depend on the shape of the pores and channels that run through the structure, which contain water molecules, ions and even small organic molecules. More than a hundred different frameworks are known, occurring in natural minerals or synthesised by chemists.
A fundamental geometric question is whether it is possible for the tetrahedra of the framework to exist in an undistorted, geometrically ideal form, or whether distortions are inevitably caused by the linking together of the tetrahedral units to form the structure. A new study links this question to the compression behaviour of zeolites in the analcime group. Four different structures display a common behaviour: they exist in a high-symmetry form at low pressures when the tetrahedra can exist without distortions, but transform to low-symmetry forms under pressure when distortions become inevitable. A deeper understanding of the rules governing the formation of zeolite structures may one day allow us to synthesise structures with specific properties on demand. New insights into the physics and geometry of frameworks are an important step in this direction.
Flexibility windows and phase transitions of ordered and disordered ANA framework zeolites
S. A. Wells, A. Sartbaeva and G. D. Gatta, EPL, 94, 56001 (2011)
Molecular motors in the rigid and crossbridge models (Vol. 42, No. 4)
Figure: Examples of spontaneous oscillations of motor assemblies in the crossbridge model (red) and the rigid model (blue).
In cells, motor proteins use chemical energy to generate motion and forces. Motors often interact and form clusters because they are connected to a single rigid backbone; in a muscle the backbone is made by association of the motor tails. The backbone motion results from the action of all the motors, and feeds back on each motor. Previous work suggests that motor assemblies are endowed with complex dynamical properties, including dynamic instabilities and spontaneous oscillations, which may play a role in the mechanisms of heartbeat, flagellar beating, or hearing. In this paper, we study two models of motor assemblies: the rigid two-state model and the classical crossbridge model widely used in muscle physiology.
Both models predict spontaneous oscillations. In the rigid two-state model, they can have a "rectangular" shape or a characteristic "cusp-like" shape that resembles cardiac sarcomere and "stick-slip" oscillations. The oscillations in the vicinity of the Hopf bifurcation threshold can be much faster than the chemical cycle. This property, not found in the crossbridge model where protein friction slows down the motion, could be important for the description of high-frequency oscillations such as insect wingbeat. Experiments based on the response of a motor assembly to a step displacement are also well described by both theories, which predict nonlinear force-displacement relations, a delayed rise in tension and "sarcomere give". This suggests that these effects do not depend directly on molecular details. We also relate the collective properties of the motors to their microscopic properties accessible in single-molecule experiments: we show that a three-state crossbridge model predicts the existence of instabilities even in the case of a detachment rate that is decelerated by the apparent load.
Dynamical behaviour of molecular motor assemblies in the rigid and crossbridge models
T. Guérin, J. Prost and J-F. Joanny, Eur. Phys. J. E, 34, 60 (2011)
American Journal of Modern Physics
Volume 3, Issue 6, November 2014, Pages: 247-253
Unified field theory and topology of atom
Zhiliang Cao1, 2, *, Henry Gu Cao3, Wenan Qiang4
1Wayne State University, College of Engineering, W Warren Ave, Detroit, USA
2Shanghai Jiaotong University, School of Materials Science and Engineering, Shanghai, China
3Northwestern University, Weinberg College of Arts and Sciences, Clark St, Evanston
4Northwestern University, Robert H Lurie Medical Research Center, E Superior, Chicago
Email address:
(Zhiliang Cao)
(H. G. Cao)
(Wenan Qiang)
To cite this article:
Zhiliang Cao, Henry Gu Cao, Wenan Qiang. Unified Field Theory and Topology of Atom. American Journal of Modern Physics. Vol. 3, No. 6, 2014, pp. 247-253. doi: 10.11648/j.ajmp.20140306.18
Abstract: The paper "Unified Field Theory and the Configuration of Particles" opened a new chapter of physics. One of the predictions of that paper is that a proton has an octahedron shape. As physics progresses, it focuses more on invisible particles and the unreachable grand universe, while visible matter is studied theoretically and experimentally. The shape of the invisible proton has a great impact on the topology of the atom. Electron orbits, electron binding energy, the Madelung rules and Zeeman splitting are associated with the proton's octahedron shape and three nuclear structural axes. An element is chemically stable if the outermost s and p clouds have eight electrons, which make the atom a symmetric cube.
Keywords: Unified Field Theory, Quantum Field Theory, Standard Model, Zeeman Effects, Madelung Rules
1. Introduction
Unified Field Theory (UFT) [1-8] predicts that a proton has an octahedron [1] shape. The positive electric charge forces from nuclei are perpendicular to the faces of the octahedron. The electrons of an atom interact with the three axes of the protons.
Figure 1. Octahedron Proton and Electron Movements
Two electrons with mirrored moving directions and speeds on opposite sides of the nucleus are paired electrons. Pairing increases the structural stability of the nucleus. The ground-state electrons in an atom will first meet pairing requirements before taking a new orbit.
The three axes of the octahedron proton bounce back the electron's mass wave and create a triangular grid wave pattern. The electron "orbits" are nodes on the grid. Each node maps to a Zeeman splitting line. When electrons on nodes along the central line of the triangular wave pattern have positive spin, the Zeeman splitting lines move downward as the strong magnetic field strength increases; otherwise, the Zeeman splitting lines move upward.
The Madelung rules can be explained by a propensity based on the main orbit number plus the azimuthal distortion times the interactive forces acting on the axes of the proton. The exceptions to the rules result from unsymmetrical nuclear topological bases.
A simple experiment is designed to demonstrate the diamond grid interference pattern. A camcorder and a squared water bucket can be used to perform this experiment.
2. Results
2.1. Electron Movements
If the proton is octahedron-shaped [2], the faces of the octahedron proton not only give nuclei their topologies [2] but also carve out the configuration of the electrons. One of the issues associated with the Schrödinger equation is that it does not consider the topology of the nucleus, as the nuclear structure is not "clear". In UFT [1-4], the waves are part of a state that is stabilized by resonance. In the atom, the mass waves of the electron under the nuclear topology create the stable wave pattern on which the electron "orbit" is based. If the nuclei are octahedron-shaped [1-2], the primary direction of the electronic mass waves is perpendicular to the eight faces of the octahedron. Logically, an electron collides with one of the eight faces of the proton and bounces back like a ball. Even though the mass waves are probability waves, an electron has a steady straight-line "orbit" from the nucleus to a fixed high-probability point of the wave pattern.
When the atomic number is one, the nuclei of the hydrogen isotopes are shaped as a dot or a line. The movement of the electron keeps the nucleus constantly spinning in the gas state at normal temperature. Helium isotopes have line-shaped nuclei. When the atomic number (proton count) is greater than two, the nuclei become 2D symmetrical plates or 3D octahedron piles.
2.2. Squared Atom
The electrons bounce against a face of the nuclear octahedron pile as their main "orbit".
When each octahedron face has an outer-shell electron perpendicular to the face, the outer-shell electrons form a cube. Only the noble gas elements, with eight outermost s/p electrons (except helium), form a perfect cube.
2.3. Axes Diffraction and Electron Orbits
The three axes of the nucleus interact with the electron mass waves and create a diffraction pattern similar to crystalline diffraction. The charged axis has an electric field interaction as well, which doubles the wavelength of the mass wave associated with the charged axis. The proton/electron diffraction pattern shifts to a charge-oriented pattern, and the topological center of the grid shifts to the center of the p cloud. The various electron clouds are related to the layers of triangular strips. Each triangle node represents an "orbit". Each node is mirrored by a node on the opposite side of the proton face.
p: 3*2, d: 5*2, f: 7*2
Figure 2. Electron Diffraction Triangle Grids and Zeeman Splitting
Electrons are evenly distributed perpendicular to the eight octahedron faces. The probabilities of the various electron clouds are based on the strength of the electron's mass wave. The s cloud has the highest priority because of its high probability. The priority order is:
s, p, d and f
2.4. Spin
Based on the topological location of an "orbit" in the triangular grid, an electron can be located at a topologically balanced location or an unbalanced one. The electrons interact with weak waves. The smallest unit of the weak wave, (1/137.036)·(1/137.036), is spin ½.
2.5. Zeeman Splitting
During Zeeman splitting [9-12], when the element is in the gas or liquid phase, a strong magnetic field redshifts the photonic spectrum of up-spin electrons and blueshifts the photonic spectrum of down-spin electrons, due to the magnetic resonance of the nucleus and the electron.
2.6. Quantum Numbers
The main movement of the electrons in the atom is perpendicular to one of the eight faces, and each electron collides with a proton; one proton can associate with multiple electrons. Each face of the octahedron proton or neutron is a triangular hole with the axes as boundaries. The charged field of the electron crashes into the triangular hole, interacts with the axes, and comes out moving in the opposite direction.
Therefore, an electron's movements can be studied using a spherical coordinate system (Fig. 1). Assume that the Z axis is the charged axis, and X and Y are the uncharged ones.
When the electron is in the region where x > 0, y > 0 and z > 0, and its movement is not perfectly perpendicular to the octahedron faces, θ and φ give the direction and r the distance to the nucleus.
The movement in the three directions interacts with the nuclear octahedron axes x, y and z. The quantum number along the r direction, interacting with the octahedron face, is n; along the θ direction, interacting with z, it is l; and along φ, interacting with x and y, it is ml.
In the ground state, the n wave is the energy level, or main wave.
Along θ, an electron positions itself on the grid lines n-1, n-2, …, 0. This is the quantum number l.
Once the directions of n and l are decided, the high-probability interference nodes can be l, l-1, l-2, …, 0, -1, -2, …, -l. This is the quantum number ml.
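A minimal sketch (our illustration, using the standard quantum-number ranges quoted above) enumerates the ml nodes for each subshell; with two spins per node, the counts reproduce the p: 3*2, d: 5*2, f: 7*2 pattern of Figure 2.

# Enumerate the m_l nodes per subshell, as described above; two spin states
# per node give the occupancies quoted for the s, p, d and f clouds.
def ml_values(l: int):
    return list(range(-l, l + 1))

for name, l in [("s", 0), ("p", 1), ("d", 2), ("f", 3)]:
    n_nodes = len(ml_values(l))
    print(f"{name}: {n_nodes} nodes x 2 spins = {2 * n_nodes} electrons")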
2.7. Pauli Exclusion Principle
When the proton is octahedron-shaped, the electron mass waves interfere with any face except the opposite face. Assume that A(x, x) represents movement along an octahedron face O; since the movement is perpendicular to O:
A(x, x) = 0
Paired electrons on opposite sides of an octahedron face are A(x, y) and A(y, x). The movements are opposite:
A(x, y) = -A(y, x)
This reproduces the Pauli exclusion principle [13-17].
Even though the Pauli exclusion principle is usually stated for wavefunctions [18-30], the physical mass waves are used above to discuss the principle.
When the same wave appears on two octahedron faces that are not opposite one another, both belong to the same electron interaction.
2.8. Energy Levels
The energy of an electron is:
0.510998928 × 10^6 eV
When an electron collides with the nucleus, it interacts with the weak waves [1-3] in the nucleus, with an energy of:
(0.510998928 × 10^6)/(137.036 × 137.036) = 27.21138 eV
The electron loses half of the above energy, as the electron's charge energy is half of its total energy. The rest of the energy participates in the weak interaction:
27.21138/2 = 13.60569 eV
One electron, one proton (atomic number = 1), lowest energy:
E = -13.60569 eV
One electron, more protons (atomic number = Z), closest to the nucleus. More protons increase the weak wavelength [1-3] by a factor of Z:
2E = -(0.510998928 × 10^6)/((137.036/Z) × (137.036/Z)) eV
E = -Z^2 × 13.60569 eV
When an electron interacts with Z protons in the nucleus at shell n, the energy is: E = -(Z/n)^2 × 13.60569 eV
2.9. Hydrogen’s Binding Energy Study
Hydrogen's experimental electron binding energy is 13.59844 eV.
A proton's mass is 1836.15 times the electron's. When an electron collides with a proton, the momentum is 1/1836.15 of the electron momentum relative to the electron, and the absolute momentum is 1/1837.15. When the electron momentum is 1, the total momentum is (1 + 1/1837.15). The binding energy is related to the absolute momentum, and has a correction factor of 1/(1 + 1/1837.15).
The proton has a weak resonance binding wave of (5/4)/(137×137×2×3) [1]. This weak wave provides additional binding energy by a factor of (5/4)/(137×137×2×3).
13.60569*(1+(5/4)/(137*137*2*3))/(1+1/1837.15) = 13.59844 eV
The above calculated value, based on the electron collision model, matches the experimental result well.
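A one-line numerical check of the arithmetic above (our sketch; the formula and constants are taken verbatim from the text):

# Check the correction chain quoted above, using the paper's own numbers.
m_ratio = 1836.15                                     # proton/electron mass ratio
momentum_corr = 1.0 / (1.0 + 1.0 / (m_ratio + 1.0))   # = 1/(1 + 1/1837.15)
weak_corr = 1.0 + (5.0 / 4.0) / (137 * 137 * 2 * 3)   # resonance wave factor

print(f"{13.60569 * weak_corr * momentum_corr:.5f} eV")   # -> 13.59844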
2.10. Madelung Rules
The following electron atomic and molecular orbital figure summarizes the Madelung rules [31-35].
The main factor of the propensity is the main orbit number n.
When the electron hits the three axes, (2/3)^(1/2) of the straight force acts as a force on the axes. The distortions in all directions are the same due to resonance. Therefore, the propensity adds a factor of (2/3)^(1/2) times the distortion (l+1), where l is the azimuthal number.
The azimuthal propensity is:
n + (2/3)^(1/2) · (l+1) = n + (1.6233/2) · (l+1)
Table 1. Orbit Propensity
Orbit Propensity
1s 1.8165
2s 2.8165
2p 3.6233
3s 3.8165
3p 4.6233
4s 4.8165
3d 5.43495
4p 5.6233
5s 5.8165
4d 6.43495
5p 6.6233
6s 6.8165
4f 7.2466
5d 7.43495
6p 7.6233
7s 7.8165
5f 8.2466
6d 8.43495
7p 8.6233
8s 8.8165
5g 9.05825
6f 9.2466
In the above table, the increasing order of the propensity is consistent with the Madelung rules.
The propensity is inversely proportional to the mass wave strength of the orbit.
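The ordering of Table 1 can be reproduced in a few lines (our sketch; it uses the formula n + (2/3)^(1/2)(l+1) exactly as written above, so the numerical values differ from the table's printed digits in the last decimals, but the resulting order is the same):

import math

# Rank subshells by the propensity n + sqrt(2/3) * (l + 1) and print the
# resulting filling order, which matches the Madelung sequence of Table 1.
letters = "spdfg"
orbitals = [(n, l) for n in range(1, 9) for l in range(min(n, 5))]

ranked = sorted(orbitals, key=lambda nl: nl[0] + math.sqrt(2 / 3) * (nl[1] + 1))
print(" < ".join(f"{n}{letters[l]}" for n, l in ranked[:16]))
# 1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d < 5p < 6s < 4f < 5d < 6p < 7s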
3. Discussion
3.1. Weak Interactions
UFT introduces a new interpretation of the existing weak interaction of particle physics. The inner structure of the electron has resonance waves, known as weak interactive resonance waves. UFT gives structural meaning to the weak interaction. The electron structure predicted in UFT takes the form of a mass formula, the result of electron structural waves that add up to the prime number 137:
3*5*8 + 3*5 + 2 = 137
The shape-corrected value [3] of the above is 137.036.
The structural waves of the electron are present throughout the subatomic particles, as the electron structural wave is the fundamental wave of the particles [1]. The weak interaction factor is the product of the two weak waves' strengths (in units of the electron mass):
(1/137.036) × (1/137.036)
When an electron interacts with the proton, the interaction energy is:
(0.510998928 × 10^6/2)/(137.036 × 137.036)
= 13.60569 eV
3.2. Electron and Proton Collision
Even though this paper assumes that the proton has an octahedron shape, the main structural feature of the proton is its three axes plus a resonance wave of 2×3. If an electron were to reach the surface of the proton octahedron, it would need to double its energy. Therefore, the electron cannot reach the surface of the proton, and the collision happens only conceptually.
When an electron approaches a proton, its mass waves fail to bind with the proton. After interacting with the proton, it decays away from the proton. As the electron's mass waves are bounced back by the proton, the electron follows its mass waves' echo and moves backward.
UFT predicted that the proton has a resonance weak interactive wave for the 2×3 wave, (5/4)/(137×137×2×3) [1].
Hydrogen has one electron and one proton. The above interaction wave contributes to the binding energy of hydrogen by the factor (1 + (5/4)/(137×137×2×3)).
That is, the resonance weak interactive wave contributes to the binding energy of hydrogen as a factor, rather than as a simple addition.
3.3. Weak Waves and Multi-Proton Nuclei
Each proton in a nucleus has a unit charge that adds to the overall charge strength of the nucleus. As a weak charge wave, the weak energy formula stays the same: (1/137.036) × (1/137.036).
The unit strength is proportional to the total charge Z of the nucleus, i.e. Z/137.036.
When the two waves interact, the energy will be:
(Z/137.036) × (Z/137.036)
3.4. Weak Magnetic Field Zeeman Splitting
When the magnetic field is relatively weak, the factor of the magnetic shift takes one value at the lower nodes on the grid and another at the higher nodes. The quantum solution for the weak magnetic field is illustrated below.
Figure 3. Interference Grid and Zeeman Splitting.
When there is a low external magnetic field, the interference grids are distorted. The Zeeman splitting is mainly based on the electron's quantum location. Note that mirror locations on the shell and on the opposite side of the shell are equal, while the right-hand nodes represent the side that is reversed with respect to the magnetic field.
3.5. Strong Magnetic Field Zeeman Splitting
The electrons interact with weak waves; the smallest unit of the weak wave, (1/137.036)·(1/137.036), is spin ½. Quantum methodologies can be used with the help of the above weak-wave-to-spin mapping. Since the proton has three axes, the p cloud can have either spin 1/2, interacting with one axis, or spin 3/2, interacting with all three axes.
4. Methods
4.1. Three Walls Water Interference
a) A digital camcorder;
b) A five-to-ten-gallon plastic water bucket;
c) Half a gallon of water.
Pour half a gallon of water into the bucket. Hold the bucket with the left hand, put one corner of its bottom on flat, hard, stable ground, and tilt it 45 degrees.
Hold a digital camcorder with the right hand, ready to take pictures.
Shake the bucket by knocking it against the ground, and record video for a few seconds while making the knocking motions.
Select a frame with clear interference patterns.
Figure 4. Tri-wall Water Bucket Experiment.
The above picture provides valuable insights:
The interference pattern has a triangular shape, and the inner grids are diamond-shaped once the waves have stabilized.
When mass distortions [3] collide with the axes of the proton, the waves bounce back and form mass waves in the same way that water bounces off a wall.
The electron's interaction with the axes of the proton is similar to the water's interaction with the tri-wall bucket. The left graph of Fig. 4 represents a resonance point of the electronic mass wave with the proton's tri-axes, while each line of the interference pattern in Fig. 4 represents the resonance points of the waves. The intersections of three wave lines are the quantum nodes ("orbits") the electron has to follow. Once an electron associates with a node, it takes on the quantum numbers of that node.
4.2. Pauli Exclusion Principle
The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x> and the other in state |y>: |ψ> = Σx,y A(x,y) |x,y>.
The antisymmetry under exchange means that A(x,y) = −A(y,x). This implies that A(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x,y) is not a matrix but an antisymmetric rank-two tensor.
Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component A(x,y) = <x,y|ψ> is necessarily antisymmetric. To prove it, consider the amplitude for both particles to be in the superposition state |x> + |y>.
This is zero, because the two particles have zero probability of both being in the superposition state. But it is equal to A(x,x) + A(x,y) + A(y,x) + A(y,y).
The first and last terms on the right-hand side are diagonal elements and are zero, so the whole sum is zero. The wavefunction matrix elements therefore obey:
A(x,y) + A(y,x) = 0
5. Conclusions
Since the electrons bounce on octahedron-shaped protons, a heavy atom needs eight outermost s/p-cloud electrons to be stable. A proton has three internal structural axes. The mass wave of the electron interacts with the axes and forms triangular grids. Each quantum state of the electron is a node on the triangular grids. The weak interaction between the electron and the proton axes can explain Zeeman splitting.
1. Cao, Zhiliang, and Henry Gu Cao. "Unified Field Theory and the Configuration of Particles." International Journal of Physics 1.6 (2013): 151-161.
2. Cao, Zhiliang, and Henry Gu Cao. "Unified Field Theory and Topology of Nuclei." International Journal of Physics 2, no. 1 (2014): 15-22.
3. Zhiliang Cao, Henry Gu Cao. Unified Field Theory. American Journal of Modern Physics. Vol. 2, No. 6, 2013, pp. 292-298. doi: 10.11648/j.ajmp.20130206.14.
4. Cao, Zhiliang, and Henry Gu Cao. "Unified Field Theory and the Hierarchical Universe." International Journal of Physics 1.6 (2013): 162-170.
5. Cao, Zhiliang, and Henry Gu Cao. "Non-Scattering Photon Electron Interaction." Physics and Materials Chemistry 1, no. 2 (2013): 9-12.
6. Cao, Zhiliang, and Henry Gu Cao. "SR Equations without Constant One-Way Speed of Light." International Journal of Physics 1.5 (2013): 106-109.
7. Cao, Henry Gu, and Zhiliang Cao. "Drifting Clock and Lunar Cycle." International Journal of Physics 1.5 (2013): 121-127.
8. Cao, Zhiliang, and Henry Gu Cao. "Unified Field Theory and Foundation of Physics." International Journal of Physics 2, no. 5 (2014): 158-164.
9. Mehul Malik, Mohammad Mirhosseini, Martin P. J. Lavery, Jonathan Leach, Miles J. Padgett et al. Direct measurement of a 27-dimensional orbital-angular-momentum state vector. Nature Communications, 2014, 5, doi:10.1038/ncomms4115
10. H. T. Yuan, M. B. Saeed, K. Morimoto, H. Shimotani, K. Nomura, R. Arita, Ch. Kloc, N. Nagaosa, Y. Tokura, and Y. Iwasa. Zeeman-Type Spin Splitting Controlled with an External Electric Field. Nat. Phys. 2013, 9, 563–569.
11. A. Rahimi-Iman, C. Schneider, J. Fischer, S. Holzinger, M. Amthor, S. Höfling, S. Reitzenstein, L. Worschech, M. Kamp, and A. Forchel. "Zeeman splitting and diamagnetic shift of spatially confined quantum-well exciton polaritons in an external magnetic field." Phys. Rev. B 84, 165325 – 2011, October
12. D. Kekez, A. Ljubičić & B. A. Logan. An upper limit to violations of the Pauli exclusion principle. Nature 348, 224-224 doi:10.1038/348224a0 (1990)
13. Zoran Hadzibabic. Quantum gases: The cold reality of exclusion. Nature Physics 6, 643-644 doi:10.1038/nphys1770 (2010)
14. June Kinoshita. Roll Over, Wolfgang? Scientific American 258, 25-28 doi:10.1038/scientificamerican0688-25 (1988)
15. Tony Sudbery. Exclusion principle still intact. Nature 348, 193-194 doi:10.1038/348193a0 (1990)
16. R. C. Liu, B. Odom, Y. Yamamoto & S. Tarucha. Quantum interference in electron collision. Nature 391, 263-265 doi:10.1038/34611 (1998)
17. George Gamow. The Exclusion Principle. Scientific American 201, 74-86 doi:10.1038/scientificamerican0759-74 (1959)
18. B. Poirier, Chem. Phys. 370, 4 (2010).
19. A. Bouda, Int. J. Mod. Phys. A 18, 3347 (2003).
20. P. Holland, Ann. Phys. 315, 505 (2005).
21. P. Holland, Proc. R. Soc. London, Ser. A 461, 3659 (2005).
22. G. Parlant, Y.-C. Ou, K. Park, and B. Poirier, "Classical-like trajectory simulations for accurate computation of quantum reactive scattering probabilities," Comput. Theor. Chem. (in press).
23. D. Babyuk and R. E. Wyatt, J. Chem. Phys. 124, 214109 (2006).
24. Jeremy Schiff and Bill Poirier. Quantum mechanics without wavefunctions. THE JOURNAL OF CHEMICAL PHYSICS 136, 031102 (2012)
25. J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton, NJ, 1932).
26. D. Bohm, Phys. Rev. 85, 166 (1952).
27. P. R. Holland, The Quantum Theory of Motion (Cambridge University Press, Cambridge, England, 1993).
28. R. E. Wyatt, Quantum Dynamics with Trajectories: Introduction to Quantum Hydrodynamics (Springer, New York, 2005).
29. H. Everett III, Rev. Mod. Phys. 29, 454 (1957).
30. M. F. González, X. Giménez, J. González, and J. M. Bofill, J. Math. Chem. 43, 350 (2008).
31. Oganessian, Yu. T. et al. (2002). Results from the first 249Cf+48Ca experiment. JINR Communication (JINR, Dubna).
32. Nash, Clinton S. (2005). "Atomic and Molecular Properties of Elements 112, 114, and 118". Journal of Physical Chemistry A 109 (15): 3493–3500.
33. K. Umemoto, S. Saito. Electronic configurations of superheavy elements. Journal of the Physical Society of Japan, vol. 65, no. 10, 1996, pp. 3175-3179.
34. Hartmut M. Pilkuhn, Relativistic Quantum Mechanics, Springer Verlag, 2003.
35. E. Loza, V. Vaschenko. Madelung rule violation statistics and superheavy elements electron shell prediction.
Is quantum mechanics simpler than classical physics?
I want to make a few very fundamental comparisons between classical and quantum mechanics. I’ll be assuming a lot of background in this particular post to prevent it from getting uncontrollably long, but am planning on writing a series on quantum mechanics at some point.
Let's assume that the universe consists of N simple point particles (where N is an ungodly large number), each interacting with the others in complicated ways according to their relative positions. These positions are written as x1, x2, …, xN.
The classical description for this simple universe makes each position a function of time, and gives the following set of N equations of motion, one for each particle:
Fk(x1, x2, …, xN) = mk · d²xk/dt²
Each force function Fk will be a horribly messy nonlinear function of the positions of all the particles in the universe. These functions encode the details of all of the interactions taking place between the particles.
Analytically solving this equation is completely hopeless: it's a set of N separate equations, each one a highly nonlinear second-order differential equation. You couldn't solve any one of them on its own, and on top of that, they are tightly coupled together, making it impossible to solve any one without also solving all the others.
So if you thought that Newton’s equation F = ma was simple, think again!
Compare this to how quantum mechanics describes our universe. The state of the universe is described by a function Ψ(x1, x2, …, xN, t). This function changes over time according to the Schrödinger equation:
∂Ψ/∂t = -i·H[Ψ]
H is a differential operator that is a complicated function of all of the positions of all the particles in the universe. It encodes the information about particle interactions in the same way that the force functions did in classical mechanics.
I claim that Schrödinger's equation is infinitely easier to solve than Newton's. In fact, by the end of this post I will write out the exact solution for the wave function of the entire universe.
At first glance, you can notice a few features of the equation that make it look potentially simpler than the classical equation. For one, there’s only one single equation, instead of N entangled equations.
Also, the equation is only first order in time derivatives, while Newton's equation is second order. This is extremely important: the move from a second-order differential equation to a first-order one is a huge deal. For one thing, there's a simple general solution to every first-order linear differential equation, and nothing close for second-order ones.
Unfortunately, Schrödinger's equation, just like Newton's, looks hopelessly complicated, because of the presence of H. If we can't find a way to tame this immensely complex operator, then we're probably stuck.
But quantum mechanics hands us exactly what we need: two magical facts about the universe that allow us to break Schrödinger's equation into simple first-order linear differential equations, one for each allowed energy.
First: it guarantees us that there exists a set of functions φE(x1, x2, …, xN) such that:
H[φE] = E · φE
E is an ordinary real number, and its physical meaning is the energy of the entire universe. The set of values of E is the set of allowed energies for the universe. And the functions φE(x1, x2, …, xN) are the wave functions that correspond to each allowed energy.
Second: it tells us that no matter what complicated state our universe is in, we can express it as a weighted sum over these functions:
Ψ = ∑E aE · φE
With these two facts, we’re basically omniscient.
Since Ψ is a sum of all the different functions φE, if we want to know how Ψ changes with time, we can just see how each φE changes with time.
How does each φE change with time? We just use the Schrodinger equation:
∂φE/∂t = -i · H[φE]
= -iE · φE
And we end up with a first order linear differential equation. We can write down the solution right away:
φE(x1, x2, …, xN, t) = φE(x1, x2, …, xN) · e^(-iEt)
And just like that, we can write down the wave function of the entire universe:
Ψ(x1, x2, …, xN, t) = ∑E aE · φE(x1, x2, …, xN, t)
= ∑E aE · φE(x1, x2, …, xN) · e^(-iEt)
Hand me the initial conditions of the universe, and I can hand you back its exact and complete future according to quantum mechanics.
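Here's the whole recipe as a toy numerical sketch (mine, with a tiny made-up Hamiltonian standing in for the universe): diagonalize H, expand the initial state in the energy eigenstates, attach the phases, and compare against direct integration.

import numpy as np
from scipy.linalg import expm

# Toy model: a small random Hermitian "Hamiltonian of the universe".
rng = np.random.default_rng(0)
dim = 8
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# Magical fact 1: allowed energies E and states phi_E (columns of phi).
E, phi = np.linalg.eigh(H)

# Magical fact 2: expand the initial condition in those states.
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)
a = phi.conj().T @ psi0                    # the weights a_E

def psi(t: float) -> np.ndarray:
    """Psi(t) = sum over E of a_E * phi_E * exp(-iEt)."""
    return phi @ (a * np.exp(-1j * E * t))

# Cross-check against direct integration of dPsi/dt = -i H Psi.
t = 1.7
print(np.allclose(psi(t), expm(-1j * H * t) @ psi0))   # True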
Okay, I cheated a little bit. You might have guessed that writing out the exact wave function of the entire universe is not actually doable in a short blog post. The problem can’t be that simple.
But at the same time, everything I said above is actually true, and the final equation I presented really is the correct wave function of the universe. So if the problem must be more complex, where is the complexity hidden away?
The answer is that the complexity is hidden away in the first “magical fact” about allowed energy states.
H[φE] = E · φE
This eigenvalue equation is, in general, a second-order partial differential equation in an astronomical number of variables. If we actually wanted to expand Ψ in terms of the different functions φE, we'd have to solve this equation.
So there is no free lunch here. But what’s interesting is where the complexity moves when switching from classical mechanics to quantum mechanics.
In classical mechanics, virtually zero effort goes into formalizing the space of states, or talking about what configurations of the universe are allowable. All of the hardness of the problem of solving the laws of physics is packed into the dynamics. That is, it is easy to specify an initial condition of the universe. But describing how that initial condition evolves forward in time is virtually impossible.
By contrast, in quantum mechanics, solving the equation of motion is trivially easy. And all of the complexity has moved to defining the system. If somebody hands you the allowed energy levels and energy functions of the universe at a given moment of time, you can solve the future of the rest of the universe immediately. But actually finding the allowed energy levels and corresponding wave functions is virtually impossible.
Let’s get to the strangest (and my favorite) part of this.
If quantum mechanics is an accurate description of the world, then the following must be true:
Ψ(x1, x2, …, xN, 0) = ∑E aE · φE(x1, x2, …, xN)
This equation has two especially interesting features. First, each term in the sum can be broken down separately into a function of position and a function of time.
And second, the temporal component of each term is an imaginary exponential: a phase factor e^(-iEt).
Let me take a second to explain the significance of this.
In quantum mechanics, physical quantities are invariably found by taking the absolute square of complex quantities. This is why you can have a complex wave function and an equation of motion with an i in it, and still end up with a universe quite free of imaginary numbers.
But when you take the absolute square of e^(-iEt), you end up with e^(-iEt) · e^(iEt) = 1. What's important here is that the time dependence seems to fall away.
A way to see this is to notice that y = e^(-ix), when graphed, looks like a point on a unit circle in the complex plane.
So e^(-iEt), when graphed, is just a point repeatedly spinning around the unit circle. The larger E is, the faster it spins.
Taking the absolute square of a complex number is the same as finding its distance from the origin on the complex plane. And since e-iEt always stays on the unit circle, its absolute square is always 1.
So what this all means is that quantum mechanics tells us that there’s a sense in which our universe is remarkably static. The universe starts off as a superposition of a bunch of possible energy states, each with a particular weight. And it ends up as a sum over the same energy states, with weights of the exact same magnitude, just pointing different directions in the complex plane.
Imagine drawing the universe by drawing out all possible energy states in boxes, and shading these boxes according to how much amplitude is distributed in them. Now we advance time forward by one millisecond. What happens?
Absolutely nothing, according to quantum mechanics. The distribution of shading across the boxes stays exactly the same, because multiplying by a phase factor does not change the magnitude of the amplitude in each box.
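You can watch this non-happening in a few lines (a sketch with made-up energies and weights):

import numpy as np

E = np.array([0.5, 1.3, 2.7])            # some allowed energies (made up)
a = np.sqrt(np.array([0.6, 0.3, 0.1]))   # weights with |a|^2 = 0.6, 0.3, 0.1

for t in (0.0, 1.0, 1000.0):
    print(t, np.round(np.abs(a * np.exp(-1j * E * t)) ** 2, 12))
# The shading [0.6, 0.3, 0.1] is identical at every time.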
Given this, we are faced with a bizarre question: if quantum mechanics tells us that the universe is static in this particular way, then why do we see so much change and motion and excitement all around us?
I’ll stop here for you to puzzle over, but I’ve posted an answer here.
Sunday, 18 March
19:00 - Welcome reception
Monday, 19 March
8:30 - Welcome address
Welcome remarks from Prof. Peter Comba, Director of the IWH; Prof. Karlheinz Meier, for the Structures initiative; Prof. Carlo Ewerz, Scientific Coordinator of EMMI; and the organizers.
9:00 - Karlheinz Meier, "Physical Models of Brain Circuits - A non-Turing Approach to Computation"
The brain is a complex network of 10^11 nodes and 10^15 synaptic connections. It evolves in continuous interaction with the environment on timescales from milliseconds to years. Numerical simulations of this system provide some insights but are severely constrained by energy consumption and simulation times.
In 1982 Feynman postulated a method in which the number of computing elements required to simulate a large physical system is proportional to the space-time volume of the physical system. Similar to today's quantum emulators, neuromorphic systems follow this path by building physical models of brain circuits under user control rather than solving differential equations numerically.
Like their biological archetype, physical-model neuromorphic systems exhibit attractive features such as energy efficiency, fault tolerance and the ability to learn.
The talk will introduce this approach and present some recent results.
9:45 - John Martinis, "Quantum Computing at Google"
A key step in the roadmap to build a useful quantum computer will be to demonstrate its exponentially growing computing power. I will explain how a 7 by 7 array of superconducting xmon qubits with nearest-neighbor coupling, and with programmable single- and two-qubit gates with errors of about 0.2%, can execute a modest-depth quantum computation that fully entangles the 49 qubits. Sampling of the resulting output can be checked against a classical simulation to demonstrate proper operation of the quantum computer and to compare its system error rate with predictions. With a computation space of 2^49 = 5 x 10^14 states, the quantum computation can only be checked using the biggest supercomputers. I will show experimental data towards this demonstration from a 9-qubit adjustable-coupler "gmon" device, which implements the basic sampling algorithm of quantum supremacy for a computational (Hilbert) space of dimension about 500. We have begun testing of the quantum supremacy chip.
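For scale (our arithmetic, not the speaker's): simply storing one complex amplitude per basis state of a 49-qubit register already outgrows any single machine's memory.

n_qubits = 49
states = 2 ** n_qubits            # ~5.6e14 basis states, as quoted above
bytes_needed = states * 16        # one complex128 amplitude per state
print(f"{states:.3e} states, {bytes_needed / 1e15:.1f} PB of amplitudes")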
10:30 - Coffee break
11:00 - Giuseppe Carleo, "Machine Learning for Quantum Many-Body Physics"
In this talk I will present recent applications of machine-learning-based approaches to quantum physics. First, I will discuss how a systematic machine learning of the many-body wave function can be realized. This goal was achieved in [1] by introducing a variational representation of quantum states based on artificial neural networks. In conjunction with Monte Carlo schemes, this representation can be used to study both ground states and unitary dynamics, with controlled accuracy. Moreover, I will show how a similar representation can be used to perform efficient quantum state tomography on highly entangled states [2], previously inaccessible to state-of-the-art tomographic approaches.
[1] Carleo, and Troyer – Science 355, 602 (2017).
[2] Torlai, Mazzola, Carrasquilla, Troyer, Melko, and Carleo – Nature Physics (in press, 2018) arXiv:1703.05334.
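For readers unfamiliar with the ansatz of [1], here is a minimal sketch (our toy code with random, untrained parameters, not the authors' implementation) of how a restricted Boltzmann machine assigns an amplitude to a spin configuration; the hidden units are summed out analytically into a product of cosh factors.

import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4

# Complex variational parameters: visible biases a, hidden biases b, couplings W.
a = rng.normal(scale=0.1, size=n_visible) + 1j * rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden) + 1j * rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_hidden, n_visible)) \
    + 1j * rng.normal(scale=0.1, size=(n_hidden, n_visible))

def psi(s: np.ndarray) -> complex:
    """Unnormalized RBM amplitude for a spin configuration s in {-1,+1}^n."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

print(psi(np.array([1, -1, 1, 1, -1, 1])))

In the actual method these parameters are optimized variationally, with expectation values estimated by Monte Carlo sampling over configurations s.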
11:45 - Mihai Petrovici, "Spiking Neuron Ensembles and Probabilistic Inference"
The ability to perform probabilistic (Bayesian) inference is a hallmark of mammalian cognition and a coveted feature for embedded AI. Recent developments in machine learning have tried to capture this kind of computation with so-called “deep” architectures, but the analogy to biology remains superficial. I will discuss a framework for cognitive computation with spiking neurons that narrows the gap between biological and artificial deep networks, while employing well-documented aspects of cortical dynamics such as spike-based communication, operation in a high-conductance state, short-term plasticity and background-driven stochasticity. By design, these models lend themselves to neuromorphic implementation, which allows them to profit from the advantages offered by these new technologies.
Spiking Neuron Ensembles and Probabilistic Inference
12:30 - Lunch
14:00 - Martin Gärttner, "Neural Network Representation of a Near-critical Quantum Ising System out of Equilibrium"
A new method to describe the unitary evolution of interacting quantum many-body systems has been introduced recently, based on the representation of quantum states in terms of artificial neural networks (ANNs). Focusing on the spin-1/2 quantum Ising model with transverse and longitudinal fields after a quench near criticality, we study the prospects and limitations of this method. We compare our results to exact analytical results, to a semi-classical discrete truncated Wigner approach (dTWA), and to tDMRG simulations. We find that the dTWA gives good results only at short times or near zero transverse field. The ANN approach works well over a much wider range of parameters; only in regimes where long-range spin correlations build up does the long-time dynamics become unstable and deviate from the exact results. The ANN approach yields qualitatively correct results in regimes where the entanglement entropy in the long-time limit is extensive.
Presentation Slide: BDC2018_Martin_Gaerttner
14:45 - Xiaopeng Li, "Machine Learning Approaches to Entangled Quantum States"
Artificial neural networks play a prominent role in the rapidly growing field of machine learning and have recently been introduced to quantum many-body systems. This talk will focus on using a machine-learning model, the restricted Boltzmann machine (RBM), to describe entangled quantum states. Both short- and long-range-coupled RBMs will be discussed. For a short-range RBM, the associated quantum state satisfies an entanglement area law, regardless of spatial dimension. I will present our recently constructed exact RBM models for nontrivial topological phases, including a 1d cluster state and the 2d toric code. For a long-range RBM, the captured entanglement entropy scales linearly with the number of variational parameters in the RBM model, in sharp contrast to the log scaling in the matrix product state representation.
15:30 - Coffee break
16:00 - Simon Trebst, "Machine Learning Quantum Phases of Matter"
Machine learning techniques have become ubiquitous, if often hidden, helpmates in our daily life. This includes pattern recognition technologies that have long filtered the data in electronic mailboxes and have more recently become powerful enough to identify users by the touch of a button or the scan of a face.
In my talk, I will briefly review the algorithmic foundations of machine learning approaches and then turn to their application in the context of statistical physics problems. I will demonstrate that machine learning techniques are capable of discriminating phases of matter by extracting essential features from the many-body wavefunction or from the ensemble of correlators sampled, for instance, in Monte Carlo simulations. Of particular interest are quantum many-fermion problems that have long resisted a thorough numerical understanding: will we be able to guide our understanding of the emergence of superconductivity or topological order in such systems using machine learning?
Machine Learning Quantum Phases of Matter
16:45 - Christof Wunderlich, " Speeding-up the Decision Making of a Learning Agent Using an Ion Trap Quantum Processor"
We report a proof-of-principle experimental demonstration of the quantum speed-up for learning agents utilizing a small-scale quantum information processor based on radiofrequency-driven trapped ions [1]. The decision-making process of a quantum learning agent within the projective simulation paradigm for machine learning is implemented in a system of two qubits. The latter are realized using hyperfine states of two frequency-addressed atomic ions exposed to a static magnetic field gradient. We show that the deliberation time of this quantum learning agent is quadratically improved with respect to comparable classical learning agents. The performance of this quantum-enhanced learning agent highlights the potential of scalable quantum processors taking advantage of machine learning.
[1] Th. Sriarunothai et al., arXiv: 1709.01366 (2017).
19:00 - Symposium's dinner
Tuesday, 20 March
9:00 - Stephen Furber, "The SpiNNaker Project"
The SpiNNaker (Spiking Neural Network Architecture) project aims to produce a massively parallel computer capable of modelling large-scale neural networks in biological real time. The machine has been 18 years in conception and ten years in construction, and has so far delivered a 500,000-core machine in six 19-inch racks, now being expanded towards the million-core full system. Although primarily intended as a platform to support research into information processing in the brain, SpiNNaker has also proved useful for deep networks and similar applied big-data applications. In this talk I will present an overview of the machine and the design principles that went into its development, and I will indicate the sorts of applications for which it is proving useful.
Presentation Slide: BDC2018_Steve_Furber
9:45 - Rainer Blatt, "Quantum Computations and Quantum Simulations with Trapped Ions"
The quantum toolbox of the Innsbruck ion-trap quantum computer is applied to simulate the dynamics and to investigate the propagation of entanglement in a quantum many-body system represented by long chains of trapped-ion qubits [1]. With strings of up to 10 ions, a dynamical phase transition was recently observed [2] and an efficient procedure for the characterization of a quantum many-body system of up to 14 entangled ions has been implemented [3].
Moreover, using the quantum toolbox operations, universal (digital) quantum simulation was realized with a string of trapped ions [4]. Here we report the experimental demonstration of a digital quantum simulation of a lattice gauge theory, realizing (1+1)-dimensional quantum electrodynamics (the Schwinger model) on a few-qubit trapped-ion quantum computer [4]. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favor of exotic long-range interactions, which can be directly and efficiently implemented on an ion-trap architecture.
[1] P. Jurcevic et al., Nature 511, 202 (2014)
[2] P. Jurcevic et al., Phys. Rev. Lett. 119, 080501 (2017)
[3] B. P. Lanyon et al., Nature Physics 13, 1158 (2017)
[4] E. A. Martinez et al., Nature 534, 516 (2016)
Presentation Slides: BDC2018_Rainer_Blatt
10:30 - Coffee break
11:00 - Antonio Acin, "Detecting Entanglement and Non-Local Correlations of Many-Body Quantum States"
Quantum correlations are fundamental for quantum information protocols and for our understanding of many-body quantum physics. Detecting these correlations in many-body systems is challenging, because it requires estimating and processing an exponentially growing number of parameters. We present methods to alleviate this problem and discuss their application to physically relevant quantum states.
Presentation Slides: BDC2018_Antonio_Acin
11:45 - Sebastian Huber, "Automated Phase and Low Energy Detection"
Classifying phases of matter is key to our understanding of many problems in physics. For quantum-mechanical systems in particular, the task can be daunting due to the exponentially large Hilbert space. With modern computing power and access to ever-larger data sets, classification problems are now routinely solved using machine-learning techniques. Here, we propose a neural-network approach to finding phase transitions, based on the performance of a neural network after it is trained with data that are deliberately labelled incorrectly. We demonstrate the success of this method on the topological phase transition in the Kitaev chain, the thermal phase transition in the classical Ising model, and the many-body-localization transition in a disordered quantum spin chain. Our method does not depend on order parameters, knowledge of the topological content of the phases, or any other specifics of the transition at hand.
Presentation Slides: BDC2018_Sebastian_Huber
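A toy illustration of the "confusion" trick described above (our sketch, not the authors' code; the synthetic data, the change point at p = 0.5 and the logistic-regression classifier are all our own stand-ins):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic samples change character at p = 0.5. Training with labels split
# at a trial point p' gives a local accuracy peak when p' hits the true
# transition, plus high accuracy at the trivial endpoints: the W shape.
rng = np.random.default_rng(0)
params = rng.uniform(0, 1, size=400)                  # control parameter p
X = rng.normal(loc=(params > 0.5) * 2.0, size=400).reshape(-1, 1)

for p_trial in (0.1, 0.3, 0.5, 0.7, 0.9):
    y = (params > p_trial).astype(int)                # deliberately guessed labels
    acc = LogisticRegression().fit(X, y).score(X, y)
    print(f"p' = {p_trial:.1f}  accuracy = {acc:.2f}")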
12:30 - Lunch
14:00 - Martin Plenio, "Diamond Quantum Simulator Architectures"
In this talk I will present some ideas for quantum simulators that may be operated at room temperature.
Presentation slides: BDC2018_Martin_Plenio
14:45 - Iris Schwenk, "Quantum simulation without quantum error correction - on the way to applications in chemical and pharmaceutical industry" (Hot Topic)
Quantum simulation is a tool for investigating problems, e.g. in chemistry or condensed matter physics, that are not solvable analytically or on classical computers. However, the inevitability of perturbations constitutes a major roadblock to useful quantum simulations. Since we cannot predict the result of the simulation, it is difficult to estimate the effect of perturbations.
We show that in specific systems a measurement of additional correlators can be used to verify the reliability of the quantum simulation. The procedure requires only additional measurements on the quantum simulator itself. We also present a method which, in certain circumstances, allows the ideal result to be reconstructed from measurements on a perturbed quantum simulator.
To exploit near-term applications of quantum simulation, we have founded a company to develop quantum algorithms that predict material properties for chemical and pharmaceutical companies. We will discuss the various steps necessary to implement quantum-chemical problems on a quantum computer and the challenges involved in solving concrete customer problems. We intend our software to be hardware-agnostic and to work on conventional and state-of-the-art quantum computers.
15:15 - Nikolaj Zinner, "Quantum Spin Transistors in Superconducting Circuits" (Hot Topic)
Starting from ideas developed in the realm of strongly interacting cold atoms, I will show how to realize quantum spin transistors in small spin networks. Then I will outline how these could be realized on current superconducting platforms using transmons or flux qubits. Other neat applications of these principles to modular quantum computation include small quantum routers and quantum spin diodes.
15:45 - Christof Weitenberg, "Topology and Dynamics in Driven Hexagonal Lattices" (Hot Topic)
Ultracold atoms are a versatile system for studying the fascinating phenomena of gauge fields and topological band structures. By Floquet driving of optical lattices, the topology of the Bloch bands can be engineered. This poster presents experimental schemes for momentum-resolved Bloch-state tomography, which allow mapping out the Berry curvature and obtaining the Chern number. Furthermore, it discusses the dynamics of the wave function after a quench into the Floquet system. We observe the appearance of dynamical vortices, which trace out a closed contour whose topology can be directly mapped to the Chern number. Our measurements provide a new perspective on topology and dynamics and a unique starting point for studying interacting topological phases.
16:15 - Bus transfer to the Neuenheim Feld Campus
16:45 - Coffee and Lab tours
18:00 - Poster session
19:00 - BBQ Dinner
21:30 - Bus transfer to the old town
Wednesday, 21 March
9:00 - Lincoln Carr, "Complex Networks on Quantum States: from Quantum Phase Transitions to Emergent Dynamics of Quantum Cellular Automata"
Complex networks defined on quantum states via quantum mutual information turn out to give a surprising level of new insight into physical problems ranging from quantum critical phenomena to far-from-equilibrium quantum dynamics. Measures on such networks serve to rapidly and efficiently identify quantum critical points for workhorse many-body models studied in present quantum simulator experiments. A small modification of such models allows one to produce entangled quantum cellular automata. Complex-network-based averages and dynamics serve as key quantifiers of emergent complexity, along with localized robust dynamical structures and entropy fluctuations. They show that a new class of highly entangled yet highly structured quantum states arises out of dynamics just a short step away from present experimental protocols. They also identify a set of simple criteria, called Goldilocks rules, which consistently produce complexity independent of the details of the protocol.
9:45 - Jacob Biamonte, "Towards Optimality Results in the Alternating Operator Ansatz"
We present some recent and ongoing results related to optimal solutions of the quantum alternating operator ansatz (QAOA). This is the backbone of gate-model quantum deep learning networks based on generative Boltzmann machine models. Time permitting, we will present results about those as well.
10:30 - Coffee break
11:00 - Bettina Heim, "Leveraging the Power of Quantum for Machine Learning"
Quantum machine learning is an exciting new field emerging at the intersection of quantum computing and machine learning (ML). On one hand, advances in quantum algorithms provide new approaches to widely used classical routines that could improve both training of and sampling from ML models, by increasing accuracy and/or efficiency or by allowing for richer model classes. On the other hand, quantum computers are a natural fit for applying machine learning techniques to the study of quantum systems, where the ability to more easily process and model quantum states could give a significant edge over classical approximations.
The development of quantum computing devices with an increasingly large number of qubits encourages us to investigate the practicality of these algorithms and what progress one can hope for as quantum technology matures. In my talk I will give an overview of existing quantum machine learning algorithms, their potential, and their caveats.
11:45 - Christine Muschik, "Real-Time Dynamics of Lattice Gauge Theories with a Few-Qubit Quantum Computer"
Lattice gauge theories describe fundamental phenomena in nature, but calculating their real-time dynamics on classical computers is notoriously difficult. In a recent publication [Nature 534, 516 (2016)], we proposed and experimentally demonstrated a digital quantum simulation of 1+1-dimensional quantum electrodynamics (Schwinger model) on a few-qubit trapped-ion quantum computer. We are interested in the real-time evolution of the Schwinger mechanism, describing the instability of the bare vacuum due to quantum fluctuations, which manifests itself in the spontaneous creation of electron-positron pairs. To make efficient use of our quantum resources, we map the original problem to a spin model by eliminating the gauge fields in favour of exotic long-range interactions, which have a direct and efficient implementation on an ion trap architecture. Our work represents a first step towards quantum simulating high-energy theories with atomic physics experiments.
12:30 - Lunch
14:00 - Antonio Mezzacapo, "Dealing with imperfect quantum machines"
Near-term applications of quantum computers on current quantum devices are hindered by both control and decoherence issues. While we research a path to fully fault-tolerant devices, understanding how to deal with noise and imperfect control is of utmost importance. In this talk we will address the most promising applications of early-stage quantum computers, and how they can benefit from short-depth algorithms and error mitigation of quantum observables measured on real devices, without error-correcting the underlying quantum state.
14:45 - Christian Gross, "Novel Detection Possibilities with Quantum Gas Microscopes: From Hidden Correlations to Incommensurate Magnetism"
The rich physics of the Fermi-Hubbard model arises from the interplay of the charge and spin degrees of freedom. Here we highlight novel detection methods for these degrees of freedom using spin- and charge-resolved single-atom detection in ultracold lattice systems. Detection of the full local and global counting statistics allows us to analyze non-local correlation functions and to perform data post-selection, revealing hidden spin correlations and incommensurate magnetism in 1D chains.
Presentation Slide: BDC2018_Christian_Gross
15:30 - Coffee break
16:00 - Jacob Sherson, "Remote Connected Science, Hybrid Human-Machine Learning in Quantum Physics and Beyond"
Despite enabling impressive advances, the big-data driven deep learning paradigm has been challenged by AI scholars for not holding the potential to reach human scale intelligence. Instead, they propose studies of human psychology as a basis for hybrid human-machine intelligence. An open question for the future of research is therefore how to design interfaces that allow for an optimal interaction between human intuition, complex machinery, and increasingly powerful ML.
In the project, we have developed gamified interfaces allowing so far 250,000 players to contribute to research by providing insightful seeds for quantum optimization algorithms and remote access to our ultra-cold atoms experiment for amateur scientists, students, and researchers. Finally, I will discuss our effort to provide efficient, game-based heuristics for NP-hard computational problems related to spin glasses and ongoing efforts to demonstrate quantum supremacy using quantum annealing.
Presentation Slide: BDC2018_Jacob_Sherson
Shahnawaz Ahmed, "Learning constraints: A walk through the Deep Learning zoo with Sudoku and ellipsoids"
Many Deep Neural Network architectures and training tricks currently in use are poorly understood beyond a heuristic level. We discuss the learning of various types of constraints using several Deep Learning architectures. Specifically, we study the learning of the rules of Sudoku and various polynomial-function-based constraints. We design the network weights from scratch to understand how constraints are learned; the necessity of techniques such as skip connections and dropout becomes clear from this approach. We establish limits on the number of neurons in the first hidden layer, which we discuss in the context of learning the Ising model near criticality. The goal is learning symmetries, complex relations and constraints in the data rather than just obtaining a correct prediction. In science, Deep Learning can only be beneficial if we are able to extract learned relationships and constraints in the data rather than just make predictions using a black-box approach.
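As a toy illustration of the "design the weights from scratch" idea (our construction, not the authors' networks), a single hand-designed linear layer on a one-hot encoding suffices to flag violations of a 4x4 Sudoku row constraint:

```python
import numpy as np

N = 4  # 4x4 Sudoku row, digits 1..4

def one_hot_row(row):
    x = np.zeros((N, N))
    x[np.arange(N), np.array(row) - 1] = 1.0
    return x.flatten()           # length N*N input vector

# Hand-designed weights: output d counts how often digit d+1 occurs.
W = np.zeros((N, N * N))
for d in range(N):
    W[d, d::N] = 1.0             # pick out the one-hot slot of digit d

def violations(row):
    counts = W @ one_hot_row(row)
    # Penalty is zero iff each digit occurs exactly once.
    return np.sum(np.abs(counts - 1.0))

print(violations([1, 2, 3, 4]))  # 0.0 -> valid row
print(violations([1, 1, 3, 4]))  # 2.0 -> digit 1 repeated, digit 2 missing
```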
Andrey Bagrov, "Applications of the AdS/CFT-holography to many-body quantum systems"
Luca Bayha, "Anomalous breaking of scale invariance in a two dimensional Fermi gas"
Systems lacking an absolute scale in the Hamiltonian show the same behavior on all scales. An example of such a scale-invariant system is the classical Fermi gas in two dimensions with contact interactions. When adding a harmonic potential, the breathing-mode frequency is fixed by the scale invariance of the classical gas. On the quantum mechanical level, however, the scale invariance is broken by introducing the two-dimensional scattering length as a regulator. This quantum anomaly leads to a shift of the frequency of the breathing mode of the cloud. Here I present our experimental study of this frequency shift for a two-component Fermi gas in the strongly interacting regime. We observe a significant shift away from the scale-invariant result, depending on both interactions and temperature. A careful analysis of all the additional terms that may lead to explicit breaking of scale invariance is required to distinguish their effect from the effects caused by the anomaly.
Andreas Baumbach, "Computation with Spiking Neural Networks"
Biologically inspired networks of spiking neurons can be used to implement Boltzmann machines that can be trained to perform a multitude of tasks, including pattern completion and classification. We will present the implementation and configuration of these networks and propose ideas for linking these implementations to recent work on quantum many-body problems.
Stefanie Czischek, "Artificial Neural Network Representation of Spin Systems in a Quantum Critical Regime"
We use the newly developed artificial-neural-network (ANN) representation of quantum spin-1/2 states based on restricted Boltzmann machines to study the dynamical build-up of correlations after sudden quenches in the transverse-field Ising model (TFIM). We calculate correlation lengths and study their time evolution after sudden quenches into the vicinity of the quantum critical point. By comparison with exact numerical solutions we show that in the close vicinity of the quantum critical point, in the regime of large correlations, large network sizes are necessary to capture the exact dynamics. On the other hand, we show a high accuracy of the ANN representation in regimes with smaller correlations, even for small network sizes. By looking at the TFIM in an additional longitudinal field, we find the same behavior of the ANN representation by comparison with DMRG calculations for a not exactly solvable system, which suggests that the method can be used efficiently for more complex systems.
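The RBM ansatz referred to here has the standard Carleo–Troyer form; a minimal sketch of the unnormalized amplitude (random placeholder parameters, real weights for brevity, not the authors' trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 12           # 6 spins, hidden-unit density 2

a = 0.01 * rng.standard_normal(n_visible) + 0j   # visible biases
b = 0.01 * rng.standard_normal(n_hidden) + 0j    # hidden biases
W = 0.01 * rng.standard_normal((n_hidden, n_visible)) + 0j

def psi(s):
    """Unnormalized RBM amplitude of spin configuration s (entries +/-1)."""
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

s = rng.choice([-1.0, 1.0], size=n_visible)
print("amplitude:", psi(s))
```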
Martin Gärttner, "Spatially distributed multipartite entanglement enables Einstein-Podolsky-Rosen steering of atomic clouds"
A key resource for distributed quantum-enhanced protocols is entanglement between spatially separated modes. Yet, the robust generation and detection of nonlocal entanglement between spatially separated regions of an ultracold atomic system remains a challenge. Here, we use spin mixing in a tightly confined Bose-Einstein condensate to generate an entangled state of indistinguishable particles in a single spatial mode. We show experimentally that this local entanglement can be spatially distributed by self-similar expansion of the atomic cloud. Spatially resolved spin read-out is used to reveal a particularly strong form of quantum correlations known as Einstein-Podolsky-Rosen steering between distinct parts of the expanded cloud. Based on the strength of Einstein-Podolsky-Rosen steering we construct a witness, which testifies up to genuine five-partite entanglement.
Gabriel Andres Fonseca Guerra, "Testing Local Realism for Theories of the Brain"
The violation of local realism is perhaps a more striking feature of quantum mechanics than quantization itself. Bell experiments test whether such violations exist in a physical system and, if so, rule out a theory of local hidden variables which could explain the observed phenomena, favouring the quantum over the classical interpretation. On a rather different spatiotemporal scale resides the brain, a theoretical challenge for a unified theory. Here we explore the (non-)locality of brain states in a search for bounds on the quantum-like requirements for a theory of the brain.
Stephan Helmrich, "Self-organised critical states in driven-dissipative atomic gases"
The competition between driving, interactions and dissipation in physical systems can lead to the emergence of out-of-equilibrium phases as well as the extreme case of universal regimes. We prepare such phases of matter in driven-dissipative samples of ultracold atomic systems excited to strongly interacting Rydberg states. For weak driving we find that the system remains in an inactive state, while for strong driving the system self-organises into a critical state through subsequent cycles of facilitated excitation and decay events. We probe the nature of this self-organised critical state by analysing its scaling behaviour, temporal dynamics and susceptibility.
Luca Innocenti, "Supervised learning of time-independent Hamiltonians for gate design"
Recent years have seen a growing interest in the merging of the fields of classical machine learning and quantum physics. In particular, a number of machine learning techniques originally developed for big-data analysis have been fruitfully adapted to problems in quantum information science and many-body theory (Carleo and Troyer 2017, Torlai et al. 2017). While these works used standard neural network architectures, the same training techniques can be applied to substantially different architectures. This kind of neural-network-inspired optimisation has already been demonstrated in the context of quantum gate learning (Banchi et al., npj Quantum Information 2016). Building on that work, we generalise their method and, using automatic differentiation, make it possible to explore networks of 8 or more qubits, optimising over hundreds of possible pairwise interactions. We use this method to train qubit networks, finding sets of interactions that implement target quantum gates.
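A hedged toy version of this idea, reduced to a single qubit: fit the coefficients of a time-independent Hamiltonian so that exp(-iH) reproduces a target gate, here by finite-difference gradient descent. The talk's larger networks and optimizer differ; this only illustrates the principle.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]
target = (sx + sz) / np.sqrt(2)            # Hadamard, up to global phase

def infidelity(c):
    H = sum(ci * P for ci, P in zip(c, paulis))
    return 1.0 - abs(np.trace(target.conj().T @ expm(-1j * H))) / 2.0

rng = np.random.default_rng(1)
c = 0.1 * rng.standard_normal(3)           # random start avoids the c = 0 saddle
eps, lr = 1e-6, 0.2
for _ in range(3000):                      # finite-difference gradient descent
    grad = np.array([(infidelity(c + eps * e) - infidelity(c - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    c -= lr * grad
print("infidelity:", infidelity(c))        # should be close to 0 for this seed
```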
Miroslav Jezek, "Photonic simulation in quantum thermodynamics, quantum computing, and communication"
I briefly review our experimental work on conditional cooling of quantum channels [10.1038/srep16721] and qubit interaction enhancement [10.1103/PhysRevA.92.022341, 10.1038/srep32125]. Furthermore, I present recent results on quantum thermodynamics simulations employing photon statistics generation and manipulation [10.1038/s41598-017-13502-0, arXiv:1801.03063].
Selim Jochim, "The Heidelberg Quantum Architecture"
A novel quantum simulation experiment using ultracold atoms will be set up at Heidelberg University with parameters matched in such a way that its inputs and outputs can be interfaced with a neuromorphic hardware developed within the Human Brain project. Essential ideas of the approach being pursued will be presented.
Ralf Klemt, "Direct observation of correlations and entanglement in two-fermion quantum states"
Entanglement is a defining feature of quantum many-body states and the central resource for quantum communication and computing. However, in particular in itinerant systems, entanglement is notoriously difficult to characterize experimentally. A key role is therefore played by small but deterministically prepared systems which allow us to certify the emergence of correlations and entanglement in a direct and well-controlled fashion. In the work presented on this poster, we deterministically prepare strongly interacting quantum states of two fermionic atoms trapped in a double-well potential. We measure both the in-situ and momentum distribution with single atom resolution, enabling us to extract all relevant correlation functions. Strong correlations indicating high coherence in our system allow us to observe the emergence of entanglement in the mode as well as the particle degree of freedom, and to identify relevant witnesses, measures and protocols to characterize it.
Philipp Kunkel, "Spatially distributed multipartite entanglement enables Einstein-Podolsky-Rosen steering of atomic clouds"
A key resource for distributed quantum-enhanced protocols is entangled states between spatially separated modes. Here, we use spin mixing in a tightly confined BEC of 87Rb to generate a squeezed vacuum state in a single spatial mode. We show experimentally that the corresponding local entanglement can be spatially distributed by self-similar expansion of the atomic cloud in a waveguide potential. Spatially resolved spin read-out is used to reveal EPR steering between distinct parts of the expanded cloud. Building on the ability to partition the system arbitrarily, we show three-way steering. To quantify the connection between the strength of EPR steering and genuine multipartite entanglement we construct a witness, which reveals up to genuine five partite entanglement.
Daniel Linnemann, "Spatially distributed multipartite entanglement enables Einstein-Podolsky-Rosen steering of atomic clouds"
Markus Mittnenzweig, "Hybrid quantum-classical modeling for the simulation of quantum dot devices"
We introduce a new hybrid quantum-classical modeling approach for the simulation of electrically driven quantum dot devices by coupling the semi-classical drift-diffusion system with a quantum master equation in Lindblad form. Our approach enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic device geometries in a unified way. We prove that the hybrid model is consistent with fundamental axioms of (non-)equilibrium thermodynamics, in particular it guarantees the second law. The approach is demonstrated by numerical simulations of an electrically driven single-photon source in the stationary and transient operation regime.
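The quantum side of such a coupling is a Lindblad master equation; below is a generic sketch of its right-hand side, with a placeholder two-level "quantum dot" undergoing spontaneous emission (toy parameters, not the device model of the talk).

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    """d(rho)/dt = -i[H, rho] + sum_k ( L rho L^+ - 1/2 {L^+ L, rho} )."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Two-level system with spontaneous photon emission at rate gamma.
gamma, delta = 1.0, 0.5
H = 0.5 * delta * np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)     # lowering operator
rho = np.array([[0, 0], [0, 1]], dtype=complex)    # start in excited state

dt = 1e-3
for _ in range(5000):                              # simple Euler integration
    rho = rho + dt * lindblad_rhs(rho, H, [np.sqrt(gamma) * sm])
print("excited-state population:", rho[1, 1].real) # decays as exp(-gamma*t)
```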
Katja Mombaur, "How can model-based optimization and neural networks be most efficiently combined for motion control and prediction?"
Optimal control based on realistic physical models is a powerful method for motion prediction and control for humans, robots and assistive robotic technology. While this approach allows one to precisely describe all mechanical properties, including kinematic and dynamic limitations, the dynamics of technical actuators and muscles, and behavior rules, an important drawback is the high computation time. In previous research we have successfully combined optimal control and Bayesian machine learning methods by learning movement primitives from optimal control solutions that served as training data, allowing us to generate variable walking motions for complex humanoid robots. In current research, we are interested in exploring whether the results could be improved further by combining neural networks and optimal control methods, and how this could be done most efficiently, taking software and hardware aspects into account. Neural networks alone have so far not been able to solve such problems.
Puneet Murthy, "Exploring two-dimensional Fermi systems with ultracold atoms"
Johannes Otterbach, "Unsupervised Machine Learning on a Hybrid Quantum Computer"
Machine learning techniques have led to broad adoption of a statistical model of computing. The statistical distributions natively available on quantum processors are a superset of those available classically. Harnessing this attribute has the potential to accelerate or otherwise improve machine learning relative to classical performance. A key challenge toward that goal is learning to hybridize classical computing resources and traditional learning techniques with the emerging capabilities of general purpose quantum processors. Here, we demonstrate such hybridization by training a 19-qubit gate model processor to solve a clustering problem. We use the quantum approximate optimization algorithm in conjunction with a gradient-free Bayesian optimization to train the quantum machine. This quantum/classical hybrid algorithm shows robustness to realistic noise, and we find evidence that classical optimization can be used to train around both coherent and incoherent imperfections.
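A sketch of the classical preprocessing step such a pipeline requires: mapping clustering to weighted MaxCut, whose cost the QAOA run then optimizes. The data, weights, and the brute-force evaluation below are illustrative stand-ins for the quantum part.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (4, 2)),      # two synthetic blobs
                 rng.normal(2, 0.3, (4, 2))])
n = len(pts)

# Edge weight = pairwise distance; a max cut then separates distant points.
w = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def cut_value(bits):
    return sum(w[i, j] for i in range(n) for j in range(i + 1, n)
               if bits[i] != bits[j])

best = max(product([0, 1], repeat=n), key=cut_value)
print("assignment:", best)   # should split the two blobs
```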
Davide Pastorello, "Two-way quantum key distribution based on tripartite entanglement"
Entanglement is a well-known resource in quantum information. In particular, it can be exploited for quantum key distribution (QKD). We propose a two-way QKD scheme employing GHZ-type states of three qubits obtaining an extension of the standard E91 protocol with a significant increase in the number of shared bits. Eavesdropping attacks can be detected measuring violation of the CHSH inequality and the secret key rate can be estimated in a device-independent scenario.
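An illustrative consistency check (our reconstruction, not the paper's protocol details): measuring one qubit of a GHZ state in the X basis leaves a Bell pair whose CHSH value reaches 2*sqrt(2), the violation that the eavesdropping test monitors.

```python
import numpy as np

ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)

# Project qubit 3 (least significant) onto |+> and renormalize.
plus = np.array([1, 1]) / np.sqrt(2)
pair = ghz.reshape(4, 2) @ plus.conj()
pair /= np.linalg.norm(pair)            # -> (|00> + |11>)/sqrt(2)

def meas(theta):
    """Spin observable cos(theta) Z + sin(theta) X."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):
    O = np.kron(meas(a), meas(b))
    return np.real(pair.conj() @ O @ pair)

a0, a1, b0, b1 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
chsh = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print("CHSH:", chsh)                    # approx 2*sqrt(2) ~ 2.828
```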
Cartik Sharma, "Source quantization of EEG studies using quantum machine learning"
We propose mathematical formulations for localizing source activity in EEG data sets from PTSD patients using quantum machine learning. We explore several global-optimization techniques for source localization and select the Ising annealing model to localize source activity instantaneously. Calculations are performed on a D-Wave 2000 quantum computer and are rapid and efficient with high accuracy. We also adopt restricted Boltzmann machines based on adaptive filtering and parametric fitting to obtain optimal solutions. The data used for this exercise and proposal are EEG wave trains forming 4D (spatio-temporal) datasets. We compare source quantization with conventional classical techniques and find that the adaptive speed improvements lend themselves to clinical validity in diagnosing PTSD. Several iterations of continuous calculation allow for expedited results and neural recovery.
Tom Tetzlaff, "Deterministic networks for probabilistic computing"
Neuronal-network models of high-level brain function often rely on the presence of stochasticity. The majority of these models assumes that each neuron is equipped with its own private source of randomness, often in the form of uncorrelated external noise. In biological neuronal networks, the origin of this noise remains unclear. In hardware implementations, the number of noise sources is limited due to space and bandwidth constraints. Hence, neurons in large networks have to share noise sources. We show that the resulting shared-noise correlations can significantly impair the computational performance of stochastic neuronal networks, but that this problem is naturally overcome by generating noise with deterministic recurrent neuronal networks. By virtue of the decorrelating effect of inhibitory feedback, a network of a few hundred neurons can serve as a natural source of uncorrelated noise for large ensembles of functional networks, each comprising thousands of units.
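A quick numerical illustration of the effect described above, shared noise sources inducing pairwise correlations, with arbitrary toy numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_sources, T = 200, 10, 20000

sources = rng.standard_normal((n_sources, T))
mix = rng.choice(n_sources, size=n_units)      # each unit taps one shared source
shared = sources[mix] + 0.5 * rng.standard_normal((n_units, T))

corr = np.corrcoef(shared)
off_diag = corr[~np.eye(n_units, dtype=bool)]
# Pairs that share a source are strongly correlated; on average ~0.08 here.
print("mean pairwise correlation:", off_diag.mean())
```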
Tobias Thommes, "Design and Implementation of an EXTOLL network-interface for the communication-FPGA in the BrainScaleS neuromorphic computing system"
The Human Brain Project (HBP) aims to understand, by means of synthesis, how the inconceivably efficient system of the human brain works. The BrainScaleS system at the Kirchhoff-Institute for Physics in Heidelberg is part of the HBP and pursues this goal by developing a neuromorphic analog hardware system in combination with a conventional computing cluster. This poster summarizes the development of a new network interface for the FPGAs controlling the data communication between the neuromorphic hardware chips and the conventional digital system. The new interface will enable the BrainScaleS system to use the benefits of the EXTOLL network, a high-performance interconnect optimized for low latency and high message rates.
Yuuki Tokunaga, "Cavity/Circuit QED-based quantum computing"
Cavity/circuit QED is a promising system for realizing quantum information processing, because the deterministic atom-photon interaction can efficiently provide single-photon sources and atom-photon quantum gates. One of the difficulties in realizing scalable quantum computing is the trade-off between high-fidelity operation and the integration of many qubits. Cavity/circuit QED systems may solve this problem by deterministically connecting remote atomic systems through flying optical/microwave photons and by efficiently reproducing lossy photonic qubits. Here, I present two physical systems from my collaborative works. One is the nanofiber cavity QED system, a promising candidate for obtaining high cooperativity and cavity-array systems. The other is a circuit QED system that can generate entanglement between remote superconducting atoms by using a quantum gate between a superconducting atom and a propagating microwave photon [Phys. Rev. Applied 7, 064006 (2017)].
Sabine Tornow, "A Quantum Information Course for Computer Science Students"
A quantum information course is presented for students of computer science at a university of applied sciences in Germany. The lecture is built on Python simulation exercises covering different toy models, quantum algorithms and quantum error correction.
Artem Volosniev, "Using cold atoms to engineer a spin chain with perfect state transfer"
"How a quantum state (or information about it) can be transmitted" is the question that arises when one thinks about quantum computing. We study this problem in a one-dimensional gas of strongly-interacting cold atoms [1,2]. It is shown that these systems give one the opportunity to simulate inhomogeneous Heisenberg Hamiltonians, where the spin-spin interactions are determined by the shape of the trapping potential. Therefore, they can be used to engineer spin chains that enjoy perfect state transfer. We illustrate our findings using a simple yet non-trivial four-body system. [1] A. G. Volosniev, D. Petrosyan, M. Valiente, D. V. Fedorov, A. S. Jensen, and N. T. Zinner Phys. Rev. A 91 023620 (2015) [2] O. V. Marchukov, A. G. Volosniev, M. Valiente, D. Petrosyan, and N. T. Zinner Nature Commun. 7 13070 (2016)
Yibo Wang, "Q-Walker: a fully-programmable quantum dynamics simulator with Rydberg-dressed atoms"
The transport of energy, charge and information is of fundamental importance in nature and technology, ranging from (bio)physical processes to the operation of nano-electronic devices. Building on experimental advances with Rydberg atoms as controlled quantum many-body systems, we aim to establish a programmable quantum dynamics simulator, "Q-Walker", suited to studying quantum transport on complex networks. The key components include fully configurable arrays of individual atoms, tunable long-range interactions for mediating couplings, local control over system-reservoir interactions, and time-, space- and state-resolved readout. Q-Walker can be used to study quantum and classical transport in complex networks and to mimic the fundamental processes at play in biological networks such as light-harvesting complexes. We hope to answer questions concerning the role of quantum coherence in photosynthesis and to devise methods for classifying complex quantum networks.
Ralf Wessel, "Computing Vision with Neural Circuits Operating at Self-Organized Criticality"
The confusing thicket of malleable connections between billions of neurons renders the brain a complex adaptive system. The subjective experience of vision is thought to emerge from the impact of incoming spatiotemporal stimuli onto this pliable tangle of neuronal interactions. Yet, to date, a convincing computational framework for the processing of visual stimuli in neural circuits remains elusive. The construction of a solid understanding of cortical circuit dynamics is likely to provide a useful launch pad for ongoing and future investigations of cortical computation and sensory processing. I will present new insight into cortical circuit dynamics from the perspectives of recorded cortical population activity and membrane potential fluctuations, and from model investigations. Our results suggest that cortical circuits self-organize towards a balanced critical regime at which correlated variability is maintained at an intermediate level with advantages for neural computation.
Matthias Zimmermann, "Developing scientific applications with the Data Vortex network"
The Data Vortex (DV) network is used to communicate large amounts of data between different nodes of a parallel computer. We present benchmarks illustrating how DV excels at communicating random packets in situations where aggregation is difficult and for algorithms involving the fast Fourier transform. Next, we briefly describe a programming model for DV and draw comparisons with MPI. Finally, we summarize ongoing research projects: (i) inviscid incompressible fluid flow, (ii) the Schrödinger equation for few-body systems, and (iii) the simulation of ideal quantum computers capable of running general quantum circuits.
|
8207a5749b2d70de |
Quantum Mechanics for Nanostructures
RF Cafe Quiz #31
Note: Some quiz material includes passages quoted from the book.
This quiz is based on the information presented in the book "Quantum Mechanics for Nanostructures" by Vladimir V. Mitin, Dmitry I. Sementsov, and Nizami Z. Vagidov.
Published by Cambridge University Press.
Note: Some of these books are available as prizes in the monthly RF Cafe Giveaway.
1. When was the notion of "nanotechnology" introduced?
a) 1999
b) 1979
c) 1959
d) 1939
2. What determines whether a material is a nanostructure?
a) At least one dimension less than 100 nm
b) No dimension greater than 1 nm
c) Metric versus English units
d) There is no formal distinction between nano and macro
3. What is graphene?
a) An atomic lattice of carbon in the shape of a Cartesian graph grid
b) A single layer lattice of carbon atoms
c) A 3-D crystal lattice of carbon atoms
d) A carbon-based gas used for welding
4. When was graphene first produced?
a) 1974
b) 1984
c) 1994
d) 2004
5. What is the fundamental usefulness of the Schrödinger equation?
a) It is used to calculate the wavefunction of a system
b) It determines the relativistic mass of particles
c) It is used to calculate electron energy levels
d) It determines whether or not the cat is still alive
6. What is quantum tunnelling?
a) The ability of quanta to tunnel
b) A purely theoretical phenomenon of electrons & protons
c) Propagation of a proton in the region of a potential barrier
d) Propagation of an electron in the region of a potential barrier
7. Spin refers to which property of an electron?
a) Angular momentum
b) Whether it is pointing up or down
c) Rotational speed
d) The ability to recast information according to its bias
8. What does the image to the right represent? (image from Wikipedia)
a) A molecule of graphene
b) An atom of graphite
c) A carbon nanotube
d) A diamond nanotube
9. What is the basis of a "single electron" device?
a) Devices constructed from a single electron
b) Devices that absorb a single electron per hole
c) Devices based on the effect of tunneling of a single electron
d) Devices with a single electron in the orbital
10. In classical physics, what does a particle's total mechanical energy consist of?
a) Kinetic energy + potential energy
b) Kinetic energy * potential energy
c) Sqrt(Kinetic energy² + potential energy²)
d) Sqrt (Kinetic energy * potential energy)
Need some help? Click here for the answers and explanations.
|
be2f287e11a94a73 | Gyrokinetic theory of magnetic structures in high-β plasmas of the Earth’s magnetopause and of the slow solar wind
Dušan Jovanović Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Zemun), Serbia Olga Alexandrova Observatoire de Paris–Meudon, Laboratoire d’Etudes Spatiales et d’Instrumentation en Astrophysique (LESIA), Centre National de la Recherche Scientifique (CNRS), Meudon, France Milan Maksimović Observatoire de Paris–Meudon, Laboratoire d’Etudes Spatiales et d’Instrumentation en Astrophysique (LESIA), Centre National de la Recherche Scientifique (CNRS), Meudon, France Milivoj Belić Texas A&M University at Qatar, P.O. Box 23874 Doha, Qatar
July 6, 2019
Nonlinear effects of the trapping of resonant particles by the combined action of the electric field and the magnetic mirror force are studied using a gyrokinetic description that includes finite Larmor radius effects. A general nonlinear solution is found that is supported by the nonlinearity arising from the resonant particles, trapped by the combined action of the parallel electric field and the magnetic mirror force. Applying these results to space plasma conditions, we demonstrate that in the magnetosheath plasma, coherent nonlinear magnetic depressions may be created, associated with the nonlinear mirror mode and supported by the population of trapped ions forming a hump in the distribution function. These objects may appear either isolated or as a train of weakly correlated structures (a cnoidal wave). In the solar wind and in the Earth’s magnetopause, characterized by anisotropic electron and ion temperatures that are of the same order of magnitude, we find coherent magnetic holes of the same form that are attributed to the two branches of the nonlinear magnetosonic mode, the electron mirror and the field swelling mode, including also the kinetic Alfvén mode, and supported by the population of trapped electrons. The localized magnetic holes may have the form of a moving oblique slab or of an ellipsoid parallel to the magnetic field and strongly elongated along it, which propagates along the magnetic field and may be convected in the perpendicular direction by a plasma flow. While the ion mirror structures are purely compressional magnetic structures, featuring negligible magnetic torsion and electric field, the magnetosonic and kinetic Alfvén structures possess a finite electrostatic potential, magnetic compression, and magnetic torsion, but the ratio of the perpendicular and parallel magnetic fields remains small.
52.30.Gz, 52.35.Sb, 94.05.Fg, 94.05.Lk, 94.30.cj,
I Introduction
Coherent magnetic structures are ubiquitous in the space plasma of the solar system, where they have been observed over the full range of distances and latitudes relative to the Sun. They were detected in the solar wind Perrone et al. (2016); Winterhalter et al. (1994); Chisham et al. (2000), in the Earth’s magnetosphere, i.e. in the magnetotail Rae et al. (2007) and in the magnetopause Stasiewicz et al. (2003, 2001); Gershman et al. (2017), in the magnetospheres of Mars, Saturn, and Jupiter Bertucci et al. (2004); Tsurutani et al. (1982); Erdős and Balogh (1996), and also in the induced magnetospheres of Venus, Io, and comets Volwerk et al. (2008); Russell et al. (1999, 1987); Glassmeier et al. (1993). The Voyager mission detected magnetic structures in the heliosheath, beyond the heliospheric termination shock Avinash and Zank (2007). Magnetic structures mostly have the form of solitary magnetic depressions (holes) or trains of magnetic holes Stasiewicz et al. (2001). Solitary magnetic humps were detected less frequently, e.g. in the Earth and Jovian magnetosheaths Lucek et al. (1999); Joy et al. (2006), while trains of humps and combinations of humps and holes were observed in the Earth’s magnetosheath Burlaga et al. (2006). Magnetic structures often feature a large perturbation of the intensity of the magnetic field (10–50%, sometimes as large as 98% Stasiewicz et al. (2001)), and very little bending. They are pressure balanced, i.e. exhibit an anticorrelation between the magnetic and thermal pressures. Their perpendicular scale is several to several tens of proton gyroradii, but holes of several hundred gyroradii have also been detected Stasiewicz et al. (2001). Their pitch angle to the magnetic field is close to 90°, yielding an aspect ratio of 7–10.
In the sheath plasmas the thermal pressure exceeds the magnetic pressure and we often have β ≫ 1, where β is the ratio of thermal and magnetic pressures. The ion temperature is usually both anisotropic and much larger than the electron temperature, T_i ≫ T_e. Under such conditions, several linear modes are unstable. The thermal anisotropy in a high-β plasma drives both the ion mirror mode Hasegawa (1969); Southwood and Kivelson (1993), whose parallel phase speed is much smaller than the ion thermal speed, and the ion cyclotron mode Price et al. (1986); Southwood and Kivelson (1993). In the same range of phase speeds, the halo in the tail of the distribution function drives the halo instability Pokhotelov et al. (2005). Conversely, in the magnetopause the electron and ion temperatures are close to each other and the temperature anisotropy is usually not very large, but there exists a strong current that drives magnetic reconnection, contributing also to the creation of magnetic structures Stasiewicz et al. (2001). Moreover, the coexistent inhomogeneity of the pressure and magnetic field, via the Hall instability, destabilizes the kinetic Alfvén wave Duan and Li (2005) that propagates faster than the ion acoustic speed and slower than the electron thermal speed. The gradient and the anisotropy of the electron temperature, for certain combinations of the plasma β and of the anisotropy, can also excite the instabilities Basu and Coppi (1982) of the magnetosonic mode, whose parallel phase velocity lies between the parallel electron and ion thermal speeds. In the literature, the unstable fast magnetosonic mode is referred to as the field swelling mode and the unstable slow magnetosonic mode as the electron mirror instability; for details see e.g. Pokhotelov et al. (2003). Particularly important is the short wavelength limit of the ideal MHD fast magnetosonic mode, in which the spatial scale of disturbances is close to the ion Larmor radius or to the electron inertial length, and the Alfvén mode acquires an electric field parallel to the background magnetic field. Such a mode is usually referred to as the kinetic Alfvén mode. It often occurs in space physics where it is responsible for the acceleration and energization of particles as well as for the exchange of energy between waves and particles. Linear kinetic Alfvén waves are unstable in the presence of inhomogeneities of the density and magnetic field Duan and Li (2005) and of the electron temperature anisotropy Chen and Wu (2010).
Because of such richness of linear instabilities, that presumably saturate into the magnetic structures Stasiewicz et al. (2001); Bertucci et al. (2004); Tsurutani et al. (1982); Erdős and Balogh (1996); Volwerk et al. (2008); Russell et al. (1999, 1987); Glassmeier et al. (1993); Rae et al. (2007); Stasiewicz et al. (2003); Gershman et al. (2017); Winterhalter et al. (1994); Chisham et al. (2000); Avinash and Zank (2007); Lucek et al. (1999); Joy et al. (2006); Burlaga et al. (2006), the nature of the latter still remains elusive. The prevailing theory relates them with the nonlinear mirror mode, but other models have also been proposed, based on the magnetic reconnection Baumgärtel et al. (2003), magnetohydrodynamic (MHD) beam microinstabilities Vasquez and Hollweg (1999), Hall MHD of charge-exchange processes Avinash and Zank (2007), and on magnetosonic solitons Stasiewicz et al. (2001, 2003); Baumgärtel (1999).
The linear mirror mode in a spatially uniform, bi-Maxwellian (T_⊥ ≠ T_∥) plasma is weakly dispersive due to finite ion Larmor radius effects. Under magnetosheath conditions, when the electrons are cold and massless, the mirror mode is purely growing Southwood and Kivelson (1993) due to the resonant contribution of the particles with zero parallel velocity. However, 1-D particle simulations Qu et al. (2008); Pokhotelov et al. (2008) revealed that the saturation of the mirror instability produced humps rather than holes, if the linear drive was strong enough. In weakly unstable configurations periodic structures with moderate humps and holes were obtained, while under linearly stable conditions initially imposed holes persisted for very long times. Accordingly, mirror-mode humps are observed in the middle of the magnetosheath, while the holes are observed close to the magnetopause Génot et al. (2009), where the mirror mode is marginally stable. In most cases, trains of humps are created rather than isolated humps, and on a very long timescale these are inverted to become holes Califano et al. (2008). The saturation mechanism, for a strong drive, comes from the trapping of the resonant ions by the mirror force, producing vortices in the phase space, which actually dominates the mirror mode dynamics in the case of a weak drive Califano et al. (2008); Qu et al. (2008).
KdV-type magnetosonic solitons exist in the case of propagation at sufficiently large angles to the magnetic field Kawahara (1969); Ohsawa (1986). Conversely, for a quasiparallel propagation, envelope solitons become possible Baumgärtel (1999) that are essentially Alfvén wave packets modulated by zero-frequency acoustic perturbations and described by the derivative nonlinear Schrödinger equation (DNSE). However, both the KdV and the DNSE equations describe the dynamics of finite (but small!) amplitude perturbations of the compressional magnetic field that are strictly 1-D (slab) structures, unstable in the transverse direction. The soliton theory has been criticized Avinash and Zank (2007), because it requires a quasiparallel propagation, in a sharp disagreement with the observed large aspect ratios. Conversely, 1-D slow magnetosonic solitons propagate at close to to the magnetic field Stasiewicz et al. (2003). On the proton scale there may also exist electrostatically charged magnetic structures, whose self-organization comes from the nonlinear effects associated with trapped electrons and the magnetic hole or a hump is created by the current of the drift of trapped electrons Treumann and Baumjohann (2012). Such magnetized electron phase-space holes have been observed in the plasma sheath Sundberg et al. (2015) and in 2-D PIC simulations Haynes et al. (2015). Moreover, phase-space structures can be driven also by the grad-B and currents of ions that are trapped in a self-consistent magnetic bottle Jovanović and Shukla (2009).
Magnetic bubbles were observed by the Polar satellite Stasiewicz et al. (2001) in the high-latitude magnetopause boundary and in the presence of strong magnetopause currents (i.e. near a possible reconnection site). They featured strong depressions (up to 98%) of the ambient magnetic field and were filled with heated solar wind plasma and immersed in a broadband turbulent spectrum of kinetic Alfvén waves. Numerical simulations Stasiewicz et al. (2001) indicated that the bubbles could be produced by magnetic reconnection, with the accompanying kinetic Alfvén fluctuations coming from the Hall instability driven by the macroscopic gradients of pressure and magnetic field. A similar situation has been recently revisited by the NASA’s Magnetospheric Multiscale (MMS) mission Gershman et al. (2017) that enabled 3-D measurements of both the charged particles and the electromagnetic fields, with a sufficiently high resolution to resolve the ion kinetic scale (i.e. the scale of the ion Larmor radius). The MMS mission observed compressive fluctuations featuring anti-correlated perturbations of the electron density and the magnetic field magnitude, in the vicinity of a recent magnetic reconnection that produced a plasma jet flowing nearly anti-parallel to the background magnetic field with a speed , where and are the acoustic and the Alfvén speeds, respectively. These magnetic field fluctuations and bursts of electron phase space holes appeared together with the kinetic Alfvén wave in the locations of strong electron pressure gradients. The magnetic structure had the form of a kinetic Alfvén wave packet, propagating at the pitch angle to the ambient magnetic field, that exhibited spatial structure in the transverse direction, of the order of an ion gyroradius. The close examination of the electron velocity distribution function in the wave packet revealed that besides the isotropic thermal core and two suprathermal beams counterstreaming along the magnetic field, commonly observed in the magnetopause boundary layer, there existed also a population of trapped particles which accounted for of the density fluctuations and an increase in the electron temperature within the KAW. However, the latter was not indicative of heating but rather of a nonlinear capture process that may have provided the nonlinear saturation of Landau and transit-time damping. These electrons were trapped within adjacent wave peaks by the combined effects of the parallel electric field and the magnetic mirror force. Their distribution function unmistakably exhibited the loss-cone features, since it contained only the particle velocities with magnetic pitch angles near 90°. In the magnetic hole recorded by Gershman et al. (2017), the ratio of the minimum to maximum magnetic field magnitude was , and the resulting magnetic mirror force was sufficient to trap electrons with magnetic pitch angles between and .
In the present paper, we study the effects of particle trapping in a high- plasma with anisotropic temperature, using the Chew–Goldber–Law gyrokinetic theory and including the Dippolito–Davidson treatment of higher-order corrections Frieman et al. (1966); Davidson (1967); Dippolito and Davidson (1975). We derive the nonlinear equation for the compressional magnetic field, allowing also for a finite parallel electric field (the latter is short-circuited only when the electrons are cold), including also the convection of both particle species by the grad-B drift. In the stationary regime, the appropriate expressions for the energy, magnetic moment, and canonical momentum for both species are found, and used to construct their distribution functions Luque and Schamel (2005). In appropriate limits, our equations reduce to the nonlinear ion mirror Southwood and Kivelson (1993); Kuznetsov et al. (2007a); Jovanović and Shukla (2009), kinetic Alfvén Gershman et al. (2017), electron mirror-, and the field swelling modes Pokhotelov et al. (2003), as well as to the magnetized electrostatic electron and ion holes Treumann and Baumjohann (2012). We demonstrate that in the general case, all perturbations whose characteristic perpendicular scale exceeds the ion scales (i.e. the ion plasma length, the ion Larmor radius, or the ion acoustic radius) are described by the same generic nonlinear equation (62), which possesses two distinct coherent solutions in the form of a slab that is oblique to the magnetic field and propagates perpendicularly to it, or of a finite length filament (’cigar’) parallel to the magnetic and propagating along the latter. A propagating, infinitely long, oblique filament, i.e. a cylinder with ellipsoidal cross section, is also possible but its description requires the solution of a 2-D nonlinear equation and it has been left out from our present study. Our oblique slab is, actually, the limiting case of the well known periodic cnoidal wave solution Luque and Schamel (2005) that can fully reproduce the properties of the Ref. Gershman et al. (2017) structures. Conversely, our filaments are fundamentally different from the high- MHD (quasi)monopolar vortices, governed by the fluid convective nonlinearity, which are prohibited in the kinetic Alfvén regime Jovanović et al. (2017).
II Gyrokinetic description of perturbations in a warm plasma, somewhat bigger than the ion scale
In order to study the effects of the mirror force on plasma particles, we use the classical Chew–Goldberger–Low gyrokinetic theory, including the Dippolito–Davidson treatment of higher-order corrections Frieman et al. (1966); Davidson (1967); Dippolito and Davidson (1975). The latter is obtained by the integration of the Vlasov equation over the particles’ gyroangle, taking that the dynamics of the particles’ guiding centers is slow on the temporal scale of their cyclotron gyrations, that the dynamics of magnetic field lines belongs to the same slow temporal scale, and that their curvature is relatively small. It includes the terms of the zeroth and of the first order in the small parameter pertinent to the drift scaling and to the small corrections coming from the finite Larmor radius and from the displacement current, viz.
where ω and k_⊥ are the characteristic frequency and characteristic perpendicular wavenumber. The Dippolito–Davidson theory was developed under the less restrictive ordering of Eq. (1), which resulted in a rather complicated gyrokinetic equation [Eq. (1) of Ref. Davidson (1967)]. The latter is considerably simplified if we relax their ordering between the parallel and perpendicular wavenumbers as well as between the parallel and perpendicular electric fields. Here, in addition to the constraints of Eq. (1), we assume a weak z-dependence, an electric field that is mostly perpendicular to the magnetic field, and small perturbations of the density and of the magnetic field, viz.
and the gyrokinetic equation of Refs. Frieman et al. (1966); Davidson (1967); Dippolito and Davidson (1975) takes an elegant form that is accurate to leading order in , viz.
where is the particle velocity, while and are the magnitudes of its components parallel and perpendicular to the magnetic field, respectively. The guiding-centers’ distribution function is obtained by the integration of the particle distribution function for the gyroangle , defined as , where is a unit vector in the direction of the bi-normal of the magnetic field line, . Here is the drift velocity, while , , and are the kinetic counterparts of the grad-B, polarization, and parallel drift velocities. The parallel acceleration comes from the electric field and from the mirror force, while the perpendicular acceleration is equal to the divergence of the guiding center velocity, viz.
Here Ω is the gyrofrequency, Ω = qB/m. Velocities and accelerations given in Eqs. (3)-(5) have been calculated with accuracy to second order in the small parameter introduced in the ordering of Eq. (2), viz.
In Ref. Jovanović and Shukla (2009), a gyrokinetic equation has been derived that also permits large perturbations of the compressional magnetic field, if the curvature of the magnetic field lines is sufficiently small so that magnetic curvature and helicity can be neglected in the (small) terms coming from the ion polarization by grad-B drift. In other words, when the unperturbed magnetic field is oriented along the z-axis, viz. , the results of Ref. Jovanović and Shukla (2009) are applicable when , but . Note that the scaling of Eq. (6) does not set a strong constraint on the Larmor radius, since it gives
where appears to be of arbitrary order. However, although it is not self-evident, our gyrokinetic equation may be valid when . To demonstrate this, we deduce from Eq. (3) the corresponding hydrodynamic equations of continuity and parallel momentum, and compare them with the hydrodynamic equations that exist in the literature, whose domain of validity and accuracy with respect to the small parameters are known. Integrating the gyrokinetic equation (3) in velocity space with appropriate weight functions (1 and v_∥, for the continuity and parallel-momentum moments, respectively) and after some tedious but straightforward algebra, carefully keeping the leading terms in the small parameter, we arrive at
for the notations, see Eqs. (46) and (47). Equations (8) and (9) include the effects of particles’ gyromotion, through the convection by the grad-B drift and the acceleration by the mirror force, which makes them more general than the standard Strauss equations of reduced MHD Strauss (1976, 1977) in a moderately cold plasma, , from which the mirror force is absent. More accurate fluid calculations (see e.g. Ref. Jovanović et al. (2015)) also include finite Larmor radius corrections to the convective derivative, and the diamagnetic and grad-B contributions to the plasma polarization, viz.
where . Obviously, our moment equations (8) and (9) agree with the more accurate fluid equations (10) and (11) in the regime of small Larmor radius corrections, when both the density perturbations are sufficiently small and the nonlinear convection by grad-B drift in the polarization term can be neglected. The latter is possible not only when , which is realized in low-β plasmas, but also in 1-D slab and cylindrically symmetric geometries, in which the essentially 1-D shape of the structure suppresses all convective derivatives. In view of this, we conclude that the gyrokinetic equation (3) can be used with caution also in plasmas with large ratios of thermodynamic and magnetic pressures, to describe kinetic phenomena whose characteristic scales are somewhat bigger than the Larmor radius.
II.1 Integrals of motion (characteristics of the gyrokinetic equation)
We take that the unperturbed magnetic field is oriented along the z-axis, viz. , and seek a localized, stationary, 2-D solution of Eq. (3) that is travelling with the velocity , where is an arbitrary phase velocity. This implies that the solution depends only on the variables , , , , and . Then, using , the gyrokinetic equation (3) can be rewritten as
where, for a stationary solution, we set and keeping only the terms of the orders and , we also have
The characteristics of the above stationary, 3-D gyrokinetic equation are determined from
from which we can calculate explicitly only two integrals of motion, the energy and the magnetic moment, viz.
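The explicit forms of Eqs. (17) and (18) did not survive extraction. For orientation only, the standard lowest-order guiding-center invariants, of which the equations above are finite-Larmor-radius refinements, read (our textbook reconstruction, not the paper's exact expressions):

\[
\varepsilon \simeq \frac{m}{2}\left(v_\parallel^{2}+v_\perp^{2}\right)+q\phi ,
\qquad
\mu \simeq \frac{m v_\perp^{2}}{2B} .
\]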
Our expression (18) for the magnetic moment coincides with that derived by Davidson Davidson (1967) within the less restrictive gyro-drift scaling of Eq. (1), and also (in the appropriate limit) with the result of Jovanović and Shukla Jovanović and Shukla (2009) that permits also large perturbations of the compressional magnetic field. In a special case of a 2-D solution that is tilted relative to the axis by the small angle (where is the -component of the phase velocity), for which we have , we find one more conserved quantity, identified as the canonical momentum , viz.
Such a tilted solution depends on four variables, , , , and , and the conserved quantities (17)-(19) constitute a complete set. Thus, an arbitrary travelling-tilted 2-D distribution function can be expressed as a function of three variables , , and . As the last one contains the explicit spatial variable , a distribution function can feature a -dependence only if it is spatially dependent in the unperturbed state. It should be noted also that the above integrals of motion have been calculated with accuracy to first order in the small parameter , where in the expressions for the energy and the canonical momentum we neglected small terms of order , while the small variation of the magnetic moment is given by . In a strictly 1-D case, this gives , and we expect that it is of the same order also in 2-D and 3-D.
II.2 Free and trapped particles
The stationary state under study has been established in the distant past and thus the solution of the 2-D stationary gyrokinetic equation (3) is constant along its characteristics. From the conservation laws (17) and (18) we can relate the particle velocities at infinity with those at the phase-space location . These "initial velocities" are functions of the integrals of motion and, within the adopted accuracy, take the form
One should keep in mind that cold and massless electrons efficiently short-circuit the parallel electric field and that, as a consequence, the term may become very small. As the latter appears to be the leading term within the scalings (6) and (7), it is necessary that we retain also the next-order term in Eqs. (20) and (22) although, at first sight, it appears to be a small quantity of higher order.
We take that the electromagnetic field is localized, i.e. that the potentials and the compressional field vanish at infinity, when , where . In such a case, there exist two fundamentally different shapes of the characteristics, i.e. of the particle trajectories in phase space , determined by :
i) Open characteristics, stretching to an infinitely distant point in real space . Particles following open characteristics are labeled as free.
ii) Characteristics that close on themselves and are confined to a limited domain in phase space. Particles on such trajectories are trapped.
On open characteristics, the distribution function is equal to its asymptotic value at , which we adopt to be a Maxwellian with anisotropic temperature, viz.
Here n_0 is the unperturbed ion density, T_⊥ and T_∥ are the perpendicular and the parallel (to the magnetic field) ion temperatures, respectively, and v_T⊥ and v_T∥ are the corresponding thermal velocities. The "initial velocities" are given in Eqs. (21) and (20). Clearly, the initial parallel velocity of free particles must be a real quantity, which is realized when . In the simple case , yielding , we find that inside a local minimum of the magnetic field, , the velocities of free particles belong to the loss cone in velocity space .
Conversely, for , i.e. for particles whose parallel velocities are in the region , the corresponding "initial velocity" is a complex quantity. Such a result is unphysical and implies that these particles have never been at, and will never come to, an asymptotic location . In other words, these particles are trapped on their characteristics, which are closed curves in phase-space. As the particle trajectories do not cross, such closed characteristics occupy a region in phase-space that is inaccessible to free particles. This further implies that, in the distant past, the trapped particles have gone through some nonadiabatic process, during which time the potentials and the compressional magnetic field have been time-dependent, the term in the gyrokinetic equation (12) has been finite and their energy and magnetic moment have not been conserved. Over time, trapped particles perform a large number of bounces and we expect that the phase-averaging of the individual trajectories of trapped particles results in a shifted thermal distribution, with a parallel temperature , viz.
where such normalization has been adopted that the distribution functions are continuous at the branch point , determined by Eq. (21). It should be noted that the trapped particles are isolated from those that are free and that the parallel temperature of trapped particles may differ from that of free particles and can even be negative. This does not contradict the second law of thermodynamics, since trapped particles occupy only a limited phase-space volume within which their distribution function remains finite, irrespective of the sign of the temperature. As a consequence, the relevant integrals of the distribution function also remain finite.
Now we can calculate the necessary hydrodynamic quantities as the moments of the particle distribution function, performing the integration in velocity space with appropriate weight functions. It is instructive to separate nonresonant and resonant contributions in a specific moment , denoted by the superscripts "" and "", as follows
where denotes the principal value of an integral. In the above, the resonant distribution function is defined as and the weight function takes the values , , , and , in the expressions for the number density , the parallel hydrodynamic flow , the parallel pressure , and the perpendicular pressure , respectively. In the computation of nonresonant contributions, Eq. (25), we conveniently expand the free distribution function using the small quantity , which permits us to rewrite in the form
This enables a straightforward integration in Eq. (25), yielding
The parameter is the real part of the Fried-Conte plasma dispersion function (also called the Z-function):
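The definition itself was lost in extraction; the standard textbook form of the Fried–Conte function, together with the small- and large-argument asymptotics referred to in the next sentence, is (our reconstruction, not copied from the paper):

\[
Z(\zeta)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\frac{e^{-t^{2}}}{t-\zeta}\,\mathrm{d}t ,
\qquad \operatorname{Im}\zeta>0 ,
\]
\[
\operatorname{Re}Z(\zeta)\approx -2\zeta\left(1-\tfrac{2}{3}\zeta^{2}+\cdots\right)\ \ (|\zeta|\ll 1),
\qquad
\operatorname{Re}Z(\zeta)\approx -\frac{1}{\zeta}\left(1+\frac{1}{2\zeta^{2}}+\cdots\right)\ \ (|\zeta|\gg 1).
\]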
and it has simple asymptotic values for and for . Although the finite Larmor radius terms in Eqs. (28)-(31) are small within our scaling, viz. , we may need them later since they provide the dispersion of MHD-like modes, such as the field swelling, electron-, and ion-mirror modes. As already mentioned, FLR terms are not calculated accurately from the gyrokinetic Frieman et al. (1966); Davidson (1967); Dippolito and Davidson (1975) equation (3). Making a comparison with the solutions of fluid equations (10) and (11) [see also Eqs. (16) and (17) in Ref. Jovanović et al. (2015)], we note that an appropriate description of the grad-B, polarization, and weak FLR effects is obtained when in Eqs. (28)-(31) we implement the substitution , where the leading-order expression for is used, . We note also that the densities and parallel fluid velocities of nonresonant particles, Eqs. (28)-(31), have the same form as the linear solutions of the fluid equations (8)-(9). This implies that the adopted simple form (23) of the distribution function exists only when the sought-for coherent nonlinear structure possesses a geometry for which the nonlinearities due to convective derivatives vanish (e.g. 1-D slab or cylindrically symmetric geometries).
In Eq. (26), the integration is performed over the domain of trapped particles, viz. , inside which it is convenient to rewrite in the following way
which is further simplified setting
Using the above and comparing Eqs. (33) and (24), we note that in the domain of resonant parallel velocities, the trapped particles’ distribution and the even part of have identical forms, but with different parallel temperatures. Now we can easily write down the effective distribution in the resonant domain, viz.
where small terms of order have been neglected and we have conveniently separated even and odd parts.
Finally, performing the integrations in velocity space, we obtain the moments Eq. (26) in a closed form, as
where we used the notations
and erfc is the complementary error function, erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt. The function in Eqs. (36)-(39) behaves asymptotically as , with a full agreement for both and . We note that in Eq. (41) we can safely set , since in the regime the term represents a small FLR correction and it is negligible in the above setting. Conversely, for , the contribution of trapped particles can be neglected altogether, since from Eq. (40) it scales as .
II.3 Field equations
Now we can easily write down Poisson's equation and the components of Ampère's law parallel and perpendicular to the magnetic field, viz.
7d0660d4bc837e94 | Matrix mechanics
Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925.
Matrix mechanics was the first conceptually autonomous and logically consistent formulation of quantum mechanics. Its account of quantum jumps supplanted the Bohr Model's electron orbits. It did so by interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, as manifest in Dirac's bra–ket notation.
In some contrast to the wave formulation, it produces spectra of (mostly energy) operators by purely algebraic, ladder operator, methods.[1] Relying on these methods, Pauli derived the hydrogen atom spectrum in 1926,[2] before the development of wave mechanics.
Development of matrix mechanics
In 1925, Werner Heisenberg, Max Born, and Pascual Jordan formulated the matrix mechanics representation of quantum mechanics.
Epiphany at Helgoland
In 1925 Werner Heisenberg was working in Göttingen on the problem of calculating the spectral lines of hydrogen. By May 1925 he began trying to describe atomic systems by observables only. On June 7, to escape the effects of a bad attack of hay fever, Heisenberg left for the pollen-free North Sea island of Helgoland. While there, in between climbing and learning poems from Goethe's West-östlicher Diwan by heart, he continued to ponder the spectral issue and eventually realised that adopting non-commuting observables might solve the problem, and he later wrote[3]
"It was about three o' clock at night when the final result of the calculation lay before me. At first I was deeply shaken. I was so excited that I could not think of sleep. So I left the house and awaited the sunrise on the top of a rock."
The Three Fundamental Papers
After Heisenberg returned to Göttingen, he showed Wolfgang Pauli his calculations, commenting at one point:[4]
"Everything is still vague and unclear to me, but it seems as if the electrons will no more move on orbits."
On July 9 Heisenberg gave the paper with his calculations to Max Born, saying, "...he had written a crazy paper and did not dare to send it in for publication, and that Born should read it and advise him on it..." prior to publication. Heisenberg then departed for a while, leaving Born to analyse the paper.[5]
In the paper, Heisenberg formulated quantum theory without sharp electron orbits. Hendrik Kramers had earlier calculated the relative intensities of spectral lines in the Sommerfeld model by interpreting the Fourier coefficients of the orbits as intensities. But his answer, like all other calculations in the old quantum theory, was only correct for large orbits.
Heisenberg, after a collaboration with Kramers,[6] began to understand that the transition probabilities were not quite classical quantities, because the only frequencies that appear in the Fourier series should be the ones that are observed in quantum jumps, not the fictional ones that come from Fourier-analyzing sharp classical orbits. He replaced the classical Fourier series with a matrix of coefficients, a fuzzed-out quantum analog of the Fourier series. Classically, the Fourier coefficients give the intensity of the emitted radiation, so in quantum mechanics the magnitudes of the matrix elements of the position operator were interpreted as the intensity of radiation in the bright-line spectrum. The quantities in Heisenberg's formulation were the classical position and momentum, but now they were no longer sharply defined. Each quantity was represented by a collection of Fourier coefficients with two indices, corresponding to the initial and final states.[7]
When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices,[8] which he had learned from his study under Jakob Rosanes[9] at Breslau University. Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper.[10]
A follow-on paper was submitted for publication before the end of the year by all three authors.[11] (A brief review of Born's role in the development of the matrix mechanics formulation of quantum mechanics along with a discussion of the key formula involving the non-commutivity of the probability amplitudes can be found in an article by Jeremy Bernstein.[12] A detailed historical and technical account can be found in Mehra and Rechenberg's book The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926.[13])
• W. Heisenberg, Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen, Zeitschrift für Physik, 33, 879-893, 1925 (received July 29, 1925). [English translation in: B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968) ISBN 0-486-61881-1 (English title: Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations).]
• M. Born and P. Jordan, Zur Quantenmechanik, Zeitschrift für Physik, 34, 858-888, 1925 (received September 27, 1925). [English translation in: B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968) ISBN 0-486-61881-1 (English title: On Quantum Mechanics).]
• M. Born, W. Heisenberg, and P. Jordan, Zur Quantenmechanik II, Zeitschrift für Physik, 35, 557-615, 1926 (received November 16, 1925). [English translation in: B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968) ISBN 0-486-61881-1 (English title: On Quantum Mechanics II).]
Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912 and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as it did in the matrix formulation of quantum mechanics.[14]
Born, however, had learned matrix algebra from Rosanes, as already noted, but he had also learned Hilbert's theory of integral equations and quadratic forms in infinitely many variables, as was apparent from his citation of Hilbert's work Grundzüge einer allgemeinen Theorie der Linearen Integralgleichungen, published in 1912.[15][16]
Jordan, too, was well equipped for the task. For a number of years, he had been an assistant to Richard Courant at Göttingen in the preparation of Courant and David Hilbert's book Methoden der mathematischen Physik I, which was published in 1924.[17] This book, fortuitously, contained a great many of the mathematical tools necessary for the continued development of quantum mechanics.
In 1926, John von Neumann became assistant to David Hilbert, and he would coin the term Hilbert space to describe the algebra and analysis which were used in the development of quantum mechanics.[18][19]
Heisenberg's reasoning
Before matrix mechanics, the old quantum theory described the motion of a particle by a classical orbit, with well defined position and momentum X(t), P(t), with the restriction that the time integral over one period T of the momentum times the velocity must be a positive integer multiple of Planck's constant:
$$\oint P\, dX = \int_0^T P\,\frac{dX}{dt}\, dt = n h .$$
While this restriction correctly selects orbits with more or less the right energy values En, the old quantum mechanical formalism did not describe time dependent processes, such as the emission or absorption of radiation.
When a classical particle is weakly coupled to a radiation field, so that the radiative damping can be neglected, it will emit radiation in a pattern which repeats itself every orbital period. The frequencies which make up the outgoing wave are then integer multiples of the orbital frequency, and this is a reflection of the fact that X(t) is periodic, so that its Fourier representation has frequencies 2πn/T only:
$$X(t) = \sum_{n=-\infty}^{\infty} X_n\, e^{2\pi i n t / T} .$$
The coefficients Xn are complex numbers. The ones with negative frequencies must be the complex conjugates of the ones with positive frequencies, so that X(t) will always be real: $X_{-n} = X_n^{*}$.
A quantum mechanical particle, on the other hand, cannot emit radiation continuously; it can only emit photons. Assuming that the quantum particle started in orbit number n, emitted a photon, then ended up in orbit number m, the energy of the photon is En − Em, which means that its frequency is (En − Em)/h.
For large n and m, but with n − m relatively small, these are the classical frequencies, by Bohr's correspondence principle:
$$\frac{E_n - E_m}{h} \approx \frac{n - m}{T} .$$
In the formula above, T is the classical period of either orbit n or orbit m, since the difference between them is higher order in h. But for n and m small, or if n − m is large, the frequencies are not integer multiples of any single frequency.
Since the frequencies which the particle emits are the same as the frequencies in the Fourier description of its motion, this suggests that something in the time-dependent description of the particle is oscillating with frequency (En − Em)/h. Heisenberg called this quantity Xnm, and demanded that it should reduce to the classical Fourier coefficients in the classical limit. For large values of n, m but with n − m relatively small, Xnm is the (n − m)th Fourier coefficient of the classical motion at orbit n. Since Xnm has opposite frequency to Xmn, the condition that X is real becomes
$$X_{nm} = X_{mn}^{*} .$$
By definition, Xnm only has the frequency (En − Em)/h, so its time evolution is simple:
$$X_{nm}(t) = e^{2\pi i (E_n - E_m) t / h}\, X_{nm}(0) .$$
This is the original form of Heisenberg's equation of motion.
Given two arrays Xnm and Pnm describing two physical quantities, Heisenberg could form a new array of the same type by combining the terms $X_{nk}P_{km}$, which also oscillate with the right frequency. Since the Fourier coefficients of the product of two quantities are the convolution of the Fourier coefficients of each one separately, the correspondence with Fourier series allowed Heisenberg to deduce the rule by which the arrays should be multiplied:
$$(XP)_{nm} = \sum_{k} X_{nk} P_{km} .$$
Born pointed out that this is the law of matrix multiplication, so that the position, the momentum, the energy, all the observable quantities in the theory, are interpreted as matrices. Under this multiplication rule, the product depends on the order: XP is different from PX.
The X matrix is a complete description of the motion of a quantum mechanical particle. Because the frequencies in the quantum motion are not multiples of a common frequency, the matrix elements cannot be interpreted as the Fourier coefficients of a sharp classical trajectory. Nevertheless, as matrices, X(t) and P(t) satisfy the classical equations of motion; also see Ehrenfest's theorem, below.
Matrix basics
When it was introduced by Werner Heisenberg, Max Born and Pascual Jordan in 1925, matrix mechanics was not immediately accepted and was a source of controversy, at first. Schrödinger's later introduction of wave mechanics was greatly favored.
Part of the reason was that Heisenberg's formulation was in an odd mathematical language, for the time, while Schrödinger's formulation was based on familiar wave equations. But there was also a deeper sociological reason. Quantum mechanics had been developing by two paths, one under the direction of Einstein and the other under the direction of Bohr. Einstein emphasized wave–particle duality, while Bohr emphasized the discrete energy states and quantum jumps. De Broglie had shown how to reproduce the discrete energy states in Einstein's framework – the quantum condition is the standing wave condition, and this gave hope to those in the Einstein school that all the discrete aspects of quantum mechanics would be subsumed into a continuous wave mechanics.
Matrix mechanics, on the other hand, came from the Bohr school, which was concerned with discrete energy states and quantum jumps. Bohr's followers did not appreciate physical models which pictured electrons as waves, or as anything at all. They preferred to focus on the quantities which were directly connected to experiments.
In atomic physics, spectroscopy gave observational data on atomic transitions arising from the interactions of atoms with light quanta. The Bohr school required that only those quantities which were in principle measurable by spectroscopy should appear in the theory. These quantities include the energy levels and their intensities but they do not include the exact location of a particle in its Bohr orbit. It is very hard to imagine an experiment which could determine whether an electron in the ground state of a hydrogen atom is to the right or to the left of the nucleus. It was a deep conviction that such questions did not have an answer.
The matrix formulation was built on the premise that all physical observables are represented by matrices, whose elements are indexed by two different energy levels. The set of eigenvalues of the matrix was eventually understood to be the set of all possible values that the observable can have. Since Heisenberg's matrices are Hermitian, the eigenvalues are real.
If an observable is measured and the result is a certain eigenvalue, the corresponding eigenvector is the state of the system immediately after the measurement. The act of measurement in matrix mechanics 'collapses' the state of the system. If one measures two observables simultaneously, the state of the system collapses to a common eigenvector of the two observables. Since most matrices don't have any eigenvectors in common, most observables can never be measured precisely at the same time. This is the uncertainty principle.
If two matrices share their eigenvectors, they can be simultaneously diagonalized. In the basis where they are both diagonal, it is clear that their product does not depend on their order because multiplication of diagonal matrices is just multiplication of numbers. The uncertainty principle, by contrast, is an expression of the fact that two matrices A and B do not always commute, i.e., that AB − BA does not necessarily equal 0. The fundamental commutation relation of matrix mechanics,
$$XP - PX = i\hbar ,$$
implies then that there are no states which simultaneously have a definite position and momentum.
This principle of uncertainty holds for many other pairs of observables as well. For example, the energy does not commute with the position either, so it is impossible to precisely determine the position and energy of an electron in an atom.
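A concrete two-level illustration, not from the original article, may make this explicit. Take the Hermitian matrices
$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad AB - BA = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix} \neq 0 .$$
The eigenvectors of A are (1, 0) and (0, 1), while those of B are (1, ±1)/√2, so the two observables have no common eigenvector and cannot both be assigned sharp values in any state.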
Nobel Prize
In 1928, Albert Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics.[20] The announcement of the Nobel Prize in Physics for 1932 was delayed until November 1933.[21] It was at that time that it was announced Heisenberg had won the Prize for 1932 "for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen"[22] and Erwin Schrödinger and Paul Adrien Maurice Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory".[22]
One can rightly ask why Born was not awarded the Prize in 1932 along with Heisenberg, and Bernstein gives some speculations on this matter. One of them is related to Jordan joining the Nazi Party on May 1, 1933 and becoming a Storm Trooper.[23] Hence, Jordan's Party affiliations and Jordan's links to Born may have affected Born's chance at the Prize at that time. Bernstein also notes that when Born won the Prize in 1954, Jordan was still alive, and the Prize was awarded for the statistical interpretation of quantum mechanics, attributable to Born alone.[24]
Heisenberg's reactions to Born for Heisenberg receiving the Prize for 1932 and for Born receiving the Prize in 1954 are also instructive in evaluating whether Born should have shared the Prize with Heisenberg. On November 25, 1933 Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration – you, Jordan and I." Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside."[25]
In 1954, Heisenberg wrote an article honoring Max Planck for his insight in 1900. In the article, Heisenberg credited Born and Jordan for the final mathematical formulation of matrix mechanics and Heisenberg went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye."[26]
Mathematical development
Once Heisenberg introduced the matrices for X and P, he could find their matrix elements in special cases by guesswork, guided by the correspondence principle. Since the matrix elements are the quantum mechanical analogs of Fourier coefficients of the classical orbits, the simplest case is the harmonic oscillator, where the classical position and momentum, X(t) and P(t), are sinusoidal.
Harmonic oscillator
In units where the mass and frequency of the oscillator are equal to one (see nondimensionalization), the energy of the oscillator is
$$H = \frac{P^2 + X^2}{2} .$$
The level sets of H are the clockwise orbits, and they are nested circles in phase space. The classical orbit with energy E is
$$X(t) = \sqrt{2E}\,\cos t, \qquad P(t) = -\sqrt{2E}\,\sin t .$$
The old quantum condition dictates that the integral of P dX over an orbit, which is the area of the circle in phase space, must be an integer multiple of Planck's constant. The area of the circle of radius √(2E) is 2πE. So
$$2\pi E = n h = 2\pi n \hbar, \qquad E = n\hbar ,$$
or, in natural units where ħ = 1, the energy is an integer.
The Fourier components of X(t) and P(t) are simple, and more so if they are combined into the quantities
$$A = X + iP, \qquad A^{\dagger} = X - iP .$$
Both A and A† have only a single frequency, and X and P can be recovered from their sum and difference.
Since A(t) has a classical Fourier series with only the lowest frequency, and the matrix element Amn is the (m − n)th Fourier coefficient of the classical orbit, the matrix for A is nonzero only on the line just above the diagonal, where it is equal to √(2En). The matrix for A† is likewise only nonzero on the line below the diagonal, with the same elements.
Thus, from A and A†, reconstruction yields
$$X = \sqrt{\frac{\hbar}{2}} \begin{pmatrix} 0 & \sqrt{1} & 0 & \cdots \\ \sqrt{1} & 0 & \sqrt{2} & \cdots \\ 0 & \sqrt{2} & 0 & \cdots \\ \vdots & \vdots & \ddots & \ddots \end{pmatrix}, \qquad P = i\sqrt{\frac{\hbar}{2}} \begin{pmatrix} 0 & -\sqrt{1} & 0 & \cdots \\ \sqrt{1} & 0 & -\sqrt{2} & \cdots \\ 0 & \sqrt{2} & 0 & \cdots \\ \vdots & \vdots & \ddots & \ddots \end{pmatrix},$$
which, up to the choice of units, are the Heisenberg matrices for the harmonic oscillator. Note that both matrices are Hermitian, since they are constructed from the Fourier coefficients of real quantities.
Finding X(t) and P(t) is direct, since these are quantum Fourier coefficients and so evolve simply with time:
$$X_{nm}(t) = X_{nm}\, e^{i(E_n - E_m)t/\hbar}, \qquad P_{nm}(t) = P_{nm}\, e^{i(E_n - E_m)t/\hbar} .$$
The matrix product of X and P is not Hermitian, but has a real and an imaginary part. The real part is one half the symmetric expression XP + PX, while the imaginary part is proportional to the commutator
$$[X, P] = XP - PX .$$
It is simple to verify explicitly that XP − PX, in the case of the harmonic oscillator, is iħ, multiplied by the identity.
It is likewise simple to verify that the matrix
$$H = \frac{X^2 + P^2}{2}$$
is a diagonal matrix, with eigenvalues Ei.
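Both checks are easy to reproduce numerically. The following sketch, not part of the original article, builds truncated N×N Heisenberg matrices in units m = ω = ħ = 1; truncation corrupts the last diagonal entry of each check, an artifact that recedes as N grows.

```python
# Truncated Heisenberg matrices for the harmonic oscillator (m = omega = hbar = 1).
import numpy as np

N = 6
n = np.arange(1, N)
A = np.diag(np.sqrt(2 * n), k=1)   # matrix for A = X + iP, nonzero above the diagonal
X = (A + A.conj().T) / 2
P = (A - A.conj().T) / 2j

C = X @ P - P @ X                   # should be i times the identity
print(np.diag(C).imag)              # [1, 1, 1, 1, 1, -(N-1)]: exact except the corner
H = (X @ X + P @ P) / 2
print(np.diag(H).real)              # [0.5, 1.5, 2.5, 3.5, 4.5, ...]: E_n = n + 1/2,
                                    # with the last entry corrupted by truncation
print(np.allclose(H, np.diag(np.diag(H))))  # H is diagonal even after truncation
```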
Conservation of energy
The harmonic oscillator is an important case. Finding the matrices is easier than determining the general conditions from these special forms. For this reason, Heisenberg investigated the anharmonic oscillator, with Hamiltonian
$$H = \frac{P^2}{2} + \frac{X^2}{2} + \varepsilon X^3 .$$
In this case, the X and P matrices are no longer simple off-diagonal matrices, since the corresponding classical orbits are slightly squashed and displaced, so that they have Fourier coefficients at every classical frequency. To determine the matrix elements, Heisenberg required that the classical equations of motion be obeyed as matrix equations:
$$\frac{dX}{dt} = P, \qquad \frac{dP}{dt} = -X - 3\varepsilon X^2 .$$
He noticed that if this could be done, then H, considered as a matrix function of X and P, would have zero time derivative,
where A∗B is the anticommutator,
$$A * B = \frac{AB + BA}{2} .$$
Given that all the off-diagonal elements have a nonzero frequency, H being constant implies that H is diagonal. It was clear to Heisenberg that in this system, the energy could be exactly conserved in an arbitrary quantum system, a very encouraging sign.
The process of emission and absorption of photons seemed to demand that the conservation of energy will hold at best on average. If a wave containing exactly one photon passes over some atoms, and one of them absorbs it, that atom needs to tell the others that they can't absorb the photon anymore. But if the atoms are far apart, no signal can reach the other atoms in time, and they might end up absorbing the same photon anyway and dissipating the energy to the environment. When the signal reached them, the other atoms would have to somehow recall that energy. This paradox led Bohr, Kramers and Slater to abandon exact conservation of energy. Heisenberg's formalism, when extended to include the electromagnetic field, was obviously going to sidestep this problem, a hint that the interpretation of the theory will involve wavefunction collapse.
Differentiation trick — canonical commutation relations
Demanding that the classical equations of motion are preserved is not a strong enough condition to determine the matrix elements. Planck's constant does not appear in the classical equations, so that the matrices could be constructed for many different values of ħ and still satisfy the equations of motion, but with different energy levels.
So, in order to implement his program, Heisenberg needed to use the old quantum condition to fix the energy levels, then fill in the matrices with Fourier coefficients of the classical equations, then alter the matrix coefficients and the energy levels slightly to make sure the classical equations are satisfied. This is clearly not satisfactory. The old quantum conditions refer to the area enclosed by the sharp classical orbits, which do not exist in the new formalism.
The most important thing that Heisenberg discovered is how to translate the old quantum condition into a simple statement in matrix mechanics.
To do this, he investigated the action integral as a matrix quantity:
$$J_{mn} = \left( \oint P\, dX \right)_{mn} .$$
There are several problems with this integral, all stemming from the incompatibility of the matrix formalism with the old picture of orbits. Which period T should be used? Semiclassically, it should be either m or n, but the difference is order ħ, and an answer to order ħ is sought. The quantum condition tells us that Jmn is 2πn on the diagonal, so the fact that J is classically constant tells us that the off-diagonal elements are zero.
His crucial insight was to differentiate the quantum condition with respect to n. This idea only makes complete sense in the classical limit, where n is not an integer but the continuous action variable J, but Heisenberg performed analogous manipulations with matrices, where the intermediate expressions are sometimes discrete differences and sometimes derivatives.
In the following discussion, for the sake of clarity, the differentiation will be performed on the classical variables, and the transition to matrix mechanics will be done afterwards, guided by the correspondence principle.
In the classical setting, the derivative is the derivative with respect to J of the integral which defines J, so it is tautologically equal to 1:
$$1 = \frac{dJ}{dJ} = \frac{d}{dJ} \oint P\, \frac{dX}{dt}\, dt = \oint \left( \frac{dP}{dJ}\,\frac{dX}{dt} - \frac{dX}{dJ}\,\frac{dP}{dt} \right) dt ,$$
after an integration by parts,
where the derivatives dP/dJ and dX/dJ should be interpreted as differences with respect to J at corresponding times on nearby orbits, exactly what would be obtained if the Fourier coefficients of the orbital motion were differentiated. (These derivatives are symplectically orthogonal in phase space to the time derivatives dP/dt and dX/dt).
The final expression is clarified by introducing the variable canonically conjugate to J, which is called the angle variable θ: the derivative with respect to time is a derivative with respect to θ, up to a factor of 2π/T,
So the quantum condition integral is the average value over one cycle of the Poisson bracket of X and P.
An analogous differentiation of the Fourier series of P dX demonstrates that the off-diagonal elements of the Poisson bracket are all zero. The Poisson bracket of two canonically conjugate variables, such as X and P, is the constant value 1, so this integral really is the average value of 1; so it is 1, as we knew all along, because it is dJ/dJ after all. But Heisenberg, Born and Jordan, unlike Dirac, were not familiar with the theory of Poisson brackets, so, for them, the differentiation effectively evaluated {X, P} in J, θ coordinates.
The Poisson bracket, unlike the action integral, does have a simple translation to matrix mechanics: it normally corresponds to the imaginary part of the product of two variables, the commutator.
To see this, examine the (antisymmetrized) product of two matrices A and B in the correspondence limit, where the matrix elements are slowly varying functions of the index, keeping in mind that the answer is zero classically.
In the correspondence limit, when indices m, n are large and nearby, while k, r are small, the rate of change of the matrix elements in the diagonal direction is the matrix element of the J derivative of the corresponding classical quantity. So it is possible to shift any matrix element diagonally through the correspondence,
where the right hand side is really only the (m − n)th Fourier component of dA/dJ at the orbit near m to this semiclassical order, not a full well-defined matrix.
The semiclassical time derivative of a matrix element is obtained up to a factor of i by multiplying by the distance from the diagonal,
since the coefficient Am(m+k) is semiclassically the k'th Fourier coefficient of the m-th classical orbit.
The imaginary part of the product of A and B can be evaluated by shifting the matrix elements around so as to reproduce the classical answer, which is zero.
The leading nonzero residual is then given entirely by the shifting. Since all the matrix elements are at indices which have a small distance from the large index position (m,m), it helps to introduce two temporary notations: A[r,k] = A(m+r)(m+k) for the matrices, and (dA/dJ)[r] for the r'th Fourier components of classical quantities,
Flipping the summation variable in the first sum from r to r′ = k − r, the matrix element becomes,
and it is clear that the principal (classical) part cancels.
The leading quantum part, neglecting the higher order product of derivatives in the residual expression, is then
so that, finally,
which can be identified with i times the k-th classical Fourier component of the Poisson bracket.
Heisenberg's original differentiation trick was eventually extended to a full semiclassical derivation of the quantum condition, in collaboration with Born and Jordan. Once they were able to establish that
$$XP - PX = i\hbar ,$$
this condition replaced and extended the old quantization rule, allowing the matrix elements of P and X for an arbitrary system to be determined simply from the form of the Hamiltonian.
The new quantization rule was assumed to be universally true, even though the derivation from the old quantum theory required semiclassical reasoning. (A full quantum treatment, however, for more elaborate arguments of the brackets, was appreciated in the 1940s to amount to extending Poisson brackets to Moyal brackets.)
State vectors and the Heisenberg equation
To make the transition to standard quantum mechanics, the most important further addition was the quantum state vector, now written |ψ⟩, which is the vector that the matrices act on. Without the state vector, it is not clear which particular motion the Heisenberg matrices are describing, since they include all the motions somewhere.
The interpretation of the state vector, whose components are written ψm, was furnished by Born. This interpretation is statistical: the result of a measurement of the physical quantity corresponding to the matrix A is random, with an average value equal to
$$\langle A \rangle = \sum_{mn} \psi_m^{*} A_{mn} \psi_n .$$
Alternatively, and equivalently, the state vector gives the probability amplitude ψn for the quantum system to be in the energy state n.
Once the state vector was introduced, matrix mechanics could be rotated to any basis, where the H matrix need no longer be diagonal. The Heisenberg equation of motion in its original form states that Amn evolves in time like a Fourier component,
$$A_{mn}(t) = e^{i(E_m - E_n)t/\hbar}\, A_{mn}(0) ,$$
which can be recast in differential form
$$\frac{dA_{mn}}{dt} = \frac{i}{\hbar}\,(E_m - E_n)\, A_{mn} ,$$
and it can be restated so that it is true in an arbitrary basis, by noting that the H matrix is diagonal with diagonal values Em:
$$\frac{dA}{dt} = \frac{i}{\hbar}\,(HA - AH) .$$
This is now a matrix equation, so it holds in any basis. This is the modern form of the Heisenberg equation of motion.
Its formal solution is:
$$A(t) = e^{iHt/\hbar}\, A(0)\, e^{-iHt/\hbar} .$$
All these forms of the equation of motion above say the same thing, that A(t) is equivalent to A(0) through a basis rotation by the unitary matrix e^{iHt/ħ}, a systematic picture elucidated by Dirac in his bra–ket notation.
Conversely, by rotating the basis for the state vector at each time by e^{iHt/ħ}, the time dependence in the matrices can be undone. The matrices are now time independent, but the state vector rotates:
$$|\psi(t)\rangle = e^{-iHt/\hbar}\, |\psi(0)\rangle .$$
This is the Schrödinger equation for the state vector, and this time-dependent change of basis amounts to transformation to the Schrödinger picture, with ⟨x|ψ⟩ = ψ(x).
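Written out side by side (a standard summary, not part of the original text), the two pictures are related by
$$A_H(t) = e^{iHt/\hbar}\, A_S\, e^{-iHt/\hbar}, \qquad |\psi_S(t)\rangle = e^{-iHt/\hbar}\,|\psi_H\rangle ,$$
so every expectation value agrees:
$$\langle \psi_S(t)|\, A_S\, |\psi_S(t)\rangle = \langle \psi_H |\, e^{iHt/\hbar} A_S\, e^{-iHt/\hbar} \,| \psi_H \rangle = \langle \psi_H |\, A_H(t)\, | \psi_H \rangle .$$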
In quantum mechanics in the Heisenberg picture the state vector |ψ⟩ does not change with time, while an observable A satisfies the Heisenberg equation of motion,
$$\frac{dA}{dt} = \frac{i}{\hbar}\,[H, A] + \frac{\partial A}{\partial t} .$$
The extra term accounts for operators with an explicit time dependence of their own, in addition to the time dependence from the unitary evolution discussed.
The Heisenberg picture does not distinguish time from space, so it is better suited to relativistic theories than the Schrödinger equation. Moreover, the similarity to classical physics is more manifest: the Hamiltonian equations of motion for classical mechanics are recovered by replacing the commutator above by the Poisson bracket (see also below). By the Stone–von Neumann theorem, the Heisenberg picture and the Schrödinger picture must be unitarily equivalent, as detailed below.
Further results
Matrix mechanics rapidly developed into modern quantum mechanics, and gave interesting physical results on the spectra of atoms.
Wave mechanics
Jordan noted that the commutation relations ensure that P acts as a differential operator.
The operator identity
$$[A, BC] = [A, B]\, C + B\, [A, C]$$
allows the evaluation of the commutator of P with any power of X, and it implies that
$$[P, X^n] = -i\hbar\, n X^{n-1} ,$$
which, together with linearity, implies that a P-commutator effectively differentiates any analytic matrix function of X.
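As a quick check of this differentiation property (our worked example, not the article's):
$$[P, X^2] = -2i\hbar\, X, \qquad [P, e^{ikX}] = -i\hbar\,(ik)\, e^{ikX} = \hbar k\, e^{ikX} ,$$
exactly what the substitution P → −iħ d/dX gives when acting on X² and e^{ikX}.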
Assuming limits are defined sensibly, this extends to arbitrary functions, but the extension need not be made explicit until a certain degree of mathematical rigor is required:
$$[P, f(X)] = -i\hbar\, f'(X) .$$
Since X is a Hermitian matrix, it should be diagonalizable, and it will be clear from the eventual form of P that every real number can be an eigenvalue. This makes some of the mathematics subtle, since there is a separate eigenvector for every point in space.
In the basis where X is diagonal, an arbitrary state can be written as a superposition of states with eigenvalues x,
$$|\psi\rangle = \int \psi(x)\, |x\rangle\, dx ,$$
so that ψ(x) = ⟨x|ψ⟩, and the operator X multiplies each eigenvector by x,
$$X\, |x\rangle = x\, |x\rangle .$$
Define a linear operator D which differentiates ψ,
$$(D\psi)(x) = \frac{d\psi}{dx} ,$$
and note that
$$(DX - XD)\,\psi = \frac{d}{dx}\,(x\psi) - x\,\frac{d\psi}{dx} = \psi ,$$
so that the operator −iD obeys the same commutation relation as P. Thus, the difference between P and −iD must commute with X,
$$[\,P + iD,\, X\,] = 0 ,$$
so it may be simultaneously diagonalized with X: its value acting on any eigenstate of X is some function f of the eigenvalue x.
This function must be real, because both P and −iD are Hermitian:
$$P = -iD + f(X) .$$
Rotating each state by a phase φ(x), with φ′(x) = f(x), that is, redefining the phase of the wavefunction
$$\psi(x) \to e^{i\phi(x)}\,\psi(x) ,$$
redefines the operator −iD by exactly the amount f(x), which means that, in the rotated basis, P is equal to −iD.
Hence, there is always a basis for the eigenvalues of X where the action of P on any wavefunction is known:
$$(P\psi)(x) = -i\,\frac{d\psi}{dx} ,$$
and the Hamiltonian in this basis is a linear differential operator on the state-vector components,
Thus, the equation of motion for the state vector is but a celebrated differential equation, the Schrödinger equation:
$$i\,\frac{\partial \psi}{\partial t}(x, t) = H\!\left(x,\, -i\,\frac{\partial}{\partial x}\right) \psi(x, t) .$$
Since D is a differential operator, in order for it to be sensibly defined, there must be eigenvalues of X which neighbor every given value. This suggests that the only possibility is that the space of all eigenvalues of X is all real numbers, and that P is −iD, up to a phase rotation.
To make this rigorous requires a sensible discussion of the limiting space of functions, and in this space this is the Stone–von Neumann theorem: any operators X and P which obey the commutation relations can be made to act on a space of wavefunctions, with P a derivative operator. This implies that a Schrödinger picture is always available.
Matrix mechanics easily extends to many degrees of freedom in a natural way. Each degree of freedom has a separate X operator and a separate effective differential operator P, and the wavefunction is a function of all the possible eigenvalues of the independent commuting X variables.
In particular, this means that a system of N interacting particles in 3 dimensions is described by one vector whose components in a basis where all the X are diagonal is a mathematical function of 3N-dimensional space describing all their possible positions, effectively a much bigger collection of values than the mere collection of N three-dimensional wavefunctions in one physical space. Schrödinger came to the same conclusion independently, and eventually proved the equivalence of his own formalism to Heisenberg's.
Since the wavefunction is a property of the whole system, not of any one part, the description in quantum mechanics is not entirely local. The description of several quantum particles has them correlated, or entangled. This entanglement leads to strange correlations between distant particles which violate the classical Bell's inequality.
Even if the particles can only be in just two positions, the wavefunction for N particles requires 2N complex numbers, one for each total configuration of positions. This is exponentially many numbers in N, so simulating quantum mechanics on a computer requires exponential resources. Conversely, this suggests that it might be possible to find quantum systems of size N which physically compute the answers to problems which classically require 2N bits to solve. This is the aspiration behind quantum computing.
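A tiny sketch of this exponential growth (our illustration, with hypothetical variable names): the state of N two-position particles is a vector of 2^N amplitudes, built here as an iterated tensor (Kronecker) product.

```python
# State-vector size for N particles with two positions each grows as 2^N.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of the two positions

psi = np.array([1.0])
N = 10
for _ in range(N):
    psi = np.kron(psi, plus)              # tensor product adds one particle

print(len(psi))                                 # 1024 = 2**10 amplitudes
print(np.isclose(np.sum(np.abs(psi)**2), 1.0))  # still normalized
```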
Ehrenfest theorem
For the time-independent operators X and P, ∂A/∂t = 0, so the Heisenberg equation above reduces to:[27]
$$\frac{dA}{dt} = \frac{i}{\hbar}\,[H, A] ,$$
where the square brackets [ , ] denote the commutator. For a Hamiltonian of the standard form $H = \frac{P^2}{2m} + V(X)$, the X and P operators satisfy:
$$\frac{dX}{dt} = \frac{P}{m}, \qquad \frac{dP}{dt} = -V'(X) ,$$
where the first is classically the velocity, and the second is classically the force, or potential gradient. These reproduce Hamilton's form of Newton's laws of motion. In the Heisenberg picture, the X and P operators satisfy the classical equations of motion. Taking the expectation value of both sides of the equation shows that, in any state |ψ⟩:
$$\frac{d}{dt}\langle X \rangle = \frac{\langle P \rangle}{m}, \qquad \frac{d}{dt}\langle P \rangle = -\langle V'(X) \rangle .$$
So Newton's laws are exactly obeyed by the expected values of the operators in any given state. This is Ehrenfest's theorem, which is an obvious corollary of the Heisenberg equations of motion, but is less trivial in the Schrödinger picture, where Ehrenfest discovered it.
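A standard worked case (ours, not the article's): for the harmonic oscillator $H = \frac{P^2}{2m} + \frac{m\omega^2}{2}X^2$ the force is linear, so the Ehrenfest equations close on the expectation values themselves,
$$\frac{d\langle X\rangle}{dt} = \frac{\langle P\rangle}{m}, \qquad \frac{d\langle P\rangle}{dt} = -m\omega^2\, \langle X\rangle ,$$
and ⟨X⟩(t) oscillates exactly like a classical trajectory. For an anharmonic potential, ⟨V′(X)⟩ ≠ V′(⟨X⟩) in general, and the expectation values follow the classical motion only approximately.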
Transformation theory
In classical mechanics, a canonical transformation of phase space coordinates is one which preserves the structure of the Poisson brackets. The new variables x',p' have the same Poisson brackets with each other as the original variables x,p. Time evolution is a canonical transformation, since the phase space at any time is just as good a choice of variables as the phase space at any other time.
The Hamiltonian flow is the canonical transformation:
$$x \to x' = x(t), \qquad p \to p' = p(t) .$$
Since the Hamiltonian can be an arbitrary function of x and p, there are such infinitesimal canonical transformations corresponding to every classical quantity G, where G serves as the Hamiltonian to generate a flow of points in phase space for an increment of time s,
$$x \to x + s\,\frac{\partial G}{\partial p}, \qquad p \to p - s\,\frac{\partial G}{\partial x} .$$
For a general function A(x, p) on phase space, its infinitesimal change at every step ds under this map is
$$dA = \{A, G\}\, ds .$$
The quantity G is called the infinitesimal generator of the canonical transformation.
In quantum mechanics, the quantum analog G is now a Hermitian matrix, and the equations of motion are given by commutators,
$$\frac{dA}{ds} = \frac{i}{\hbar}\,[G, A] .$$
The infinitesimal canonical motions can be formally integrated, just as the Heisenberg equation of motion was integrated,
$$A \to A' = U\, A\, U^{\dagger} ,$$
where U = e^{iGs} and s is an arbitrary parameter.
The definition of a quantum canonical transformation is thus an arbitrary unitary change of basis on the space of all state vectors. U is an arbitrary unitary matrix, a complex rotation in phase space,
These transformations leave the sum of the absolute square of the wavefunction components invariant, while they take states which are multiples of each other (including states which are imaginary multiples of each other) to states which are the same multiple of each other.
The interpretation of the matrices is that they act as generators of motions on the space of states.
For example, the motion generated by P can be found by solving the Heisenberg equation of motion using P as a Hamiltonian,
$$\frac{dX}{ds} = \frac{i}{\hbar}\,[P, X] = 1 .$$
These are translations of the matrix X by a multiple of the identity matrix,
$$X(s) = X + s\, I .$$
This is the interpretation of the derivative operator D: e^{iPs} = e^{sD}; the exponential of a derivative operator is a translation (it is Lagrange's shift operator).
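Spelled out in the position basis (a standard verification, with ħ = 1 and P = −iD as above):
$$\big(e^{sD}\psi\big)(x) = \sum_{n=0}^{\infty} \frac{s^n}{n!}\,\psi^{(n)}(x) = \psi(x + s), \qquad e^{iPs}\, X\, e^{-iPs} = X + s .$$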
The X operator likewise generates translations in P. The Hamiltonian generates translations in time, the angular momentum generates rotations in physical space, and the operator X² + P² generates rotations in phase space.
When a transformation, like a rotation in physical space, commutes with the Hamiltonian, the transformation is called a symmetry (behind a degeneracy) of the Hamiltonian: the Hamiltonian expressed in terms of rotated coordinates is the same as the original Hamiltonian. This means that the change in the Hamiltonian under the infinitesimal symmetry generator L vanishes,
$$\frac{dH}{ds} = \frac{i}{\hbar}\,[L, H] = 0 .$$
It then follows that the change in the generator under time translation also vanishes,
$$\frac{dL}{dt} = \frac{i}{\hbar}\,[H, L] = 0 ,$$
so that the matrix L is constant in time: it is conserved.
The one-to-one association of infinitesimal symmetry generators and conservation laws was discovered by Emmy Noether for classical mechanics, where the commutators are Poisson brackets, but the quantum-mechanical reasoning is identical. In quantum mechanics, any unitary symmetry transformation yields a conservation law, since if the matrix U has the property that
$$U^{\dagger} H U = H, \qquad \text{equivalently} \quad HU = UH ,$$
it follows that
$$\frac{dU}{dt} = \frac{i}{\hbar}\,(HU - UH) = 0 ,$$
and the time derivative of U is zero: it is conserved.
The eigenvalues of unitary matrices are pure phases, so that the value of a unitary conserved quantity is a complex number of unit magnitude, not a real number. Another way of saying this is that a unitary matrix is the exponential of i times a Hermitian matrix, so that the additive conserved real quantity, the phase, is only well-defined up to an integer multiple of 2π. Only when the unitary symmetry matrix is part of a family that comes arbitrarily close to the identity are the conserved real quantities single-valued, and then the demand that they are conserved becomes a much more exacting constraint.
Symmetries which can be continuously connected to the identity are called continuous, and translations, rotations, and boosts are examples. Symmetries which cannot be continuously connected to the identity are discrete, and the operation of space-inversion, or parity, and charge conjugation are examples.
The interpretation of the matrices as generators of canonical transformations is due to Paul Dirac.[28] The correspondence between symmetries and matrices was shown by Eugene Wigner to be complete, if antiunitary matrices which describe symmetries which include time-reversal are included.
Selection rules
It was physically clear to Heisenberg that the absolute squares of the matrix elements of X, which are the Fourier coefficients of the oscillation, would yield the rate of emission of electromagnetic radiation.
In the classical limit of large orbits, if a charge with position X(t) and charge q is oscillating next to an equal and opposite charge at position 0, the instantaneous dipole moment is q X(t), and the time variation of this moment translates directly into the space-time variation of the vector potential, which yields nested outgoing spherical waves.
For atoms, the wavelength of the emitted light is about 10,000 times the atomic radius, and the dipole moment is the only contribution to the radiative field, while all other details of the atomic charge distribution can be ignored.
Ignoring back-reaction, the power radiated in each outgoing mode is a sum of separate contributions from the square of each independent time Fourier mode of d,
Now, in Heisenberg's representation, the Fourier coefficients of the dipole moment are the matrix elements of X. This correspondence allowed Heisenberg to provide the rule for the transition intensities, the fraction of the time that, starting from an initial state i, a photon is emitted and the atom jumps to a final state j,
This then allowed the magnitude of the matrix elements to be interpreted statistically: they give the intensity of the spectral lines, the probability for quantum jumps from the emission of dipole radiation.
Since the transition rates are given by the matrix elements of X, wherever Xij is zero, the corresponding transition should be absent. These were called the selection rules, which were a puzzle until the advent of matrix mechanics.
An arbitrary state of the hydrogen atom, ignoring spin, is labelled by |n; ℓ, m⟩, where the value of ℓ is a measure of the total orbital angular momentum and m is its z-component, which defines the orbit orientation. The components of the angular momentum pseudovector are
$$L_x = Y P_z - Z P_y, \qquad L_y = Z P_x - X P_z, \qquad L_z = X P_y - Y P_x ,$$
where the products in this expression are independent of order and real, because different components of X and P commute.
The commutation relations of L with all three coordinate matrices X, Y, Z (or with any vector) are easy to find,
$$[L_i, X_j] = i\hbar\, \epsilon_{ijk}\, X_k ,$$
which confirms that the operator L generates rotations between the three components of the vector of coordinate matrices X.
From this, the commutator of Lz and the coordinate matrices X, Y, Z can be read off,
$$[L_z, X] = i\hbar\, Y, \qquad [L_z, Y] = -i\hbar\, X, \qquad [L_z, Z] = 0 .$$
This means that the quantities X + iY, X − iY have a simple commutation rule,
$$[L_z,\, X + iY] = \hbar\,(X + iY), \qquad [L_z,\, X - iY] = -\hbar\,(X - iY) .$$
Just like the matrix elements of X + iP and X − iP for the harmonic oscillator Hamiltonian, this commutation law implies that these operators only have certain off diagonal matrix elements in states of definite m,
meaning that the matrix (X + iY) takes an eigenvector of Lz with eigenvalue m to an eigenvector with eigenvalue m + 1. Similarly, (X − iY) decreases m by one unit, while Z does not change the value of m.
So, in a basis of |ℓ,m⟩ states where L2 and Lz have definite values, the matrix elements of any of the three components of the position are zero, except when m is the same or changes by one unit.
This places a constraint on the change in total angular momentum. Any state can be rotated so that its angular momentum is in the z-direction as much as possible, where m = ℓ. The matrix element of the position acting on |ℓ,m⟩ can only produce values of m which are bigger by one unit, so that if the coordinates are rotated so that the final state is |ℓ′,ℓ′⟩, the value of ℓ′ can be at most one bigger than the biggest value of ℓ that occurs in the initial state. So ℓ′ is at most ℓ + 1.
The matrix elements vanish for ℓ′ > ℓ + 1, and the reverse matrix element is determined by Hermiticity, so these vanish also when ℓ′ < ℓ − 1: dipole transitions are forbidden with a change in angular momentum of more than one unit.
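In summary (the parity step is a standard supplement, not part of the argument above):
$$\Delta m \in \{-1, 0, +1\}, \qquad |\Delta \ell| \le 1 ,$$
and since the position operator is odd under parity while a state |n; ℓ, m⟩ has parity (−1)^ℓ, the case Δℓ = 0 is also excluded, leaving the familiar electric-dipole rule Δℓ = ±1.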
Sum rules
The Heisenberg equation of motion determines the matrix elements of P in the Heisenberg basis from the matrix elements of X: since P = m dX/dt (with m here the mass, not an index),
$$P_{nk} = \frac{i\, m\,(E_n - E_k)}{\hbar}\, X_{nk} ,$$
which turns the diagonal part of the commutation relation into a sum rule for the magnitude of the matrix elements:
$$\sum_{n} \frac{2m\,(E_n - E_k)}{\hbar^2}\, |X_{nk}|^2 = 1 .$$
This yields a relation for the sum of the spectroscopic intensities to and from any given state, although to be absolutely correct, contributions from the radiative capture probability for unbound scattering states must be included in the sum.
References
1. ^ Herbert S. Green (1965). Matrix mechanics (P. Noordhoff Ltd, Groningen, Netherlands) ASIN : B0006BMIP8.
2. ^ Pauli, W (1926). "Über das Wasserstoffspektrum vom Standpunkt der neuen Quantenmechanik". Zeitschrift für Physik. 36 (5): 336–363. Bibcode:1926ZPhy...36..336P. doi:10.1007/BF01450175.
3. ^ W. Heisenberg, "Der Teil und das Ganze", Piper, Munich, (1969) The Birth of Quantum Mechanics.
4. ^ The Birth of Quantum Mechanics
5. ^ W. Heisenberg, Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen, Zeitschrift für Physik, 33, 879-893, 1925 (received July 29, 1925). [English translation in: B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968) ISBN 0-486-61881-1 (English title: "Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations").]
6. ^ H. A. Kramers und W. Heisenberg, Über die Streuung von Strahlung durch Atome, Zeitschrift für Physik 31, 681-708 (1925).
7. ^ Emilio Segrè, From X-Rays to Quarks: Modern Physicists and their Discoveries (W. H. Freeman and Company, 1980) ISBN 0-7167-1147-8, pp 153–157.
8. ^ Abraham Pais, Niels Bohr's Times in Physics, Philosophy, and Polity (Clarendon Press, 1991) ISBN 0-19-852049-2, pp 275–279.
9. ^ Max Born – Nobel Lecture (1954)
11. ^ M. Born, W. Heisenberg, and P. Jordan, Zur Quantenmechanik II, Zeitschrift für Physik, 35, 557-615, 1926 (received November 16, 1925). [English translation in: B. L. van der Waerden, editor, Sources of Quantum Mechanics (Dover Publications, 1968) ISBN 0-486-61881-1]
12. ^ Jeremy Bernstein Max Born and the Quantum Theory, Am. J. Phys. 73 (11) 999-1008 (2005)
13. ^ Mehra, Volume 3 (Springer, 2001)
14. ^ Jammer, 1966, pp. 206-207.
15. ^ van der Waerden, 1968, p. 51.
16. ^ The citation by Born was in Born and Jordan's paper, the second paper in the trilogy which launched the matrix mechanics formulation. See van der Waerden, 1968, p. 351.
17. ^ Constance Reid, Courant (Springer, 1996) p. 93.
18. ^ John von Neumann Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren, Mathematische Annalen 102 49–131 (1929)
19. ^ When von Neumann left Göttingen in 1932, his book on the mathematical foundations of quantum mechanics, based on Hilbert's mathematics, was published under the title Mathematische Grundlagen der Quantenmechanik. See: Norman Macrae, John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More (Reprinted by the American Mathematical Society, 1999) and Constance Reid, Hilbert (Springer-Verlag, 1996) ISBN 0-387-94674-8.
20. ^ Bernstein, 2004, p. 1004.
21. ^ Greenspan, 2005, p. 190.
22. ^ a b Nobel Prize in Physics 1932 and 1933 – Nobel Prize Presentation Speech.
23. ^ Bernstein, 2005, p. 1004.
24. ^ Bernstein, 2005, p. 1006.
25. ^ Greenspan, 2005, p. 191.
26. ^ Greenspan, 2005, pp. 285-286.
27. ^ Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, ISBN 978-0-13-146100-0
28. ^ Dirac, P. A. M. (1981). The Principles of Quantum Mechanics (4th revised ed.). New York: Oxford University Press. ISBN 0-19-852011-5.
a2a79f1a9fc9ddb9 | The Two Operators
The strong ring
The strong ring generated by simplicial complexes produces a category of geometric objects which carries a ring structure. Each element in the strong ring is a "geometric space" carrying cohomology (simplicial, and more general interaction cohomologies) and has nice spectral properties (like McKean-Singer) and a "counting calculus" in which Euler characteristic is the most natural functional. Unlike the class of simplicial complexes, the class of discrete CW complexes, or Stanley-Reisner ring elements, this ring combines the property that it is a "Cartesian closed category" with an arithmetic that is compatible with cohomology and with the spectra of the connection Laplacians L of G. The strong ring is isomorphic to a subring of the strong Sabidussi ring via the ring homomorphism G \to G' attaching to a complex its signed connection graph. Like the Stanley-Reisner ring, the full Sabidussi ring on the category of all finite simple graphs is too large. The strong ring solves the problem of defining a category of finite geometries which is Cartesian closed, has a ring arithmetic, compatibility with cohomology (Kuenneth) and a finite potential theory (energy theorem), and spectral compatibility in the sense that multiplication leads to products of spectra and tensor products of connection Laplacians. Each of the two Laplacians also carries nonlinear discrete partial differential equations (PDE's). The first one is a Lax pair and integrable, the eigenvalues of the Dirac operator being the integrals. For the second, the Helmholtz system, we suspect integrability, as it looks like a nonlinear Schrödinger equation. In the first case, the description is given in the Heisenberg picture, deforming the operators. In the second case, the discrete PDE deforms states of the Hilbert space. Of course both can be seen in either the Schrödinger or the Heisenberg picture. We have implemented both dynamical systems on the computer, but still need to run more experiments. It would be interesting to study how the two play together. We hope of course to see soliton-like solutions (nonlinear systems are better at explaining particle structures, and feature particles with different speeds).
Remarks and reiterations of points made elsewhere:
• The full Stanley-Reisner ring is a ring, but it is too large. Its elements somehow behave like measurable sets and carry neither cohomology nor an Euler characteristic compatible with cohomology and homotopy. It contains for example elements like f=xy, which is an "edge without vertices"; we want elements like f=xy+x+y, which is the graph K2. The Euler characteristic of f=xy is -1. There is no Euler-Poincare formula which links this to cohomology. Discrete CW complexes are natural and carry cohomology, but forming products is problematic. It is the strong ring generated by products of simplicial complexes which works nicely. It is a finite structure resembling the concept of a compactly generated Hausdorff space in classical topology. Having algebraic structures associated to geometric objects is the heart of algebraic topology. But the point of view here is not to attach an algebraic object like a ring to a geometric object but to see a geometric object as an element in an algebraic one, here a ring. We "calculate WITH spaces". The category of geometries is an algebraic object. This arithmetic generalizes the usual arithmetic, which is the special case where the geometric objects are zero dimensional (like the pebbles used by the earliest mathematicians in pre-Babylonian times). In the strong ring, primes are either zero-dimensional classical rational primes or else elements of the ring which are simplicial complexes. Some of them might decay in the full Sabidussi ring of all graphs, which is a natural "ring extension" of the strong ring.
• While we have compatibility with cohomology on the entire space of graphs (which as usual are seen as simplicial Whitney complexes and not just the one-dimensional skeletons to which classical graph theory reduces them), the corresponding Cartesian product is not associative, as multiplying with 1=K_1 produces the Barycentric refinement.
• Graphs with the weak Cartesian product form a Cartesian closed category too, but this assumes seeing graphs as one-dimensional simplicial complexes, which is a point of view from the last century. Graphs have much more structure, as they can be equipped with more interesting simplicial complex structures, in particular the canonical Whitney complex, for which the discrete topological results are identical to the continuum. As pointed out at various places, we like graphs for their intuitive appeal, because they are hard wired into computer algebra systems, and because after one Barycentric refinement one always deals with Whitney complexes. The language of abstract simplicial complexes is equally suited but much less intuitive, as one cannot visualize them nicely unless they are Whitney complexes of graphs.
• The connection graph G’ is in general not homotopic to G, but only homotopic to G1, the Barycentric refinement of G. Also the Barycentric refinement G1 of a ring element G, is the Whitney complex of a graph. The dimension of the connection graph G’ is larger in general but G_1 and G are homotopic.
• In some sense the strong ring produces a purely geometric quantum field theory where the individual simplicial complexes are elementary particles, as they are the algebraic primes in the strong ring. Taking the product of two positive dimensional spaces produces a "particle state" in which the particles are correlated. The energy spectra multiply.
• We can make use of the strong ring and define a discrete Euclidean space Z^d which, when equipped with the connection Laplacian, has a mass gap also in the infinite volume limit. This is really exciting, as perturbation theory becomes trivial. No strong implicit function theorems are needed; the classical implicit function theorem works, as the operators remain bounded and invertible even in the infinite volume limit. We perturb around hyperbolic systems. This is very unusual, as one usually perturbs around a trivial integrable case, which leads to small divisor subtleties.
• An obvious task is to implement Euclidean rotation and translation symmetries on such a geometry. Together with scaling operations we can approximate also more complex rotations: just zoom into the space first, then implement a transformation in which the matrices are integers, then zoom back. It is what computer programmers always do when implementing Lie group symmetries in computer games. They never, ever would dream about using floating point arithmetic to do that. Rotating using integer matrices is good enough and saves valuable computation time for the GPU.
Two operators
Both the Hodge and the connection Laplacians define nonlinear integrable Hamiltonian systems [P.S. we currently only suspect the Helmholtz system to be integrable; there is no proof yet]. The first defines an evolution of space (as the exterior derivative d defines, via the Connes distance, a metric d(x,y) = sup_{|df| < 1} |f(x)-f(y)|). The second is an evolution of waves which looks like a nonlinear Schrödinger evolution and which, at least in the zero temperature and large time version, is the classical linear Schrödinger evolution of a combination of the two Laplacians. For the Hodge Laplacian H it is the kernel, the harmonic forms, which is topologically interesting, and Kuenneth which shows the compatibility with the arithmetic. For the connection Laplacian L, it is the inverse g=L^{-1}, the Green function values, which produces a finite (!) potential theory and another relation with Euler characteristic. For a simplicial complex G, or more generally for any element in the strong ring, the two Hamiltonians H(G) and L(G) live on the same Hilbert space. Both operators show some kind of compatibility. The map G \to p(G) is a ring homomorphism. The spectrum G \to \sigma(G) is multiplicative.
The Hodge operator H is affiliated with the calculus given by an exterior derivative d appearing in the Stokes equation X(δ G) = d X(G) for signed valuations, where δ is the boundary operator for simplicial complexes. The connection operator L is oblivious to the initial orientation of simplices and, like the Dirac operator D=d+d*, encodes incidences. But while D does not count intersections of simplices for which the difference of dimension is larger than 1, the operator L does count such intersections.
Because of these analogies, we like to see L more and more as an object parallel to the Dirac operator and not to the Laplacian. The spectral picture also supports this. Both D and L have positive and negative spectrum. It is D^2 and L^2 which have non-negative spectrum. Still, for now, we continue to look at L as a Laplacian. We like L also because the entries of the inverse g have topological interpretations. We don't have that (yet) for the inverse of the square L^2.
The Hodge Laplacian H=D^2=(d+d^*)^2 decomposes into blocks H_k of form Laplacians. It is important because the kernels of H_k are isomorphic to the cohomology groups. The nullity of H_k is the k'th Betti number b_k(G). The super nullity \sum_k (-1)^k b_k(G) is the Euler characteristic. It is also the super trace of the identity or, by McKean-Singer, the super trace of the heat kernel e^{-t H}, as well as the super trace of L (by definition) and the super trace of its inverse (a Gauss-Bonnet formula). The Dirac operator D compares more closely to the connection operator, and D^2=H is probably closer in spirit to L^2, as both have non-negative spectrum. The connection operator L encodes the incidence of simplices in G, as L(x,y)=1 if x,y intersect and 0 else. Both the boundary operation \delta and the exterior derivative (incidence matrices) d depend on an orientation of the simplices, but this "gauge choice" does not matter for any cohomologically or spectrally relevant quantities. It is a choice of basis in the Hilbert space [such a basis always exists; we don't insist on compatibility of the orientations, which for non-orientable graphs like the projective plane or the Moebius strip does not exist]. The Hodge Laplacian H for example is independent of the gauge. We usually are not even aware of this choice of orientation. In the case of an orientable manifold with boundary, the orientations are naturally linked in the sense that the orientation of a subsimplex matches the orientation of the larger one. For the operator L, the kernel is irrelevant, as it is trivial by the unimodularity theorem.
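A minimal numerical sketch of the last two statements, assuming the Whitney complex of the triangle K3 as test complex (the code and names are ours, not from the post): it builds the connection matrix L, checks det(L) = ±1 (unimodularity), and checks that the sum of all Green function entries of g = L^{-1} equals the Euler characteristic (the energy theorem).

```python
# Connection matrix of the Whitney complex of the triangle K3:
# simplices are all nonempty subsets of {0,1,2}; L(x,y) = 1 iff x and y intersect.
import itertools
import numpy as np

vertices = (0, 1, 2)
simplices = [frozenset(s) for r in (1, 2, 3)
             for s in itertools.combinations(vertices, r)]

n = len(simplices)
L = np.array([[1.0 if simplices[i] & simplices[j] else 0.0
               for j in range(n)] for i in range(n)])

g = np.linalg.inv(L)                                  # Green function
chi = sum((-1) ** (len(s) - 1) for s in simplices)    # 3 - 3 + 1 = 1
print(round(np.linalg.det(L)))                        # +-1 by unimodularity
print(round(g.sum()), chi)                            # energy theorem: both equal 1
```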
Two nonlinear Hamiltonian systems
We have a natural dynamical isospectral deformation of the Dirac operator D in the form of a Lax pair D' = [D, B], where B = d - d^*. It is natural as it is a symmetry of the geometry, but it dramatically affects the geometry: space expands, with an initial inflation bump. This is the case for any simplicial complex and extends to all strong ring elements. The deformation is a nonlinear integrable dynamical system; as stated, it is a scattering situation. There is a complex version which renders the deformed d complex and which is asymptotically the Schrödinger evolution. The operator L also features a natural dynamics. As the Green function g = L^{-1} satisfies the energy theorem \chi(G) = E(\psi) = \sum_{x,y} g(x,y) = <\psi, g \psi>, where \psi is the constant wave \psi(x) = 1, one can take E as a Hamiltonian and look at i \psi' = E_{\overline{\psi}}, a dynamics which preserves both |\psi|^2 and the energy E(\psi). As the Euler characteristic shares with entropy the compatibility with the algebraic structure in the ring, we can look at the Helmholtz system generated by the free energy F(\psi) = E(\psi) - T S(\psi), which combines internal energy and entropy.
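Here is a small self-contained sketch of the Lax deformation (again our own illustration; the masks and variable names are ours, not a fixed API). On the full triangle complex it integrates D' = [D, B] numerically, confirms that the spectrum is preserved, and shows the exterior-derivative part d(t) draining into a block-diagonal part b(t):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Basis ordered by dimension: 3 vertices, 3 edges, 1 triangle.
simplices = [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
n = len(simplices)
dim = [len(s) - 1 for s in simplices]
d = np.zeros((n, n))
for i, t in enumerate(simplices):
    for j, s in enumerate(simplices):
        if len(t) == len(s) + 1 and set(s) < set(t):
            v = next(w for w in t if w not in s)
            d[i, j] = (-1) ** t.index(v)
D0 = d + d.T

# Masks picking out the degree-raising part d(t) and the diagonal block b(t).
raising = np.array([[float(dim[i] == dim[j] + 1) for j in range(n)]
                    for i in range(n)])
diagonal = np.array([[float(dim[i] == dim[j]) for j in range(n)]
                     for i in range(n)])

def rhs(t, y):
    Dt = y.reshape(n, n)
    dt = Dt * raising            # deformed exterior derivative d(t)
    B = dt - dt.T                # B = d - d^*
    return (Dt @ B - B @ Dt).ravel()

sol = solve_ivp(rhs, [0.0, 4.0], D0.ravel(), rtol=1e-10, atol=1e-12)
D4 = sol.y[:, -1].reshape(n, n)

# Isospectral: the eigenvalues do not move (up to integration error) ...
drift = np.max(np.abs(np.sort(np.linalg.eigvalsh(D0))
                      - np.sort(np.linalg.eigvalsh(D4))))
# ... while |d(t)| shrinks and the "invisible" diagonal part b(t) grows.
print("spectral drift:", drift)
print("|d|:", np.linalg.norm(D0 * raising), "->", np.linalg.norm(D4 * raising))
print("|b|:", 0.0, "->", np.linalg.norm(D4 * diagonal))
```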
Is it relevant in physics?
The strong ring and its integrable dynamical systems belong to pure mathematics: it is a ring of geometric objects with nice mathematical properties in which each element carries operators which can be deformed using nonlinear partial difference equations. The first deformation can be seen in the continuum too, by deforming exterior derivatives on compact Riemannian manifolds (leading to pseudo differential operators but producing the same kind of expansion with an initial inflation bump). For the second deformation, there is no continuum analogue yet, at least not classically. [The reasons are various. We don't have a classical analogue of the connection graph, and we also don't have a good definition of Shannon entropy for general measures. Staying within finite structures really makes entropy mathematically solid. Notions of entropy as used in physics are often vague, especially when used in cosmology.] The barycentric limit does not produce the usual Euclidean manifolds but leads to dyadic analogue structures. In the limiting case we actually deform almost periodic operators.
But one can ask whether there is physics involved. The development of calculus almost by definition ran parallel to questions in physics, particularly in fluid dynamics and later in electromagnetism. Obviously, H is associated with fundamental particle interactions. The Maxwell equations, for example, are equivalent to the Poisson equation H_1 A = j. The reason is that in a special gauge d^* A = 0, the electromagnetic field F = dA satisfies dF = 0, d^* F = j. Also the gravitational field equation H_0 V = \rho (a Poisson equation too, but for scalar fields) gives the gravitational potential V so that the field F = dV satisfies the Gauss equation {\rm div}(F) = d^* F = \rho. Just as gravitational potentials V and electromagnetic potentials A are described by 0-forms and 1-forms, solving H_k U_k = \rho_k gives k-forms U_k which lead to "fields" F_k = d U_k. Allowing the Dirac operator to move freely in its isospectral set produces a dynamics which naturally leads to a complex Schrödinger equation. It is really interesting that the evolution produces a diagonal part in the Dirac operator which is independent of the deformed exterior derivative parts. The effect of the flow is that part of the dynamical kinetic energy of space, given by the exterior derivatives, is funneled into a potential-theoretic part which is not visible. [The Dirac operator, which is initially D = d + d^*, develops a diagonal part b: D = d + d^* + b.] The diagonal part b is kind of like "dark matter": it is present but not visible on the Laplacian side. The Hodge Laplacian H = D^2 does not move under the isospectral deformation of the Dirac operator. Classical geometry does not see that system! But under the hood, on the level of the Dirac operator, the geometric effect is dramatic and pretty universal. We can run that system for any simplicial complex, or now for any element in the strong ring, and always get qualitatively the same behavior: expansion with a single inflationary bump at the start.
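The gauge argument behind H_1 A = j can be written out in two lines; the following is just the standard computation spelled out in the notation of the text:

```latex
% Coulomb gauge d^*A = 0 collapses the two Maxwell equations into the
% single Poisson equation H_1 A = j for the 1-form potential A:
\begin{aligned}
F &= dA \quad\Longrightarrow\quad dF = d^2 A = 0 \quad\text{(automatic)},\\
j &= d^*F = d^* dA = (d\,d^* + d^*\,d)\,A = H_1 A
      \qquad\text{(using } d^*A = 0\text{)}.
\end{aligned}
```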
[For the connection operator L, which does not appear to have such an isospectral deformation, the Euler characteristic or internal energy \chi(G) of the complex is formally related to the Hilbert action. In physics, gravity has various descriptions. It has been geometrically trivialized in relativity theory by the assumption that particles just move on geodesics in a pseudo-Riemannian manifold, where the metric is defined by matter. In the Standard Model there is the Higgs mechanism for assigning mass to certain particles. But this too does not seem to settle the enigma of defining "mass", as the mass of some particles like neutrinos is believed not to come from the Higgs mechanism. Then there are the still undiscovered gravitons. In any case, there are both geometric and quantum field aspects to gravity.]
So, when we look at any individual element in the ring of geometric structures generated by simplicial complexes, we have a finite dimensional Hilbert space and two dynamical systems: the first is a nonlinear isospectral deformation which asymptotically produces the Schrödinger equation and so quantum mechanics; the second features an internal finite potential theory with bounded Green functions. Both operators are related to the Euler characteristic, the most important functional in geometry and in some sense the only functional compatible with the algebraic ring structure. Similarly, the functional on waves, the entropy, is unique in the class of functionals which are compatible with the arithmetic. We don't have to justify the naturality of the isospectral deformation; it is like explaining the rotation of a rock thrown into empty space (a rigid body motion free of external forces is a Lax pair too, in any dimension, leading to an integrable evolution, a geodesic flow on a Lie group; this integrable system is described in an appendix of Arnold, who also looks at the infinite dimensional case, where it becomes the Euler equations of fluid dynamics). It just happens without any input: it would be very strange if a rock would NOT rotate, as the identity has zero measure in the group of all rotations. The choice of the energy functional in the L case, however, needed to be justified, and we see that the Helmholtz functional has two aspects, internal potential energy and entropy. They were both uniquely selected by compatibility with arithmetic (a theorem of Meyer stating that the Euler characteristic is the only valuation (counting function) compatible with addition and multiplication, and a theorem of Shannon which renders entropy unique; of course, both functionals require a normalization to justify their uniqueness). Besides that, there is the success of the "Gibbs and Helmholtz approach" to free energy, which saw it as a more fundamental quantity than energy. Free energy features a healthy competition between boring static energy equilibria, the sinks in the zero temperature limit, and equally boring chaos in the infinite temperature limit. Chemistry in particular shows that interesting processes happen when these two principles compete. The minimal energy principle and the maximal entropy principle combined make life possible.
Now, let's look at both systems together. What happens is that the Helmholtz energy changes if we deform the operator alone. Obviously, when looking at the isospectral deformation of the operator D (in the Heisenberg picture), we simultaneously have to deform the waves (in the Schrödinger picture). While the potential energy does not change if we deform both L and \psi, the entropy changes. This is not a big surprise, as entropy is more like a mathematical trick to incorporate any other aspects of the system which are not part of the Hamiltonian under consideration. The geometric space is "thrown into a heat bath", so to speak, and the Gibbs measures appear as critical points.
The Lax and Helmholtz systems can be run together: if U(t) D(t) U(t)^* = D(0) and \psi(t) is a solution of the Helmholtz system, just look at U(t) \psi(t). This evolution could even be extended to quaternion-valued fields. There is one caveat: the isospectral deformation given by the symmetry of the Dirac operator does not preserve the entropy part of the Helmholtz Hamiltonian. The intuition given by the fact that the deformation of the exterior derivative produces an expansion suggests that the entropy increases, leading to an arrow of time; but this is not a surprise, as already the expansive nature of the evolution leads to an arrow of time. By the way, it might appear paradoxical that we have a Hamiltonian system with that feature, as running the system backwards is also a solution. But if we ran the system backwards, we would eventually reach the case where D = d + d^* has no diagonal term and then get the usual expansion, as D(t) = d + d^* + b(t) will again produce a diagonal term and reduce the size of the exterior derivatives, and thus the Connes scale used to measure distance in space. |
ae15712acf1f39d0 | Wave Equation
Wave equation
The name given to certain partial differential equations in classical and quantum physics which relate the spatial and time dependence of physical functions. In this article the classical and quantum wave equations are discussed separately, with the classical equations first for historical reasons.
In classical physics the name wave equation is given to the linear, homogeneous partial differential equations which have the form of Eq. (1):

∂²f(r, t)/∂t² = υ²∇²f(r, t)   (1)
Here υ is a parameter with the dimensions of velocity; r represents the space coordinates x, y, z; t is the time; and ∇² is Laplace's operator defined by Eq. (2):

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²   (2)

The function f(r, t) is a physical observable; that is, it can be measured and consequently must be a real function.
The simplest example of a wave equation in classical physics is that governing the transverse motion of a string under tension and constrained to move in a plane.
A second type of classical physical situation in which the wave equation (1) supplies a mathematical description of the physical reality is the propagation of pressure waves in a fluid medium. Such waves are called acoustical waves, the propagation of sound being an example. A third example of a classical physical situation in which Eq. (1) gives a description of the phenomena is afforded by electromagnetic waves. In a region of space in which the charge and current densities are zero, Maxwell's equations lead to the wave equations (3) for the fields:

∇²E = (1/c²)∂²E/∂t²,  ∇²B = (1/c²)∂²B/∂t²   (3)
Here E is the electric field strength and B is the magnetic flux density; they are both vectors in ordinary space. The parameter c is the speed of light in vacuum. See Electromagnetic radiation, Maxwell's equations
The nonrelativistic Schrödinger equation is an example of a quantum wave equation. Relativistic quantum-mechanical wave equations include the Schrödinger-Klein-Gordon equation and the Dirac equation. See Quantum mechanics, Relativistic quantum theory
Wave Equation
a partial differential equation that describes the process of propagation of a disturbance in a medium. In the case of small disturbances and a homogeneous, isotropic medium, the wave equation has the form

∂²u/∂t² = a²(∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²)

where x, y, and z are spatial variables; t is time; u = u(x, y, z, t) is the function to be determined, which characterizes the disturbance at point (x, y, z) and time t; and a is the velocity of propagation of the disturbance. The wave equation is one of the fundamental equations of mathematical physics and is applied extensively. If u is a function of only two (one) spatial variables, then the wave equation is simplified and is called a two-dimensional (one-dimensional) equation. It permits a solution in the form of a "diverging spherical wave":
u = f(t – r/a)/r
where f is an arbitrary function and r = √(x² + y² + z²). The so-called elementary solution (elementary wave) is of particular interest:
u = δ(t - r/a)/r
(where δ is the delta function); it gives the process of propagation of a disturbance produced by an instantaneous point source acting at the origin (when t = 0). Figuratively speaking, an elementary wave is an "infinite surge" on the sphere r = at that is moving away from the origin at a velocity a with gradually diminishing intensity. By superimposing elementary waves it is possible to describe the process of propagation of an arbitrary disturbance.
Small vibrations of a string are described by the one-dimensional wave equation

∂²u/∂t² = a²∂²u/∂x²
In 1747, J. d'Alembert proposed a method of solving this wave equation in terms of superimposed forward and backward waves: u = f(x - at) + g(x + at); and in 1748, L. Euler established that the functions f and g are determined by assigning so-called initial conditions.
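The forward/backward decomposition is easy to verify numerically. A small check (ours, with arbitrary sample profiles) evaluates the residual u_tt - a²u_xx of d'Alembert's solution with finite differences:

```python
import numpy as np

# u(x,t) = f(x - at) + g(x + at) solves u_tt = a^2 u_xx for any
# twice-differentiable profiles f and g.
a = 2.0
f = lambda s: np.exp(-s ** 2)        # forward-moving profile
g = lambda s: np.sin(s)              # backward-moving profile
u = lambda x, t: f(x - a * t) + g(x + a * t)

x0, t0, h = 0.7, 0.3, 1e-4           # sample point, finite-difference step
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h ** 2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
print(abs(u_tt - a ** 2 * u_xx))     # ~1e-6: zero up to truncation error
```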
Tikhonov, A. N., and A. A. Samarskii. Uravneniia matematicheskoi fiziki, 3rd ed. Moscow, 1966.
wave equation
[′wāv i‚kwā·zhən]
In classical physics, a special equation governing waves that suffer no dissipative attenuation; it states that the second partial derivative with respect to time of the function characterizing the wave is equal to the square of the wave velocity times the Laplacian of this function. Also known as classical wave equation; d'Alembert's wave equation.
Any of several equations which relate the spatial and time dependence of a function characterizing some physical entity which can propagate as a wave, including quantum-wave equations for particles. |
811fc61dfc996356 | Neutron Interferometry: Lessons in Experimental Quantum Mechanics
by Helmut Rauch and Samuel A. Werner
448 pp. Clarendon Press, Oxford, 2000
Reviewed in American Journal of Physics by Mark P. Silverman
It is all too easy, when one reads standard textbooks of quantum mechanics, to focus so intently on abstract state vectors in Hilbert space or on mathematical techniques for solving the Schrödinger equation, that one loses track of (or perhaps never encounters) the fascinating experiments on real physical particles for which the principles of quantum mechanics are required. Neutron Interferometry helps motivate the theoretical side of quantum mechanical instruction by disclosing a world of experimental detail, centered on the neutron, that calls for and tests the principles of quantum theory. The book is not a textbook, but I have recently used it, together with my own book on the quantum interference of electrons (both free and bound in atoms), for instructive, thought-provoking examples - some for mathematical analysis, others for qualitative discussion - in a junior-senior level course of quantum mechanics.
The authors, who have, both independently and in collaboration, made pioneering contributions to neutron interferometry, begin with the analogy between neutron optics and light optics, and from there develop seminal concepts relating to coherence, diffraction, and interference. I find this approach congenial to my own way of teaching quantum mechanics, which, in brief, is to begin with the analogy to classical optics rather than with ties to classical mechanics. In this way, students may find that certain aspects of quantum mechanics are not as unintuitive as physics popularizers, or even quantum mechanics teachers, are wont to claim, if by "intuitive" one means the capacity to predict qualitatively the behavior of a system on the basis of past experience. With classical optics (instead of classical mechanics) as past experience, a variety of single-particle quantum phenomena, e.g. those involving step potentials, barriers, and wells, become reasonably intuitive.
Although quantum mechanics takes its name from the discreteness - quantization - of energy, angular momentum, and other dynamical observables, I believe a good case can be made (and I have made it elsewhere) that what distinguishes quantum mechanics most from classical mechanics is superposition and interference. If interference is to occur, then the superposing waves (or states) must exhibit some degree of coherence. Ironically, for all its fundamentality as the concept underlying quantum interference, I have not found many quantum mechanics textbooks in which the term coherence even appears in the index (apart, perhaps, from the topic of coherent oscillator states), let alone in the discussion of interference phenomena. Students all too frequently may be left with an erroneous impression that it is the de Broglie wavelength that sets the size scale for objects or apertures to give rise to interference effects. By contrast, Neutron Interferometry gives a thorough discussion of the important coherence parameters (longitudinal coherence length, transverse coherence length, coherence volume, coherence time, and so forth) that enter into an analysis of quantum interference, as well as experimental procedures for measuring these coherence parameters in the case of neutron beams.
For readers looking for satisfyingly detailed descriptions of quantum interference phenomena, Neutron Interferometry is a gold mine of illustrative examples. As in my own book whose title asserts that, contrary to Feynman's oft-quoted remark, there is more to the "mystery" of quantum mechanics than two-slit interference, Rauch and Werner outline the basic theory and experimental features of various inequivalent categories of quantum interference phenomena involving spin superposition, topological phase, gravitational and noninertial effects, nonlinearity of the Schrödinger equation, particle-antiparticle oscillations, quantum statistics, quantum entanglement, and much more. Some of these examples are experiments that have already been done (in fact, many years ago), and others are speculative experiments waiting for appropriate advances in technology. Because book reviews are expected to be reasonably brief, I will comment on only a few of the numerous experiments that have interested me most and which represent quantum interference phenomena conceptually different from the standard example of two-slit interference that one encounters most often in textbooks.
The Aharonov-Bohm (AB) effect is a quantum interference effect that depends on spatial topology and can be manifested only by particles endowed with electric charge. A split electron beam, for example, made to pass in field-free space around (and not through) a region of space within which is a confined magnetic flux, will, upon recombination, exhibit a flux-dependent pattern of fringes. Thus, by a judicious adjustment of the magnetic flux, one can produce an interference minimum in the forward direction, even though the optical path length difference of the two beam components is null. The electrons do not experience a magnetic field locally, and therefore are not acted upon by a classical Lorentz force. As neutral particles, neutrons do not exhibit what is traditionally regarded as the AB effect. However, neutrons have a magnetic moment and give rise to a companion topological phenomenon known as the Aharonov-Casher (AC) effect. In the latter, a split neutron beam is made to pass around a region of space within which is a confined electric charge and, upon recombination, gives rise to a charge-dependent interference pattern. The experimental confirmation of this effect, which may be interpreted as an example of spin-orbit coupling, was performed at the University of Missouri Research Reactor in 1991. Rauch and Werner summarize the theoretical interpretations and experimental features of the AC effect and its variants very well.
All particles, quantum as well as classical, are subject to the attractive force of gravity. In quantum mechanics, however, potential differences in the absence of classical forces can give rise to quantum interference effects (as just illustrated above in the case of a topological phase). In their book, the authors describe the so-called COW experiments (for Colella-Overhauser-Werner) in which a beam of neutrons, coherently split into two components moving parallel but displaced vertically from one another, is recombined to yield an interference pattern that depends on the gravitational potential difference of the two beams. Here is an example where, ideally, the net work done by gravity on the two beams is the same, as is the optical path length difference of the two beams. There is a gravitationally induced quantum interference in the absence of a net gravitational force. Quite by chance, I was lecturing on the COW experiments to my quantum mechanics class at about the time (2002) when the first experiments reporting the quantization of neutron energy states in a gravitational field were reported in Nature - an experiment that I hope will be included in the next edition of this book.
The AB, AC, and COW experiments are examples of single-particle self-interference. Among the entries in the chapter on "forthcoming and speculative experiments" is the neutron analogue of the optical Hanbury Brown-Twiss (HBT) experiments that demonstrated the correlated "wave noise" in chaotic light. From a quantum perspective, such correlations are known as photon bunching and represent a type of quantum interference attributable to the bosonic nature of the photon. Neutrons, however, like electrons, are fermions and are therefore governed by Fermi-Dirac statistics. A neutron HBT experiment would show a negative correlation or antibunching effect. In my own book I analyzed a variety of HBT experiments on free electron beams and had come to the conclusion that the degeneracy parameter of the most coherent field-emission electron sources available was marginally large enough for such experiments to be performed. (The degeneracy parameter is a measure of the mean number of electrons per cell of phase space.) The much lower (by orders of magnitude) degeneracy of known neutron sources led me to conclude that a neutron HBT experiment was virtually hopeless. Rauch and Werner point out, however, the very interesting possibility of obtaining correlated neutrons from the deuteron disintegration reactions D(n,p)2n and D(π⁻,γ)2n, a proposition similar to my proposal of obtaining correlated electrons from the disintegration of the exotic ion μ⁺e⁻e⁻ (the muonic analogue of H⁻).
Throughout their book, the authors describe clearly and objectively the successful applications of quantum mechanics to neutron interferometry, eschewing philosophical digressions over such matters as the completeness or interpretation of the quantum mechanical formalism. In the final chapter, however, they give a comprehensive neutral summary of the principal positions that have emerged in answer to the epistemological questions: (a) What is the meaning of the wavefunction? (b) How is the measurement process described? (c) How can a classical world appear out of quantum mechanics? (d) How can non-locality be explained? That such questions remain after more than 75 years of extensive use and meticulous testing of quantum mechanics testifies to how odd the quantum world can be - a world humorously, and not inaptly, mirrored in the Charles Addams cartoon that decorates the cover of the book: the skier who in some mysterious way has left one ski track around each side of a tall pine tree.
About the author
Mark P. Silverman is Jarvis Professor of Physics at Trinity College. He wrote of his investigations of light, electrons, nuclei, and atoms in his books Waves and Grains: Reflections on Light and Learning (Princeton, 1998), Probing the Atom (Princeton, 2000), and A Universe of Atoms, An Atom in the Universe (Springer, 2002). His latest book Quantum Superposition (Springer, 2008) elucidates principles underlying the strange, counterintuitive behaviour of quantum systems. |
2fe4a42a6500425d | Friday, August 16, 2013
Yiannis Hadjichristos has just called my attention to the following paper, a real double rara avis:
- it is published in a peer reviewed journal;
- it clearly opts for a multi-stage theory and an interdisciplinary approach.
It is
"Potential Exploration of Cold Fusion and Its Quantitative
Theory of Physical-Chemical-Nuclear Multistage Chain
Reaction Mechanism
Yi-Fang Chang, Department of Physics, Yunnan University, Kunming, 650091, China
International Journal of Modern Chemistry, 2013, 5(1): 29-43
Abstract: Cold fusion is very important and complex. One of the main difficulties of cold fusion is the explanation of the appearance of nuclear reactions. Based on standard quantum mechanics, we propose the physical-chemical-nuclear multistage chain reaction theory, which may explain cold fusion. Since cold fusion is an open system, synergetics and laser theory can be applied, and the Fokker-Planck equation is obtained. Using the corresponding Schrödinger equation and the nonlinear Dirac equation, and combining the multistage chain reaction theory, the quantitative results agree completely with some experiments on cold fusion. Finally, we discuss some new research directions, for example the nonlinear quantum theory, catalyzers and nanomaterials, and propose the three laws of cold fusion: (1) the time accumulation law, (2) the area direct ratio law, and (3) the multistage chain reaction law.
There are some striking similarities with the DGT-AXIL approach
to understand LENR+/HENI as:
1. Open system definition of the NAE
2. Complexity of multistage fusion fission process
3. The 3 laws, indicating a path to plasmonics
Eppur si muove - it is progress in Cold Fusion - marching away from its Cradle!
1 comment:
1. The use of the weak interaction to create neutrons through inverse beta decay makes some links with Widom-Larsen...
Widom-Larsen has strong problems with heavy electrons and gamma screening, but the neutron claims are supported by good evidence.
I remember also Kozima's laws.
For the rest I don't understand why it does not produce gammas when n+H or nn+D merge.
Anyway, the reaction works in real life; theory is just a bonus... |
90804263f157521b | I take inspiration where I can get it. My girlfriend recently alerted me to a viral video in which a teenage girl complains about mathematics. “I was just doing my makeup for work,” Gracie Cunningham says while dabbing makeup on her face, “and I just wanted to tell you guys how I don’t think math is real.”
Some of the math she’s learning in school, Cunningham suggests, has little to do with the world in which she lives. “I get addition, like, if I take two apples and add three it’s five. But how would you come up with the concept of algebra?” While some geeks mocked Cunningham, others came to her defense, pointing out that she is raising questions that have troubled scientific heavyweights.
Gracie’s complaints struck a chord in me. Since last May, as part of my ongoing effort to learn quantum mechanics, I’ve been struggling to grasp eigenvectors, complex conjugates and other esoterica. Wolfgang Pauli dismissed some ideas as so off base that they’re “not even wrong.” I’m so confused that I’m not even confused. I keep wondering, as Cunningham put it, “Who came up with this concept?”
Take Hilbert space, a realm of infinite dimensions swarming with arrow-shaped abstractions called vectors. Pondering Hilbert space makes me feel like a lump of dumb, decrepit flesh trapped in a squalid, 3-D prison. Far from exploring Hilbert space, I can’t even find a window through which to peer into it. I envision it as an immaterial paradise where luminescent cognoscenti glide to and fro, telepathically swapping witticisms about adjoint operators.
Reality, great sages have assured us, is essentially mathematical. Plato held that we and other things of this world are mere shadows of the sublime geometric forms that constitute reality. Galileo declared that “the great book of nature is written in mathematics.” We’re part of nature, aren’t we? So why does mathematics, once we get past natural numbers and basic arithmetic, feel so alien to most of us?
More to Gracie’s point, how real are the equations with which we represent nature? As real as or even more real than nature itself, as Plato insisted? Were quantum mechanics and general relativity waiting for us to discover them in the same way that gold, gravity and galaxies were waiting?
Physicists’ theories work. They predict the arc of planets and the flutter of electrons, and they have spawned smartphones, H-bombs and—well, what more do we need? But scientists, and especially physicists, aren’t just seeking practical advances. They’re after Truth. They want to believe that their theories are correct—exclusively correct—representations of nature. Physicists share this craving with religious folk, who need to believe that their path to salvation is the One True Path.
But can you call a theory true if no one understands it? A century after inventing quantum mechanics, physicists still squabble over what, exactly, it tells us about reality. Consider the Schrödinger equation, which allows you to compute the “wave function” of an electron. The wave function, in turn, yields a “probability amplitude,” which, when squared, yields the likelihood that you’ll find the electron in a certain spot.
The wave function has embedded within it an imaginary number. That’s an appropriate label, because an imaginary number consists of the square root of a negative number, which by definition does not exist. Although it gives you the answer you want, the wave function doesn’t correspond to anything in the real world. It works, but no one knows why. The same can be said of the Schrödinger equation.
Maybe we should look at the Schrödinger equation not as a discovery but as an invention, an arbitrary, contingent, historical accident, as much so as the Greek and Arabic symbols with which we represent functions and numbers. After all, physicists arrived at the Schrödinger equation and other canonical quantum formulas only haltingly, after many false steps.
Imagine you are the Great Geek God, looking down on the sprawling landscape of all possible mathematical ways of representing the microrealm. Would you say, “Yup, those clever humans found it, the best possible set of solutions.” Or would you exclaim, “Oh, if only they had taken a different path at this moment, they might have found these equations over here, which would work much better!”
Moreover, the Schrödinger equation is far from all-powerful. Although it does a great job modeling a hydrogen atom, the Schrödinger equation can’t yield an exact description of a helium atom! Helium, which consists of a positively charged nucleus and two electrons, is an example of a three-body problem, which can be solved, if at all, only through extra mathematical sleights of hand.
And three-body problems are just a subset of the vastly larger set of N-body problems, which riddle classical as well as quantum physics. Physicists exalt the beauty and elegance of Newton’s law of gravitational attraction and of the Schrödinger equation. But the formulas match experimental data only with the help of hideously complex patches and approximations.
When I contemplate quantum mechanics, with all its hedges and qualifications, I keep thinking of poor old Ptolemy. We look back at his geocentric model of the solar system, with its baroque circles within circles within circles, as hopelessly kludgy and ad hoc. But Ptolemy’s geocentric model worked. It accurately predicted the motions of planets and solar and lunar eclipses.
Quantum mechanics also works, better, arguably, than any other scientific theory. But perhaps its relationship to reality—to what’s really out there—is as tenuous as Ptolemy’s geocentric model. Perhaps our descendants will look back on quantum mechanics a century from now and think, “Those old physicists didn’t have a clue.”
Some authorities have suggested as much. Last fall I took a course at my school, Stevens Institute of Technology, called “PEP553: Quantum Mechanics for Engineering Applications.” In the last line of our textbook, Introduction to Quantum Mechanics, David Griffiths and a co-author speculate that future physicists will look back on our era and “wonder how we could have been so gullible.”
The implication is that one day we will find the correct mathematical theory of reality, one that actually makes sense, like the heliocentric model of the solar system. But maybe the best we can say of any mathematical theory is that it works in a particular context. That is the subversive takeaway of Eugene Wigner’s famous 1960 essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.”
Wigner, a prominent quantum theorist, notes that the equations embedded in Newton’s laws of motion, quantum mechanics and general relativity are extraordinarily, even unreasonably effective. Why do they work so well? No one knows, Wigner admits. But just because these models work, he emphasizes, does not mean they are “uniquely” true.
Wigner points out several problems with this assumption. First, theories of physics are limited in their scope. They apply only to specific, highly circumscribed aspects of nature, and they leave lots of stuff out. Second, quantum mechanics and general relativity, the foundational theories of modern physics, are mathematically incompatible.
“All physicists believe that a union of the two theories is inherently possible and that we shall find it,” Wigner writes. “Nevertheless, it is possible also to imagine that no union of the two theories can be found.” Sixty years after Wigner wrote his essay, quantum mechanics and relativity remain unreconciled. Doesn’t that imply that one or both are in some sense incorrect?
The “laws” of physics, Wigner adds, have little or nothing to say about biology, and especially about consciousness, the most baffling of all biological phenomena. When we understand life and consciousness better, inconsistencies might arise between biology and physics. These conflicts, like the incompatibility of quantum mechanics and general relativity, might imply that physics is incomplete or wrong.
Here again Wigner has proven prescient. Prominent scientists and philosophers are questioning whether physics and indeed the basic paradigm of materialism can account for life and consciousness. Some claim that mind is at least as fundamental as matter.
Wigner is questioning the Gospel of Physics, which decrees, “In the beginning was the Number….” He is urging his colleagues not to confuse their mathematical models with reality. That’s also the position of Scott Beaver, one of the commenters on Gracie Cunningham’s math video. “Here’s my simple answer about whether math is real: No,” said Beaver, a chemical engineer. “Math is just a way to describe patterns. Patterns are real, but not math. Nonetheless, math is really, really useful stuff!”
I like the pragmatism and modesty of Beaver’s view, which reflects, I’m guessing, his background in engineering. Compared to physicists, engineers are humble. When trying to solve a problem—such as building a new car or drone—engineers don’t ask whether a given solution is true; they would see that terminology as a category error. They ask whether the solution works, whether it solves the problem at hand.
Mathematical models such as quantum mechanics and general relativity work, extraordinarily well. But they aren’t real in the same sense that neutrons and neurons are real, and we shouldn’t confer upon them the status of “truth” or “laws of nature.”
If physicists adopt this humble mindset, and resist their craving for certitude, they are more likely to seek and hence to find ever more effective theories, perhaps ones that work even better than quantum mechanics. The catch is that they must abandon hope of finding a final formula, one that demystifies, once and for all, our weird, weird world.
Further Reading:
My Quantum Experiment
Quantum Escapism
The Rise of Neo-Geocentrism
See also “Tragedy and Telepathy,” a chapter in my free online book Mind-Body Problems.
And for more ruminations on quantum mechanics and other puzzles, see my new book Pay Attention: Sex, Death, and Science. |
6a87d0205eae36c4 | Orbifolds as configuration spaces of systems with gauge symmetries
Research paper by C. Emmrich, H. Römer
Indexed on: 01 Apr '90. Published on: 01 Apr '90. Published in: Communications in Mathematical Physics.
In systems like Yang-Mills or gravity theory, which have a symmetry of gauge type, neither phase space nor configuration space is a manifold but rather an orbifold with singular points corresponding to classical states of non-generically higher symmetry. The consequences of these symmetries for quantum theory are investigated. First, a certain orbifold configuration space is identified. Then, the Schrödinger equation on this orbifold is considered. As a typical case, the Schrödinger equation on (double) cones over Riemannian manifolds is discussed in detail as a problem of self-adjoint extensions. A marked tendency towards concentration of the wave function around the singular points in configuration space is observed, which generically even reflects itself in the existence of additional bound states and can be interpreted as a quantum mechanism of symmetry enhancement. |
8e1401a7db4f9b9c | Quantum machine learning
Quantum machine learning is an emerging interdisciplinary research area at the intersection of quantum physics and machine learning.[1][2][3][4][5][6] The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning.[7][8][9][10] While machine learning algorithms are used to process immense quantities of data, quantum machine learning increases such capabilities intelligently, by creating opportunities to conduct analysis on quantum states and systems.[11] This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device.[12][13][14] These routines can be more complex in nature and executed faster with the assistance of quantum devices.[2] Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data.[15][16] Beyond quantum computing, the term "quantum machine learning" is often associated with classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning quantum phase transitions[17][18] or creating new quantum experiments.[19][20][21][22] Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa.[23][24][25] Finally, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory".[26]
Machine learning with quantum computers
Quantum-enhanced machine learning refers to quantum algorithms that solve tasks in machine learning, thereby improving and often expediting classical machine learning techniques. Such algorithms typically require one to encode the given classical data set into a quantum computer to make it accessible for quantum information processing. Subsequently, quantum information processing routines are applied and the result of the quantum computation is read out by measuring the quantum system. For example, the outcome of the measurement of a qubit reveals the result of a binary classification task. While many proposals of quantum machine learning algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special purpose quantum devices.
Linear algebra simulation with quantum amplitudes
A number of quantum algorithms for machine learning are based on the idea of amplitude encoding, that is, to associate the amplitudes of a quantum state with the inputs and outputs of computations.[29][30][31][32] Since a state of n qubits is described by 2^n complex amplitudes, this information encoding can allow for an exponentially compact representation. Intuitively, this corresponds to associating a discrete probability distribution over binary random variables with a classical vector. The goal of algorithms based on amplitude encoding is to formulate quantum algorithms whose resources grow polynomially in the number of qubits n, which amounts to logarithmic growth in the number of amplitudes and thereby in the dimension of the input.
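A classical simulation makes the bookkeeping clear. The sketch below (our own illustration, plain numpy, no quantum hardware) normalises a length-2^n vector into a valid amplitude vector:

```python
import numpy as np

# Amplitude encoding: a classical vector with 2^n entries becomes the
# amplitude vector of an n-qubit state after normalisation.
x = np.array([0.5, 1.5, -2.0, 1.0, 0.0, 3.0, -1.0, 2.5])  # 2^3 entries
psi = x / np.linalg.norm(x)       # amplitudes of a 3-qubit state
n = int(np.log2(len(x)))          # 3 qubits suffice for 8 numbers

# A computational-basis measurement returns index i with prob |psi_i|^2.
probs = np.abs(psi) ** 2
print(n, probs.sum())             # 3 1.0
```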
Many quantum machine learning algorithms in this category are based on variations of the quantum algorithm for linear systems of equations[33] (colloquially called HHL, after the paper's authors) which, under specific conditions, performs a matrix inversion using an amount of physical resources growing only logarithmically in the dimensions of the matrix. One of these conditions is that a Hamiltonian which entrywise corresponds to the matrix can be simulated efficiently, which is known to be possible if the matrix is sparse[34] or low rank.[35] For reference, any known classical algorithm for matrix inversion requires a number of operations that grows at least quadratically in the dimension of the matrix.
Quantum matrix inversion can be applied to machine learning methods in which the training reduces to solving a linear system of equations, for example in least-squares linear regression,[30][31] the least-squares version of support vector machines,[29] and Gaussian processes.[32]
A crucial bottleneck of methods that simulate linear algebra computations with the amplitudes of quantum states is state preparation, which often requires one to initialise a quantum system in a state whose amplitudes reflect the features of the entire dataset. Although efficient methods for state preparation are known for specific cases,[36][37] this step easily hides the complexity of the task.[38][39]
Another approach to improving classical machine learning with quantum information processing uses amplitude amplification methods based on Grover's search algorithm, which has been shown to solve unstructured search problems with a quadratic speedup compared to classical algorithms. These quantum routines can be employed for learning algorithms that translate into an unstructured search task, as can be done, for instance, in the case of the k-medians[40] and the k-nearest neighbors algorithms.[7] Another application is a quadratic speedup in the training of perceptrons.[41]
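The quadratic scaling is easy to see in a statevector simulation. This toy sketch (ours, assuming an idealised oracle) runs Grover's iteration on N = 2^10 items:

```python
import numpy as np

# Grover search on N = 2^n items with a single marked index: roughly
# (pi/4)*sqrt(N) iterations rotate the state onto the marked item.
n, marked = 10, 137
N = 2 ** n
psi = np.full(N, 1.0 / np.sqrt(N))       # uniform superposition

steps = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(steps):
    psi[marked] *= -1.0                  # oracle: flip the marked amplitude
    psi = 2.0 * psi.mean() - psi         # diffusion: reflect about the mean

print(steps, abs(psi[marked]) ** 2)      # 25 iterations, success prob ~ 0.9995
```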
Amplitude amplification is often combined with quantum walks to achieve the same quadratic speedup. Quantum walks have been proposed to enhance Google's PageRank algorithm[42] as well as the performance of reinforcement learning agents in the projective simulation framework.[43]
Quantum-enhanced reinforcement learning
Reinforcement learning is a branch of machine learning distinct from supervised and unsupervised learning, which also admits quantum enhancements.[44][43][45][46] In quantum-enhanced reinforcement learning, a quantum agent interacts with a classical environment and occasionally receives rewards for its actions, which allows the agent to adapt its behavior—in other words, to learn what to do in order to gain more rewards. In some situations, either because of the quantum processing capability of the agent,[43] or due to the possibility to probe the environment in superpositions,[28] a quantum speedup may be achieved. Implementations of these kinds of protocols in superconducting circuits[47] and in systems of trapped ions[48][49] have been proposed.
Quantum annealing
Quantum annealing is an optimization technique used to find the minima and maxima of a function over a given set of candidate solutions; it is useful in particular for functions with many competing local minima or maxima. The process can be distinguished from simulated annealing by the quantum tunneling process, by which particles tunnel through kinetic or potential barriers from a high state to a low state. Quantum annealing starts from a superposition of all possible states of a system, weighted equally. Then the time-dependent Schrödinger equation guides the time evolution of the system, serving to affect the amplitude of each state as time increases. Eventually, the system settles into the ground state of the final Hamiltonian, which encodes the solution to the optimization problem.
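A two-qubit toy model shows the mechanism. The sketch below (our illustration; the Hamiltonians and schedule are arbitrary choices, not any vendor's API) integrates the time-dependent Schrödinger equation along H(s) = (1-s)H_B + sH_P and reads off the probability of the minimum-cost state:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Transverse-field mixer H_B and an arbitrary diagonal problem Hamiltonian
# H_P whose minimum cost sits at the basis state |11>.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
HB = -(np.kron(X, I2) + np.kron(I2, X))
HP = np.diag([3.0, 1.0, 2.0, 0.0]).astype(complex)

T = 30.0                                  # large T = slow = adiabatic
def schroedinger(t, psi):
    s = t / T                             # linear annealing schedule
    return -1j * ((1 - s) * HB + s * HP) @ psi

psi0 = np.linalg.eigh(HB)[1][:, 0]        # ground state of H_B: |+>|+>
psi = solve_ivp(schroedinger, [0.0, T], psi0,
                rtol=1e-8, atol=1e-10).y[:, -1]
print(abs(psi[3]) ** 2)                   # close to 1 for slow schedules
```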
Quantum sampling techniques
Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. Examples include deep learning, probabilistic programming, and other machine learning and artificial intelligence applications.
A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, they have recently been recognized as potential candidates to speed up computations that rely on sampling by exploiting quantum effects.[50]
Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[51][52][53][54][55] The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset.
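The quantity a sampler must deliver is the model expectation <s_i s_j> under the Boltzmann distribution. For a model small enough to enumerate, one can compare the exact averages with sampled estimates; a quantum annealer would stand in for the sampler on larger models (sketch ours, with a classical stand-in for the hardware):

```python
import numpy as np

# A 4-spin toy Boltzmann machine with random symmetric couplings W.
rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

# Enumerate all 2^n spin configurations and their Boltzmann weights.
states = np.array([[2 * int(b) - 1 for b in f"{k:0{n}b}"]
                   for k in range(2 ** n)])
E = -0.5 * np.einsum("ki,ij,kj->k", states, W, states)
p = np.exp(-E)
p /= p.sum()

exact = np.einsum("k,ki,kj->ij", p, states, states)    # exact <s_i s_j>
sample = states[rng.choice(2 ** n, size=20000, p=p)]   # ideal-sampler stand-in
estimate = sample.T @ sample / len(sample)
print(np.max(np.abs(exact - estimate)))                # small sampling error
```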
The D-Wave 2X system hosted at NASA Ames Research Center has been recently used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures.[53] Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks.[51] The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets.[52] In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is quantum speedup in sampling applications. Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward.
Inspired by the success of Boltzmann machines based on classical Boltzmann distribution, a new machine learning approach based on quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed.[56] Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. It is possible that a specific type of quantum Boltzmann machine has been trained in the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines.[52][54][57]
Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing.[58] The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template.[59] This provides an exponential reduction in computational complexity in probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware.
Quantum neural networks
Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models. Quantum neural networks are often defined as an expansion of Deutsch's model of a quantum computational network.[60] Within this model, nonlinear and irreversible gates, dissimilar to the Hamiltonian operator, are deployed to process the given data set.[60] Such gates render certain phases unobservable and generate specific oscillations.[60] Quantum neural networks apply the principles of quantum information and quantum computation to classical neurocomputing.[61] Current research suggests that a QNN can exponentially increase the amount of computing power and the degrees of freedom available, which for a classical computer are limited by its size.[61] A quantum neural network has computational capabilities to decrease the number of steps, the number of qubits used, and the computation time.[60] The wave function is to quantum mechanics what the neuron is to neural networks. To test quantum applications in a neural network, quantum dot molecules are deposited on a substrate of GaAs or similar to record how they communicate with one another. Each quantum dot can be regarded as an island of electric activity, and when such dots are close enough (approximately 10–20 nm)[62] electrons can tunnel underneath the islands. An even distribution across the substrate in sets of two creates dipoles and ultimately two spin states, up or down. These states are commonly known as qubits, with corresponding states |0⟩ and |1⟩ in Dirac notation.[62]
Hidden Quantum Markov Models
Hidden Quantum Markov Models[63] (HQMMs) are a quantum-enhanced version of classical Hidden Markov Models (HMMs), which are typically used to model sequential data in various fields like robotics and natural language processing. Unlike the approach taken by other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well.[64] Where classical HMMs use probability vectors to represent hidden 'belief' states, HQMMs use the quantum analogue: density matrices. Recent work has shown that these models can be successfully learned by maximizing the log-likelihood of the given data via classical optimization, and there is some empirical evidence that these models can better model sequential data compared to classical HMMs in practice, although further work is needed to determine exactly when and how these benefits are derived.[64] Additionally, since classical HMMs are a particular kind of Bayes net, an exciting aspect of HQMMs is that the techniques used show how we can perform quantum-analogous Bayesian inference, which should allow for the general construction of the quantum versions of probabilistic graphical models.[64]
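The density-matrix update rule can be written in a few lines. The following sketch (ours; the Kraus operators are random placeholders, not a trained model) filters a belief state through an observed symbol sequence:

```python
import numpy as np

# HQMM belief update: the belief state is a density matrix rho; observing
# symbol y applies Kraus operator K_y and renormalises by the symbol's
# likelihood tr(K_y rho K_y^dag).
rng = np.random.default_rng(1)
d = 2
A = rng.normal(size=(2 * d, d)) + 1j * rng.normal(size=(2 * d, d))
Q, _ = np.linalg.qr(A)          # orthonormal columns: sum_y K_y^dag K_y = I
K = Q.reshape(2, d, d)          # two Kraus operators, one per symbol

rho = np.eye(d) / d             # maximally mixed initial belief
for y in [0, 1, 1, 0]:          # an observed symbol sequence
    rho = K[y] @ rho @ K[y].conj().T
    likelihood = np.trace(rho).real
    rho /= likelihood
print(np.trace(rho).real)       # 1.0: rho stays a valid density matrix
```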
Fully quantum machine learning
In the most general case of quantum machine learning, both the learning device and the system under study, as well as their interaction, are fully quantum. This section gives a few examples of results on this topic.
One class of problem that can benefit from the fully quantum approach is that of 'learning' unknown quantum states, processes or measurements, in the sense that one can subsequently reproduce them on another quantum system. For example, one may wish to learn a measurement that discriminates between two coherent states, given not a classical description of the states to be discriminated, but instead a set of example quantum systems prepared in these states. The naive approach would be to first extract a classical description of the states and then implement an ideal discriminating measurement based on this information. This would only require classical learning. However, one can show that a fully quantum approach is strictly superior in this case.[65] (This also relates to work on quantum pattern matching.[66]) The problem of learning unitary transformations can be approached in a similar way.[67]
Going beyond the specific problem of learning states and transformations, the task of clustering also admits a fully quantum version, wherein both the oracle which returns the distance between data-points and the information processing device which runs the algorithm are quantum.[68] Finally, a general framework spanning supervised, unsupervised and reinforcement learning in the fully quantum setting was introduced in,[28] where it was also shown that the possibility of probing the environment in superpositions permits a quantum speedup in reinforcement learning.
Classical learning applied to quantum problems
The term "quantum machine learning" sometimes refers to classical machine learning performed on data from quantum systems. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other applications include learning Hamiltonians[69] and automatically generating quantum experiments.[20]
Quantum learning theory
Quantum learning theory pursues a mathematical analysis of the quantum generalizations of classical learning models and of the possible speed-ups or other improvements that they may provide. The framework is very similar to that of classical computational learning theory, but the learner in this case is a quantum information processing device, while the data may be either classical or quantum. Quantum learning theory should be contrasted with the quantum-enhanced machine learning discussed above, where the goal was to consider specific problems and to use quantum protocols to improve the time complexity of classical algorithms for these problems. Although quantum learning theory is still under development, partial results in this direction have been obtained.[70]
The starting point in learning theory is typically a concept class, a set of possible concepts. Usually a concept is a function on some domain, such as {0,1}^n. For example, the concept class could be the set of disjunctive normal form (DNF) formulas on n bits or the set of Boolean circuits of some constant depth. The goal for the learner is to learn (exactly or approximately) an unknown target concept from this concept class. The learner may be actively interacting with the target concept, or passively receiving samples from it.
In active learning, a learner can make membership queries to the target concept c, asking for its value c(x) on inputs x chosen by the learner. The learner then has to reconstruct the exact target concept, with high probability. In the model of quantum exact learning, the learner can make membership queries in quantum superposition. If the complexity of the learner is measured by the number of membership queries it makes, then quantum exact learners can be polynomially more efficient than classical learners for some concept classes, but not more.[71] If complexity is measured by the amount of time the learner uses, then there are concept classes that can be learned efficiently by quantum learners but not by classical learners (under plausible complexity-theoretic assumptions).[71]
A natural model of passive learning is Valiant's probably approximately correct (PAC) learning. Here the learner receives random examples (x, c(x)), where x is distributed according to some unknown distribution D. The learner's goal is to output a hypothesis function h such that h(x) = c(x) with high probability when x is drawn according to D. The learner has to be able to produce such an 'approximately correct' h for every D and every target concept c in its concept class. We can consider replacing the random examples by potentially more powerful quantum examples ∑_x √(D(x)) |x, c(x)⟩. In the PAC model (and the related agnostic model), this doesn't significantly reduce the number of examples needed: for every concept class, classical and quantum sample complexity are the same up to constant factors.[72] However, for learning under some fixed distribution D, quantum examples can be very helpful, for example for learning DNF under the uniform distribution.[73] When considering time complexity, there exist concept classes that can be PAC-learned efficiently by quantum learners, even from classical examples, but not by classical learners (again, under plausible complexity-theoretic assumptions).[71]
This passive learning setting is also the most common scheme in supervised learning: a learning algorithm typically takes the training examples as fixed, without the ability to query the labels of unlabelled examples. Outputting a hypothesis h is a step of induction. Classically, an inductive model splits into a training phase and an application phase: the model parameters are estimated during training, and the learned model is then applied arbitrarily many times. In the asymptotic limit of the number of applications, this splitting of phases is also present with quantum resources.[74]
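A toy sketch of the training/application split described above (the threshold rule and all names are illustrative only, not taken from the cited work):

```python
def train(examples):
    # Training phase: estimate one parameter (a decision threshold)
    # from labelled 1-D examples, then freeze it.
    positives = [x for x, y in examples if y == 1]
    negatives = [x for x, y in examples if y == 0]
    return (min(positives) + max(negatives)) / 2

def apply_model(threshold, x):
    # Application phase: reuse the frozen parameter arbitrarily many times.
    return 1 if x >= threshold else 0

theta = train([(0.2, 0), (0.4, 0), (0.7, 1), (0.9, 1)])
print([apply_model(theta, x) for x in (0.1, 0.5, 0.8)])  # -> [0, 0, 1]
```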
Implementations and experiments
The earliest experiments were conducted using the adiabatic D-Wave quantum computer, for instance, to detect cars in digital images using regularized boosting with a nonconvex objective function in a demonstration in 2009.[75] Many experiments followed on the same architecture, and leading tech companies have shown interest in the potential of quantum machine learning for future technological implementations. In 2013, Google Research, NASA, and the Universities Space Research Association launched the Quantum Artificial Intelligence Lab, which explores the use of the adiabatic D-Wave quantum computer.[76][77] A more recent example trained probabilistic generative models with arbitrary pairwise connectivity, showing that such models can generate handwritten digits as well as reconstruct noisy images of bars and stripes and of handwritten digits.[52]
Using a different annealing technology based on nuclear magnetic resonance (NMR), a quantum Hopfield network was implemented in 2009 that mapped the input data and memorized data to Hamiltonians, allowing the use of adiabatic quantum computation.[78] NMR technology also enables universal quantum computing, and it was used for the first experimental implementation of a quantum support vector machine, which distinguished the handwritten numbers '6' and '9' on a liquid-state quantum computer in 2015.[79] The training stage involved pre-processing each image into a normalized 2-dimensional vector so that the image is represented as the state of a qubit; the two entries of the vector are the vertical and horizontal ratios of the pixel intensity of the image. Once the vectors are defined in the feature space, the quantum support vector machine is used to classify an unknown input vector. The readout avoids costly quantum tomography by reading out the final state in terms of the direction (up/down) of the NMR signal.
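A rough sketch of this kind of preprocessing, with the caveat that the exact "vertical and horizontal ratio" used in the experiment is not fully specified here, so the half-image intensity shares below are an assumption:

```python
import numpy as np

def image_to_qubit_vector(img):
    # Illustrative only: take the top-half and left-half intensity shares
    # as the two features, then normalize so the 2-vector can serve as the
    # amplitudes of a qubit state a|0> + b|1>.
    total = img.sum()
    vertical = img[: img.shape[0] // 2, :].sum() / total
    horizontal = img[:, : img.shape[1] // 2].sum() / total
    vec = np.array([vertical, horizontal], dtype=float)
    return vec / np.linalg.norm(vec)  # unit vector = qubit amplitudes

img = np.random.rand(16, 16)       # stand-in for a scanned digit
print(image_to_qubit_vector(img))  # e.g. [0.70..., 0.70...]
```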
Photonic implementations are attracting more attention,[80] not least because they do not require extensive cooling. Simultaneous spoken-digit and speaker recognition and chaotic time-series prediction were demonstrated at data rates beyond 1 gigabyte per second in 2013.[81] Using non-linear photonics, an all-optical perceptron was implemented that could learn the classification boundary iteratively from training data through a feedback rule.[82] A core building block in many learning algorithms is the calculation of the distance between two vectors: this was first experimentally demonstrated for up to eight dimensions using entangled qubits in a photonic quantum computer in 2015.[83]
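The textbook primitive for this kind of overlap/distance estimation is the swap test, in which an ancilla qubit reads 0 with probability (1 + |⟨a|b⟩|²)/2; the photonic experiment cited above used its own entanglement-based scheme, so the following is only a classical simulation of swap-test measurement statistics:

```python
import numpy as np

def swap_test_overlap(a, b, shots=10_000, seed=0):
    # The swap test yields ancilla outcome 0 with probability
    # (1 + |<a|b>|^2) / 2; inverting the observed frequency estimates the
    # squared overlap without reading out a or b directly. Here we only
    # sample the measurement statistics classically.
    p0 = 0.5 * (1.0 + abs(np.vdot(a, b)) ** 2)
    zeros = np.random.default_rng(seed).binomial(shots, p0)
    return 2.0 * zeros / shots - 1.0

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(swap_test_overlap(a, b))  # close to |<a|b>|^2 = 0.5
```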
More recently, a novel neuromimetic ingredient has been added to the field of quantum machine learning in the form of the so-called quantum memristor, a quantized model of the standard classical memristor.[84] This device can be constructed by means of a tunable resistor, weak measurements on the system, and a classical feed-forward mechanism. An implementation of a quantum memristor in superconducting circuits has been proposed,[85] and an experiment with quantum dots performed.[86] A quantum memristor would implement nonlinear interactions in the quantum dynamics, which would aid the search for a fully functional quantum neural network.
In 2016, IBM launched an online cloud-based platform for quantum software developers, called the IBM Q Experience. The platform consists of several fully operational quantum processors accessible via the IBM Web API. In doing so, the company is encouraging software developers to pursue new algorithms through a development environment with quantum capabilities. New architectures are being explored on an experimental basis, up to 32 qubits, using both trapped-ion and superconducting quantum computing methods.
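For flavor, here is a minimal circuit of the kind a developer would submit through such a platform, written with IBM's Qiskit SDK; this is only a construction sketch, since the execution/backend API has varied across Qiskit versions:

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)     # two qubits, two classical bits
qc.h(0)                       # Hadamard puts qubit 0 into superposition
qc.cx(0, 1)                   # CNOT entangles the qubits (Bell state)
qc.measure([0, 1], [0, 1])    # read both qubits into the classical bits
print(qc.draw())
```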
References
1. Schuld, Maria; Petruccione, Francesco (2018). Supervised Learning with Quantum Computers. Quantum Science and Technology. doi:10.1007/978-3-319-96424-9. ISBN 978-3-319-96423-2.
2. Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco (2014). "An introduction to quantum machine learning". Contemporary Physics. 56 (2): 172–185. arXiv:1409.3097. Bibcode:2015ConPh..56..172S. doi:10.1080/00107514.2014.964942.
3. Wittek, Peter (2014). Quantum Machine Learning: What Quantum Computing Means to Data Mining. Academic Press. ISBN 978-0-12-800953-6.
4. Adcock, Jeremy; Allen, Euan; Day, Matthew; Frick, Stefan; Hinchliff, Janna; Johnson, Mack; Morley-Short, Sam; Pallister, Sam; Price, Alasdair; Stanisic, Stasja (2015). "Advances in quantum machine learning". arXiv:1512.02900 [quant-ph].
5. Biamonte, Jacob; Wittek, Peter; Pancotti, Nicola; Rebentrost, Patrick; Wiebe, Nathan; Lloyd, Seth (2017). "Quantum machine learning". Nature. 549 (7671): 195–202. arXiv:1611.09347. Bibcode:2017Natur.549..195B. doi:10.1038/nature23474. PMID 28905917.
6. Perdomo-Ortiz, Alejandro; Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak (2018). "Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers". Quantum Science and Technology. 3 (3): 030502. arXiv:1708.09757. Bibcode:2018QS&T....3c0502P. doi:10.1088/2058-9565/aab859.
7. Wiebe, Nathan; Kapoor, Ashish; Svore, Krysta (2014). "Quantum Algorithms for Nearest-Neighbor Methods for Supervised and Unsupervised Learning". Quantum Information & Computation. 15 (3): 0318–0358. arXiv:1401.2142. Bibcode:2014arXiv1401.2142W.
8. Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2013). "Quantum algorithms for supervised and unsupervised machine learning". arXiv:1307.0411 [quant-ph].
9. Yoo, Seokwon; Bang, Jeongho; Lee, Changhyoup; Lee, Jinhyoung (2014). "A quantum speedup in machine learning: Finding an N-bit Boolean function for a classification". New Journal of Physics. 16 (10): 103014. arXiv:1303.6055. Bibcode:2014NJPh...16j3014Y. doi:10.1088/1367-2630/16/10/103014.
10. Lee, Joong-Sung; Bang, Jeongho; Hong, Sunghyuk; Lee, Changhyoup; Seol, Kang Hee; Lee, Jinhyoung; Lee, Kwang-Geol (2019). "Experimental demonstration of quantum learning speedup with classical input data". Physical Review A. 99 (1): 012313. arXiv:1706.01561. Bibcode:2019PhRvA..99a2313L. doi:10.1103/PhysRevA.99.012313.
11. Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco (2014-10-15). "An introduction to quantum machine learning". Contemporary Physics. 56 (2): 172–185. doi:10.1080/00107514.2014.964942. ISSN 0010-7514.
12. Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro (2017-11-30). "Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models". Physical Review X. 7 (4): 041052. arXiv:1609.02542. Bibcode:2017PhRvX...7d1052B. doi:10.1103/PhysRevX.7.041052. ISSN 2160-3308.
13. Farhi, Edward; Neven, Hartmut (2018-02-16). "Classification with Quantum Neural Networks on Near Term Processors". arXiv:1802.06002 [quant-ph].
14. Schuld, Maria; Bocharov, Alex; Svore, Krysta; Wiebe, Nathan (2018-04-02). "Circuit-centric quantum classifiers". arXiv:1804.00633 [quant-ph].
15. Yu, Shang; Albarran-Arriagada, F.; Retamal, J. C.; Wang, Yi-Tao; Liu, Wei; Ke, Zhi-Jin; Meng, Yu; Li, Zhi-Peng; Tang, Jian-Shun (2018-08-28). "Reconstruction of a Photonic Qubit State with Quantum Reinforcement Learning". Advanced Quantum Technologies. 2 (7–8). arXiv:1808.09241. doi:10.1002/qute.201800074.
16. Ghosh, Sanjib; Opala, A.; Matuszewski, M.; Paterek, T.; Liew, Timothy C. H. (2019). "Quantum reservoir processing". NPJ Quantum Information. 5 (35): 35. arXiv:1811.10335. Bibcode:2019npjQI...5...35G. doi:10.1038/s41534-019-0149-8.
17. Broecker, Peter; Assaad, Fakher F.; Trebst, Simon (2017-07-03). "Quantum phase recognition via unsupervised machine learning". arXiv:1707.00663 [cond-mat.str-el].
18. Huembeli, Patrick; Dauphin, Alexandre; Wittek, Peter (2018). "Identifying Quantum Phase Transitions with Adversarial Neural Networks". Physical Review B. 97 (13): 134109. arXiv:1710.08382. Bibcode:2018PhRvB..97m4109H. doi:10.1103/PhysRevB.97.134109. ISSN 2469-9950.
19. Dunjko, Vedran; Briegel, Hans J (2018-06-19). "Machine learning & artificial intelligence in the quantum domain: a review of recent progress". Reports on Progress in Physics. 81 (7): 074001. Bibcode:2018RPPh...81g4001D. doi:10.1088/1361-6633/aab406. ISSN 0034-4885. PMID 29504942.
20. Krenn, Mario (2016-01-01). "Automated Search for new Quantum Experiments". Physical Review Letters. 116 (9): 090405. arXiv:1509.02749. Bibcode:2016PhRvL.116i0405K. doi:10.1103/PhysRevLett.116.090405. PMID 26991161.
21. Knott, Paul (2016-03-22). "A search algorithm for quantum state engineering and metrology". New Journal of Physics. 18 (7): 073033. arXiv:1511.05327. Bibcode:2016NJPh...18g3033K. doi:10.1088/1367-2630/18/7/073033.
22. Melnikov, Alexey A.; Nautrup, Hendrik Poulsen; Krenn, Mario; Dunjko, Vedran; Tiersch, Markus; Zeilinger, Anton; Briegel, Hans J. (2018). "Active learning machine learns to create new quantum experiments". Proceedings of the National Academy of Sciences. 115 (6): 1221–1226. arXiv:1706.00868. doi:10.1073/pnas.1714936115. ISSN 0027-8424. PMC 5819408. PMID 29348200.
23. Huggins, William; Patel, Piyush; Whaley, K. Birgitta; Stoudenmire, E. Miles (2018-03-30). "Towards Quantum Machine Learning with Tensor Networks". Quantum Science and Technology. 4 (2): 024001. arXiv:1803.11537. doi:10.1088/2058-9565/aaea94.
24. Carleo, Giuseppe; Nomura, Yusuke; Imada, Masatoshi (2018-02-26). "Constructing exact representations of quantum many-body systems with deep neural networks". Nature Communications. 9 (1): 5322. arXiv:1802.09558. Bibcode:2018NatCo...9.5322C. doi:10.1038/s41467-018-07520-3. PMC 6294148. PMID 30552316.
25. Bény, Cédric (2013-01-14). "Deep learning and the renormalization group". arXiv:1301.3124 [quant-ph].
26. Arunachalam, Srinivasan; de Wolf, Ronald (2017-01-24). "A Survey of Quantum Learning Theory". arXiv:1701.06806 [quant-ph].
27. Aïmeur, Esma; Brassard, Gilles; Gambs, Sébastien (2006-06-07). Machine Learning in a Quantum World. Advances in Artificial Intelligence. Lecture Notes in Computer Science. 4013. pp. 431–442. doi:10.1007/11766247_37. ISBN 978-3-540-34628-9.
28. Dunjko, Vedran; Taylor, Jacob M.; Briegel, Hans J. (2016-09-20). "Quantum-Enhanced Machine Learning". Physical Review Letters. 117 (13): 130501. arXiv:1610.08251. Bibcode:2016PhRvL.117m0501D. doi:10.1103/PhysRevLett.117.130501. PMID 27715099.
29. Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth (2014). "Quantum Support Vector Machine for Big Data Classification". Physical Review Letters. 113 (13): 130503. arXiv:1307.0471. Bibcode:2014PhRvL.113m0503R. doi:10.1103/PhysRevLett.113.130503. hdl:1721.1/90391. PMID 25302877.
30. Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2012). "Quantum Algorithm for Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. Bibcode:2012PhRvL.109e0505W. doi:10.1103/PhysRevLett.109.050505. PMID 23006156.
31. Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco (2016). "Prediction by linear regression on a quantum computer". Physical Review A. 94 (2): 022342. arXiv:1601.07823. Bibcode:2016PhRvA..94b2342S. doi:10.1103/PhysRevA.94.022342.
32. Zhao, Zhikuan; Fitzsimons, Jack K.; Fitzsimons, Joseph F. (2019). "Quantum assisted Gaussian process regression". Physical Review A. 99 (5): 052331. arXiv:1512.03929. doi:10.1103/PhysRevA.99.052331.
33. Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2008). "Quantum algorithm for solving linear systems of equations". Physical Review Letters. 103 (15): 150502. arXiv:0811.3171. Bibcode:2009PhRvL.103o0502H. doi:10.1103/PhysRevLett.103.150502. PMID 19905613.
34. Berry, Dominic W.; Childs, Andrew M.; Kothari, Robin (2015). Hamiltonian simulation with nearly optimal dependence on all parameters. 56th Annual Symposium on Foundations of Computer Science. IEEE. pp. 792–809. arXiv:1501.01715. doi:10.1109/FOCS.2015.54.
35. Lloyd, Seth; Mohseni, Masoud; Rebentrost, Patrick (2014). "Quantum principal component analysis". Nature Physics. 10 (9): 631. arXiv:1307.0401. Bibcode:2014NatPh..10..631L. doi:10.1038/nphys3029.
36. Soklakov, Andrei N.; Schack, Rüdiger (2006). "Efficient state preparation for a register of quantum bits". Physical Review A. 73 (1): 012307. arXiv:quant-ph/0408045. Bibcode:2006PhRvA..73a2307S. doi:10.1103/PhysRevA.73.012307.
37. Giovannetti, Vittorio; Lloyd, Seth; MacCone, Lorenzo (2008). "Quantum Random Access Memory". Physical Review Letters. 100 (16): 160501. arXiv:0708.1879. Bibcode:2008PhRvL.100p0501G. doi:10.1103/PhysRevLett.100.160501. PMID 18518173.
38. Aaronson, Scott (2015). "Read the fine print". Nature Physics. 11 (4): 291–293. Bibcode:2015NatPh..11..291A. doi:10.1038/nphys3272.
39. Bang, Jeongho; Dutta, Arijit; Lee, Seung-Woo; Kim, Jaewan (2019). "Optimal usage of quantum random access memory in quantum machine learning". Physical Review A. 99 (1): 012326. arXiv:1809.04814. Bibcode:2019PhRvA..99a2326B. doi:10.1103/PhysRevA.99.012326.
40. Aïmeur, Esma; Brassard, Gilles; Gambs, Sébastien (2013-02-01). "Quantum speed-up for unsupervised learning". Machine Learning. 90 (2): 261–287. doi:10.1007/s10994-012-5316-5. ISSN 0885-6125.
41. Wiebe, Nathan; Kapoor, Ashish; Svore, Krysta M. (2016). Quantum Perceptron Models. Advances in Neural Information Processing Systems. 29. pp. 3999–4007. arXiv:1602.04799. Bibcode:2016arXiv160204799W.
42. Paparo, Giuseppe Davide; Martin-Delgado, Miguel Angel (2012). "Google in a Quantum Network". Scientific Reports. 2 (444): 444. arXiv:1112.2079. Bibcode:2012NatSR...2E.444P. doi:10.1038/srep00444. PMC 3370332. PMID 22685626.
43. Paparo, Giuseppe Davide; Dunjko, Vedran; Makmal, Adi; Martin-Delgado, Miguel Angel; Briegel, Hans J. (2014). "Quantum Speedup for Active Learning Agents". Physical Review X. 4 (3): 031002. arXiv:1401.4997. Bibcode:2014PhRvX...4c1002P. doi:10.1103/PhysRevX.4.031002.
44. Dong, Daoyi; Chen, Chunlin; Li, Hanxiong; Tarn, Tzyh-Jong (2008). "Quantum Reinforcement Learning". IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 38 (5): 1207–1220. arXiv:0810.3828. doi:10.1109/TSMCB.2008.925743. PMID 18784007.
45. Crawford, Daniel; Levit, Anna; Ghadermarzy, Navid; Oberoi, Jaspreet S.; Ronagh, Pooya (2018). "Reinforcement Learning Using Quantum Boltzmann Machines". arXiv:1612.05695 [quant-ph].
46. Briegel, Hans J.; Cuevas, Gemma De las (2012-05-15). "Projective simulation for artificial intelligence". Scientific Reports. 2 (400): 400. arXiv:1104.3787. Bibcode:2012NatSR...2E.400B. doi:10.1038/srep00400. ISSN 2045-2322. PMC 3351754. PMID 22590690.
47. Lamata, Lucas (2017). "Basic protocols in quantum reinforcement learning with superconducting circuits". Scientific Reports. 7 (1): 1609. arXiv:1701.05131. Bibcode:2017NatSR...7.1609L. doi:10.1038/s41598-017-01711-6. PMC 5431677. PMID 28487535.
48. Dunjko, V.; Friis, N.; Briegel, H. J. (2015-01-01). "Quantum-enhanced deliberation of learning agents using trapped ions". New Journal of Physics. 17 (2): 023006. arXiv:1407.2830. Bibcode:2015NJPh...17b3006D. doi:10.1088/1367-2630/17/2/023006. ISSN 1367-2630.
49. Sriarunothai, Th.; Wölk, S.; Giri, G. S.; Friis, N.; Dunjko, V.; Briegel, H. J.; Wunderlich, Ch. (2019). "Speeding-up the decision making of a learning agent using an ion trap quantum processor". Quantum Science and Technology. 4 (1): 015014. arXiv:1709.01366. doi:10.1088/2058-9565/aaef5e. ISSN 2058-9565.
50. Biswas, Rupak; Jiang, Zhang; Kechedzhi, Kostya; Knysh, Sergey; Mandrà, Salvatore; O’Gorman, Bryan; Perdomo-Ortiz, Alejandro; Petukhov, Andre; Realpe-Gómez, John; Rieffel, Eleanor; Venturelli, Davide; Vasko, Fedir; Wang, Zhihui (2016). "A NASA perspective on quantum computing: Opportunities and challenges". Parallel Computing. 64: 81–98. arXiv:1704.04836. doi:10.1016/j.parco.2016.11.002.
51. Adachi, Steven H.; Henderson, Maxwell P. (2015). "Application of quantum annealing to training of deep neural networks". arXiv:1510.06356 [quant-ph].
52. Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro (2017). "Quantum-assisted learning of graphical models with arbitrary pairwise connectivity". Physical Review X. 7 (4): 041052. arXiv:1609.02542. Bibcode:2017PhRvX...7d1052B. doi:10.1103/PhysRevX.7.041052.
53. Benedetti, Marcello; Realpe-Gómez, John; Biswas, Rupak; Perdomo-Ortiz, Alejandro (2016). "Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning". Physical Review A. 94 (2): 022308. arXiv:1510.07611. Bibcode:2016PhRvA..94b2308B. doi:10.1103/PhysRevA.94.022308.
54. Korenkevych, Dmytro; Xue, Yanbo; Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Rolfe, Jason; Andriyash, Evgeny (2016). "Benchmarking quantum hardware for training of fully visible Boltzmann machines". arXiv:1611.04528 [quant-ph].
55. Khoshaman, Amir; Vinci, Walter; Denis, Brandon; Andriyash, Evgeny; Amin, Mohammad H (2018-09-12). "Quantum variational autoencoder". Quantum Science and Technology. 4 (1): 014001. arXiv:1802.05779. doi:10.1088/2058-9565/aada1f. ISSN 2058-9565.
56. Amin, Mohammad H.; Andriyash, Evgeny; Rolfe, Jason; Kulchytskyy, Bohdan; Melko, Roger (2018). "Quantum Boltzmann machines". Physical Review X. 8 (2): 021050. arXiv:1601.02036. doi:10.1103/PhysRevX.8.021050.
57. "Phys. Rev. E 72, 026701 (2005): Quantum annealing in a kinetically co…". archive.is. 2014-01-13. Retrieved 2018-12-07.
58. Wiebe, Nathan; Kapoor, Ashish; Svore, Krysta M. (2014). "Quantum deep learning". arXiv:1412.3489 [quant-ph].
59. Wittek, Peter; Gogolin, Christian (2017). "Quantum Enhanced Inference in Markov Logic Networks". Scientific Reports. 7 (45672): 45672. arXiv:1611.08104. Bibcode:2017NatSR...745672W. doi:10.1038/srep45672. PMC 5395824. PMID 28422093.
60. Gupta, Sanjay; Zia, R.K.P. (2001-11-01). "Quantum Neural Networks". Journal of Computer and System Sciences. 63 (3): 355–383. arXiv:quant-ph/0201144. doi:10.1006/jcss.2001.1769. ISSN 0022-0000.
61. Ezhov, Alexandr A.; Ventura, Dan (2000), "Quantum Neural Networks", Future Directions for Intelligent Systems and Information Sciences, Physica-Verlag HD, pp. 213–235, doi:10.1007/978-3-7908-1856-7_11, ISBN 9783790824704
62. Behrman, E.C.; Nash, L.R.; Steck, J.E.; Chandrashekar, V.G.; Skinner, S.R. (2000-10-01). "Simulations of quantum neural networks". Information Sciences. 128 (3–4): 257–269. doi:10.1016/S0020-0255(00)00056-6. ISSN 0020-0255.
63. Clark, Lewis A.; Huang, Wei; Barlow, Thomas H.; Beige, Almut (2015). "Hidden Quantum Markov Models and Open Quantum Systems with Instantaneous Feedback". In Sanayei, Ali; Rössler, Otto E.; Zelinka, Ivan (eds.). ISCS 2014: Interdisciplinary Symposium on Complex Systems. Emergence, Complexity and Computation. 14. Springer. pp. 131–151. arXiv:1406.5847. doi:10.1007/978-3-319-10759-2_16. ISBN 978-3-319-10759-2.
64. Srinivasan, Siddarth; Gordon, Geoff; Boots, Byron (2018). "Learning Hidden Quantum Markov Models" (PDF). Aistats.
65. Sentís, Gael; Guţă, Mădălin; Adesso, Gerardo (9 July 2015). "Quantum learning of coherent states". EPJ Quantum Technology. 2 (1). doi:10.1140/epjqt/s40507-015-0030-4.
66. Sasaki, Masahide; Carlini, Alberto (6 August 2002). "Quantum learning and universal quantum matching machine". Physical Review A. 66 (2): 022303. arXiv:quant-ph/0202173. Bibcode:2002PhRvA..66b2303S. doi:10.1103/PhysRevA.66.022303.
67. Bisio, Alessandro; Chiribella, Giulio; D’Ariano, Giacomo Mauro; Facchini, Stefano; Perinotti, Paolo (25 March 2010). "Optimal quantum learning of a unitary transformation". Physical Review A. 81 (3): 032324. arXiv:0903.0543. Bibcode:2010PhRvA..81c2324B. doi:10.1103/PhysRevA.81.032324.
68. Aïmeur, Esma; Brassard, Gilles; Gambs, Sébastien (1 January 2007). Quantum Clustering Algorithms. Proceedings of the 24th International Conference on Machine Learning. pp. 1–8. doi:10.1145/1273496.1273497. ISBN 9781595937933.
69. Cory, D. G.; Wiebe, Nathan; Ferrie, Christopher; Granade, Christopher E. (2012-07-06). "Robust Online Hamiltonian Learning". New Journal of Physics. 14 (10): 103013. arXiv:1207.1655v2. doi:10.1088/1367-2630/14/10/103013.
70. Arunachalam, Srinivasan; de Wolf, Ronald (2017). "A Survey of Quantum Learning Theory". arXiv:1701.06806 [quant-ph].
71. Servedio, Rocco A.; Gortler, Steven J. (2004). "Equivalences and Separations Between Quantum and Classical Learnability". SIAM Journal on Computing. 33 (5): 1067–1092. doi:10.1137/S0097539704412910.
72. Arunachalam, Srinivasan; de Wolf, Ronald (2016). "Optimal Quantum Sample Complexity of Learning Algorithms". arXiv:1607.00932 [quant-ph].
73. Bshouty, Nader H.; Jackson, Jeffrey C. (1999). "Learning DNF over the Uniform Distribution Using a Quantum Example Oracle". SIAM Journal on Computing. 28 (3): 1136–1153. doi:10.1137/S0097539795293123.
74. Monràs, Alex; Sentís, Gael; Wittek, Peter (2017). "Inductive supervised quantum learning". Physical Review Letters. 118 (19): 190503. arXiv:1605.07541. Bibcode:2017PhRvL.118s0503M. doi:10.1103/PhysRevLett.118.190503. PMID 28548536.
75. "NIPS 2009 Demonstration: Binary Classification using Hardware Implementation of Quantum Annealing" (PDF). Static.googleusercontent.com. Retrieved 26 November 2014.
76. "Google Quantum A.I. Lab Team". Google Plus. 31 January 2017. Retrieved 31 January 2017.
77. "NASA Quantum Artificial Intelligence Laboratory". NASA. NASA. 31 January 2017. Retrieved 31 January 2017.
78. Neigovzen, Rodion; Neves, Jorge L.; Sollacher, Rudolf; Glaser, Steffen J. (2009). "Quantum pattern recognition with liquid-state nuclear magnetic resonance". Physical Review A. 79 (4): 042321. arXiv:0802.1592. Bibcode:2009PhRvA..79d2321N. doi:10.1103/PhysRevA.79.042321.
79. Li, Zhaokai; Liu, Xiaomei; Xu, Nanyang; Du, Jiangfeng (2015). "Experimental Realization of a Quantum Support Vector Machine". Physical Review Letters. 114 (14): 140504. arXiv:1410.1054. Bibcode:2015PhRvL.114n0504L. doi:10.1103/PhysRevLett.114.140504. PMID 25910101.
80. Wan, Kwok-Ho; Dahlsten, Oscar; Kristjansson, Hler; Gardner, Robert; Kim, Myungshik (2017). "Quantum generalisation of feedforward neural networks". NPJ Quantum Information. 3 (36): 36. arXiv:1612.01045. Bibcode:2017npjQI...3...36W. doi:10.1038/s41534-017-0032-4.
81. Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo (2013). "Parallel photonic information processing at gigabyte per second data rates using transient states". Nature Communications. 4: 1364. Bibcode:2013NatCo...4.1364B. doi:10.1038/ncomms2368. PMC 3562454. PMID 23322052.
82. Tezak, Nikolas; Mabuchi, Hideo (2015). "A coherent perceptron for all-optical learning". EPJ Quantum Technology. 2. arXiv:1501.01608. Bibcode:2015arXiv150101608T. doi:10.1140/epjqt/s40507-015-0023-3.
83. Cai, X.-D.; Wu, D.; Su, Z.-E.; Chen, M.-C.; Wang, X.-L.; Li, Li; Liu, N.-L.; Lu, C.-Y.; Pan, J.-W. (2015). "Entanglement-Based Machine Learning on a Quantum Computer". Physical Review Letters. 114 (11): 110504. arXiv:1409.7770. Bibcode:2015PhRvL.114k0504C. doi:10.1103/PhysRevLett.114.110504. PMID 25839250.
84. Pfeiffer, P.; Egusquiza, I. L.; Di Ventra, M.; Sanz, M.; Solano, E. (2016). "Quantum memristors". Scientific Reports. 6 (2016): 29507. arXiv:1511.02192. Bibcode:2016NatSR...629507P. doi:10.1038/srep29507. PMC 4933948. PMID 27381511.
85. Salmilehto, J.; Deppe, F.; Di Ventra, M.; Sanz, M.; Solano, E. (2017). "Quantum Memristors with Superconducting Circuits". Scientific Reports. 7 (42044): 42044. arXiv:1603.04487. Bibcode:2017NatSR...742044S. doi:10.1038/srep42044. PMC 5307327. PMID 28195193.
86. Li, Ying; Holloway, Gregory W.; Benjamin, Simon C.; Briggs, G. Andrew D.; Baugh, Jonathan; Mol, Jan A. (2017). "A simple and robust quantum memristor". Physical Review B. 96 (7): 075446. arXiv:1612.08409. Bibcode:2017PhRvB..96g5446L. doi:10.1103/PhysRevB.96.075446. |
0662f418ab5b2cd5 | The Fabric of the Cosmos
2011, Science - 207 Comments
Ratings: 8.73/10 from 164 users.
Interweaving provocative theories, experiments, and stories with crystal-clear explanations and imaginative metaphors like those that defined the groundbreaking and highly acclaimed series The Elegant Universe, The Fabric of the Cosmos aims to be the most compelling, visual, and comprehensive picture of modern physics ever seen on television.
207 Comments / User Reviews
1. So it's not Crimplene or Lycra then.
"Amazin'!" — Brian Cox
2. Lucky to be here.
3. Compressing the infinitesimal... Yeah... That's great. Is this made possible by the same equation that brought us the period in history where time didn't exist? OK. That's how the world is flat these days.
4. This is the dumbed down version of the cosmos. Some of this stuff is interesting but for me it's far too American. I understand that the point is to appeal to as many people as possible but this may as well be a children's program on theoretical physics at times.
1. I've read the book and have now watched the video documentaries.
The two physics Brians, Cox and Greene, have motivated enthusiasm in curious youngsters and reawakened science thrills in old scientists like me.
What has become obvious to me is that the physics teachers who taught me were unaware of Einstein, Dirac, Bohr, Feynman etc.
The fabric of the cosmos should be preached from church pulpits instead of the usual boring bulls*it!
5. "Life/Reality we know it may just be a projection of all the information/data residing on the event horizon...." mind-boggling!
6. I agree with Bruce. About 20 mins of interesting content padded out into an hour and dramatized for an impatient dumb audience.
7. Remember it is just TV entertainment for the masses. We expect everything to be extremely interesting, so they try to provide it. Complete factual science may seem slow and boring to many. If you want nothing but factual science you may not have as much fun with it yourself. I love all the ideas, even if I think some are crap. "The only dumb question...."
8. Pretty far-fetched theories. Some seem almost equal to the supernatural in stature.
So this may be the right place to ask: Does the universe exist with no observer? Does the observer give it dimension?
9. This video quickly got off the topic concerning the actual nature of space. Instead of talking about the factual fabric of the cosmos, it talked about Alice-in-Wonderland fantasies that have nothing to do with the real nature of space. However, the video is correct in one aspect: modern physicists and cosmologists have deceived us all. Brian Greene and his associates are fable salesmen, trying to convince people that the three-dimensional world we live in is really a two-dimensional holographic projection on the surface of a black hole. All the irrational scientists in this video also show up in many other videos spieling curved space-time conceptualizations and quantum mechanical notions as facts. They are like blind men looking at a photo of an elephant, then declaring themselves as experts on the subject. Society should be very skeptical of their elusive postulations. Frankly, their imaginations have gotten way out of hand.
So, what is the real nature of space... space is physical volume. We see physical volume everywhere we look out, and everywhere we look in. Physical volume is the most abundant aspect composing the nature of the universe. We can factually prove that physical volume exists because we can see it, touch it, move through it, and measure it. Physical volume is an undisputable fact.
The nature of physical volume has three obvious spatial dimensions, height, width, and length; thus, volume is a spatial measurement. In all physical applications on Earth, physical volume is a geometric measurement deriving the quantity of a solid, liquid, or gas. In other words, volume is a product of substance, a measurement of something tangible, defined by shape and amount.
To put the nature of physical volume in a pure perspective, if we remove all matter and energy from the cosmos, then there is only pure physical volume remaining. The question becomes, what is the physical causation of this remaining volume. In order for physical volume to exist, it must have four fundamental components, three of spatial dimensions, and one of substantive causation. Obviously, physical volume has to exist first, for matter and energy to exist within and move through. Therefore, we can look at this remaining space as a proto-state, representing the fabric composing the cosmos, and call this state of ambient physical volume "Protospace."
So, what do we know for fact; we know that volume is a product of physical substance. We know that physical volume has four components that compose its existence. We know that physical volume is a homogeneous continuum, continuously existing between particles, planets, stars, and beyond galaxies. And, we know that physical volume had to exist before matter or energy could come into existence. Functionally, physical volume is the most elemental aspect of nature. The fact that physical volume exists, gives us enough information to decipher the totality of its nature. Where it comes from, how old it is, what it is made of, and how it formulates into the material universe we see today. The clue here is that substance causes volume, volume does not cause substance. Physical volume is the Rosetta stone of both the origin and formation composing the universe.
Unfortunately, orthodox physics and cosmology are not dealing with factual based science; they postulate irrational notions such as the big bang, curved space-time, elusive quantum constituencies, multiverse, and holographic conceptualizations. These concepts have no actual facts to support them. They are an abyss of fantasies, only supported by elaborate equations. Theorists forget that mathematics is only symbolic representation of values in an event that can apply to any concept whether it is real or not. Two angels plus two angels equals four angels; the math works, but does not, and cannot, prove or predict angels exist. Science based on subjective equations is a fool's journey.
To understand what the fabric of space is really made of, you only need look at what you can factually see and measure. The answers are all there... they are hiding in plain sight waiting to be recognized.
1. This may be the first time I am saying this, but what are your qualifications? And from what sources are you deriving this info? The only physical volume that I know of is what is derived from hard disc space or other such devices.
To be physical, as in your physical volume, is to have mass/matter, and mass means to be made up from atoms, but atoms have only .0999999% matter.
Do not really know what you are talking about, or did you just make all this up?
2. I use the term "physical volume" to mean that the phenomenon of space has a substantive causation.
Anything that physically exists, regardless of its form, must be made of something tangible to exist, if it was not, that would be magic. Since space obviously exists, then it must be made of something physical in order to exist. Physical volume is a case of substantive cause and effect.
If you think about it, substance and volume are inseparable. If there were no substance, there would be no amount of volume. Likewise, if there were no volume, there would be no amount of substance. Each is a function of the other, where one exists so must the other.
Mass is anything substantive that can be quantified. Particles (finite units) are one type of mass, space (homogeneous continuum) is a different type of mass. Both are made of the same substance, just in different states. Mass represents the amount of substance something is made of. A particle has finite mass, while space has infinite mass.
Particles exist within a greater context of space. If space did not exist first, there would be no place for particles to exist within or move around. Obviously, space has to exist before particles. The mass of space generates the mass composing particles. In other words, particles are a byproduct of space; there is nowhere else for the mass of particles to come from.
3. What if C A T really spelled DOG?
4. Our universe did not exist until one Planck sec, 10^-34, after the BB, an expansion, not an explosion, and all space, plasma, matter, time came into being from that singularity, not just space first, which you say all derived from.
5. The big bang is a fairy tale. There are absolutely no facts on any level to support the notion. Hearsay as evidence is not a fact, nor is consensus.
A singularity is also a fairy tale. To say that the material universe is derived from a singularity, a point so small that it cannot be quantified, is ridiculous. It too has no facts, and relies on hearsay and consensus as its only support.
Physical volume (space) is a fact.
6. I will not be your tutor, crack open some science books.
7. Many books and videos, (including this one,) assume that parallel universes, relativity, the big bang, holographic existence, or quantum mechanics are based on facts; they are not, these are all unsubstantiated theories. There is not one shred of tangible facts to prove any of these notions. Theoretical physicists and cosmologists use elaborate subjective equations as a simulation of facts. They have no real facts to base their theories on. To test this, you only need ask them for a single proving fact for any one of their conjectures; you will find that they cannot provide even one. Theoretical scientists have engaged in a conspiracy of fantastic fantasies and elusive equations to explain the composition of the material universe. They use simulated facts because they cannot recognize the real facts.
The physical volume that composes space is a tangible fact to work with. In deciphering "the fabric of the cosmos," the physical volume composing space is an elemental and logical place to start. The theoretical singularity of the big bang assumes that space does not exist until it is created by inflation. The theory ignores that there has to be something there to expand into, before it can expand into it. It is not possible to expand into something that is not there; that would be magic.
8. Ya, pretty extreme theories, hard and may be forever impossible to prove. Science can only guess. This may sound philosophical (as most science was at one time), may still be? Does the universe exist with no observer?
9. The universe is not dependent on human existence. An ant does not cause the Earth to exist, nor does a human cause the universe to exist. The universe is an absolute physical construct in every way, shape, and form. There is no part of the universe that is supernatural, magical, or a just because thing.
Writing it in books, or produced in entertainment movies and cartoons, or documentary video's does not make it true; after all, fairy tales are also distributed through all these mediums. What differentiates reality from fantasies are facts. When reviewing conceptualizations, one only needs to look for the facts. If facts do not exist, then it is fantasy. Critical thinking is your strongest ally in deciphering what is real and what is not.
10. I have been a scientific observer for over 50 years. I agree the scientific method is the only practical way to prove. And proof is all we have to call truth. In order to prove you must first question. Many discoveries thought of as proof are proven wrong and many theories thought to be quack are eventually proven fact. The question is more than half the answer.
You say to look at the facts to see what is "real", but how did these facts come to be? Many facts were once thought of as lunacy...One may see things as fact, reality and proof but is it a fact, a strong belief or opinion? Sometimes it is hard to see through the false sciences. (and arrogance) How do we know? We form opinions based on what we observe.
We are easily fooled. (all of us) Most BIG science is eventually proven wrong; I'm waiting for Einstein's version of space and time to be put to the test.
Most modern quantum dimensional theories seem extreme but so did much of what we believe now.
If there were never ever any observer... (humans and ants or Gods included) from the start of "time" there would be no time and no dimension. Without measurement... No science. You answered this yourself.
Right or not, science is mostly all cool.
I love science, it is very stimulating entertainment.
11. I agree with much of what you have stated. In regards to your first post: a question never asked is worth nothing; an answer never given is worth even less. We do have profound facts to work with, physical volume is a pointblank fact; from it we can derive even more questions and answers than we can from science fiction. Science is supposed to deal with facts only, not science fiction; "philosophy" is the proper place for science fiction. Mixing science fiction (philosophy) with real science facts confuses the issue. Real, factually based science, is the only thing that will provide humanity with the tools it needs to manage the future.
Teaching young minds science fiction as science facts is not properly preparing civilization for tomorrow. Stimulating young minds is one thing, misdirecting young minds is quite another. When facts are mixed with fiction, young minds become confused. How much harder is it to learn the alphabet when mixed with letters that do not really exist? Science fiction does the same thing to science facts. If these videos were clearly marked as "for entertainment and philosophical purposes only," then maybe they would not be so damaging to young minds. Science fiction notions such as curved space-time or the big bang theories are so pervasive - that they are taught to college level students as science fact. In physics and cosmology, we should be teaching our children critical thinking skills, not bandwagon science fiction as a simulation of facts.
Stimulating imagination is a great tool; corrupting imagination is harmful to us all...
12. I rarely see any of this stuff stated as fact. You rant about them stating things such as "two dimensions" etc. as a stated fact? I never see it that way. It to me seems *at most* stated as a theory. Besides, proof is rarely a fact first; they usually start out as theory. (like this stuff)
You to me seem to have a hate on for it.
I promise one thing, closing your mind will blind you. You seem to believe all geniuses past and present are all wrong about everything?
Do you believe in the speed of light?
13. Quote: "Science is supposed to deal with facts only"
Proof NEVER starts out as fact. It must go through a scientific process. "Corruption"...? Hard core accusation. I think most modern science is stimulating. Even if a theory is dead wrong it still stimulates productive thought....
If you let it....
14. Facts? I thought of all the extreme stuff as conjecture and brainstorming. Math is the main tool they have to try to gather evidence of any of the stuff you mentioned.
I never thought of it as claimed facts, just TV for the general audience, to capture interest. Interest is the first step.
Sagan said knowledge is the only way to keep ourselves safe from ourselves... Interest is good :)
15. present us some "real" facts
16. So on acid the Physical volumes that come talk to me are real in your mind, but not the idea of a singularity ?
17. Could you please tell me what is right and not concocted sensationalist nonsense aimed at attracting viewers?
10. I'll tell you what space is. It's just that - space! A.k.a. "room" :-)
I think I'll have whatever hallucinogen it is those holographic nutters are having...
1. There is a lot of room in your head!
2. U bet there is - you can fit an entire Universe in here ;-)
11. I've searched high and low for a reasonable explanation of the holographic principle but the literature is too stupid to explain it to me. This is all wrong. According to the comments below the only true source of modern science is the Quran. Therefore I'm gonna go look for a Quran. F**k you science.
1. Have you watched "What is reality?" on here? - If that's just fluff to you now after all your searching for holographic universe info, I've posted a link to a video on quantum loop gravity which is very, very head scratching (in the comments).
2. Oops. I posted on the other doc before I saw your comment here. Yeah "What is reality?" is very very good, probably one of the better ones I've watched in years. I'm still lost on this holographic principle though. I wish I had enough math to grasp it but I don't. I don't think my imagination is fit for it either, there's just too many bizarre notions it throws up. If we are living in/on a projection which mirrors the event horizon of a black hole then we are more than likely in a black hole and whatever is outside our blackhole world has no understanding of how the laws in their Universe break down in our world ... no? So what's going on inside blackholes in "our" Universe? But Susskind's string theory contradicts this with branes ... what am I missing here?
Leonard Susskind needs to be burned at the stake.
3. Uhm...well, something's not quite right there because that would suggest infinite dimensions as you burrow deeper into each universe's black holes (if indeed one leads somehow to the other type deal). Strings only gives us 11... or so... to work with. I'll think about it, but right now I'm a zombie staring at a screen. I'll also try and find that quantum loop gravity video; it has to be seen to be believed! (and is right in with all this).
4. loop quantum cosmology
5. "LQG predicts that not just matter, but also space itself has an atomic structure".
That a lot of this new theory can only be fully understood through Mathematics is troubling to me. The further down the rabbit hole I go, the less common place analogies there are to hang an understanding on and the more things seem based on Mathematical abstractions. It hurts my tiny little mind :-/
6. The dimensions would exist separate of this infinite loop, as they do in the current model where parallel universes are allowed. Wouldn't all those parallel universes most likely have the same laws of physics and the same amount of dimensions? This is my own take on the whole thing so I'm sure it's seriously flawed.
7. I haven't looked at Achems' links yet (many thanks), but my guess is that they would not necessarily share the same laws at all. Planks Constant could be different in different universes for example, changing much of the outcomes. This is hardly scientific of me, just philosophical conjecture, but if you look at the conditions needed for life on earth, each step of the way seems to be 'rare' (I use the term loosely) as in the distance from the sun, our solar system's place in the galaxy to name but a couple... It's sometimes referred to as the goldilocks scenario (everything being just right). So if things were to follow that trend then it would be suggestive that our entire universe has laws and physics that are just right... rare even, loosely conjecturing of course. ;-)
12. This show is a load of hypothetical nonsense. It is not experimental, can never be verified. It is a way of fooling people into giving these cosmologists job security. They have never given any real benefit to humanity except wishful thinking. These programs should be banned!!
1. Could you tell me where in the Qur'an it explains all this?
2. Even if it did get some things right, you'd expect them to firstly, and secondly, why refer to old, outdated material when we have learned so much more since.....
3. One thing I will never ever ever understand is your perspective that says the Quran is outdated; I bet you never paid any attention to reciting it closely and seriously.
What is so-called "The Big Bang", for example, is just one fact that got itself mentioned, from the complex worlds of various kinds of knowledge within the layers of this book still unmentioned, rather observed than mentioned by those who look, wonder, question, reason and understand.
I invite you to have a study of how come mentioned facts like these actually existed 1400+ years ago; that would amaze me for good -personally speaking-.
4. G'day Arrayat. Welcome to TDF mate.
To your first question about why I think the Quran (and Bible etc) are outdated, is because of how long ago it/they were written. As you say 1400+ in this case. I also consider science text books from 20 years ago outdated, as we've learned more since then. (We know things now they never taught when I went to school)
I'm not 100% sure I completely understand your 2nd paragraph, I think because of our language differences, but I think you mean that the Big Bang is an example of knowledge that's contained within the Quran, that we eventually 're-discovered', and the Quran also has many more 'layers' of knowledge that science is yet to discover. You'll have to correct me if I've misunderstood you.
I agree with your last part about thinking it would be interesting if that knowledge was known back then; I've quite an interest in some history and what was known by earlier peoples also. I've been lucky enough to see much of Egypt and have always been interested in our ancient past. I think some civilisations like Egypt clearly had a quite advanced knowledge of our sky, probably more than they're given credit for now.
I'd be very interested to read the passages that you say talk about the Big Bang or any other science that is relevant to the topic that is contained in the Quran. Could you please tell me which ones to look at, or quote them here?
To be honest with you, I must say I have trouble taking the word of a book about science matters when it also talks of a winged horse taking the prophet to cut the moon in half with a sword. When did that happen? Is there any evidence that the moon has ever been cut/cleaved in half in the time frame indicated? You'll have to point out the passages that make that make sense too please.
5. docoman.. I am posting simply in the interest of knowledge of early peoples. Look into info on the ancient Ionians.. it is amazing what these people understood at the time.. about the 6th and 5th century BCE.. they were aware that matter is made up of atoms and it is from them the word originates.. they believed the earth was once entirely covered with water and from that life arose and evolved. They knew the earth was round as well as that the earth orbits the sun. They were the first true scientists. Another great source of information concerning the Ionians can be found in the 7th and 8th episodes of Carl Sagan's "Cosmos"; it also explains how and why this knowledge was lost and, shall we say, re-discovered as late as the 16th and 17th century. I can hardly imagine what life would be like today had those early scientists been allowed to flourish. And then consider Europe's dark ages before the renaissance; we must thank the east and the muslims for saving that knowledge, including science and medicine.
6. G'day kicknbak60, thanks for your interesting and informative post mate, and welcome to TDF too.
Thanks for pointing out the Ionians, I don't know much about them at all and will follow up your suggestions. The little I just read I find very interesting.
I agree with you, the world does owe the muslim world the acknowledgement and a thanks for keeping science alive. I also agree that we (homo sapiens) have discovered knowledge and lost it, to rediscover later. (it'd be interesting to be able to know how advanced we've been, what have we forgotten and how much do we still forget) I'm convinced the Great Pyramid builders knew much more about maths than they're given credit for now, and the Sphinx is older than the Egyptologists say. And other things in Africa, like the Dogon people and what it seems they remembered from earlier times, I also find very interesting and suggestive of knowledge we had that we've forgotten.
But I'll also stand by my feelings that even though some scriptures may contain some interesting things that may be accurate (historic events or even hints of science knowledge we've not given them credit for knowing), other wrongs in their story make it impossible for me to swallow their dogma. At best they're an interesting read to put into context with their era, not some 'word of God'.
7. You have a healthy disposition balancing curiosity with scepticism, allowing potentially great discoveries. I for one will be all ears.
8. Weep for the Museum of Alexandria.
9. Museum, or Library? I've heard of the burning of the Library of Alexandria, was it also called a Museum?
One of the greatest crimes/pities for our species I think losing that. Some of those scrolls would have been so interesting, and as kicknbak said, how far could we have been now if we'd not been 'interrupted'.
I also wonder how much the ice ages that we have gone through have shaped our 'memory' over the longer term as well. We've been through at least one, most likely multiple. Climate change like that, as we're seeing in more recent discussions, must have a big impact on what we're doing.
10. The Library was run by the "Popular People's Front of Judea", the Museum was run by the "People's Popular Front of Judea" (and had a marvellous little gift shop annex). My bad. I knew as I was writing it I'd made a nagging blunder, but thought of this Python post as a recompense and left it as is. lol.
Search Youtube for "Monty Python - Life of Brian - PFJ Splitters" (some swearing involved)
11. lols!! Gotta love Python. Cheers mate. :)
12. At Alexandria there was known to be a library which was due to the collection of written scrolls (perhaps 1,000,000+). There was also a museum, although not defined properly by the modern definition of a museum. This was a place to meet and muse over the concepts presented by the books (scrolls) and current thinking -- a forum!!!
13. The ancient Mahabharata talks about atomic explosions. And when they went and excavated the city where this happened, the skeletons were on a par with victims of Hiroshima and Nagasaki, with radiation levels.
14. G'day Bruce,
very interesting mate. Do you know where those excavations were? Or a good documentary or website talking about it, I'd like to read/watch more.
15. Mohenjo-daro and Harappa
Substantiating the Pakistan/India texts that apparently describe atomic attacks is an amazing find in the prehistoric Indian cities of Mohenjo-daro and Harappa. [Pakistan's Indus Valley] On the street level were discovered skeletons, appearing to be fleeing, but death came too quickly. They were found to be highly radioactive, on a level comparable to Hiroshima and Nagasaki. Yet there are absolutely no indications of volcanic activity, and it appears that both cities were destroyed at virtually the same time.
Further, "At Rajasthan in India, radioactive ash covers three square miles not far from Jodhpur. This is an area of high rates of cancer and birth defects and it was cordoned off by the Indian government when radiation readings soared astonishingly high. An ancient city was unearthed which, the evidence indicates, was destroyed by an atomic explosion some 8,000 to 12,000 years ago. It has been estimated that half a million people could have died in the blast and it was at least the size of those that devastated Japan in 1945. Archeologist Francis Taylor stated that etchings in some nearby temples he translated suggested that they prayed to be spared from the great light that was coming to lay ruin to the city."[1]
16. um, I'm thinking via something like vague statements and your confirmation bias
17. Glad to answer that. It is in 21:30 where it talks about the so recently called "The Big Bang"
18. Sorry Mate, but your interpretation of 21:30 is way biased...
What century are you teleporting your thoughts from? Just so I know which one to avoid when I travel back in time in my deLorean.
14. great
15. YO THIS WEBSITE IS F**KING GREAT. if you like this documentary you should check out WHAT THE BLEEP DO YOU KNOW, and read the book THE POWER OF NOW BY ECKHART TOLLE. before watching this i knew the fundamentals of what space is but still really good film! Reality is not real, and no one sees it the same. we have five sensory organs which can't even perceive most of the stuff that is going on metaphysically. This movie just reaffirms my belief in a creator. because time is all relative and our creator is outside of time. Some of these principles apply to god. But everyone's definition of god is different. Still great documentary
16. in my opinion, if this world is what we perceive as reality, then it is our reality. there's nothing too bothersome to me.
17. this is so mind boggling. i can't quite wrap my mind around this, being able to be in the past present and future.
18. The hologram thing bothers me. So if we aren't the real deal here then why even bother? Sounds like something a child would wonder like "is this all just a dream"? Otherwise interesting material.
1. I love the smell of nihilism in the morning
2. The problem: the "scientists" are always dealing with matter, stuff one can hold in one's hands, so to speak. They take it to a sub-atomic level and at that point they're all running into no walls; like rubbing out the chemical trail of ants...
19. Idk if any of you have seen Leonard Susskind's youtube cosmology lectures, but every time I hear the man talk now, I expect him to stop for a couple seconds and take a bite of cookie.
20. i would love to be able to watch this but simply cannot due to brian greene's annoying, condescending voice. is it just me?? maybe i should try his books instead
1. It's just you:)
21. i would love to be able to watch this but simply cannot due to the overly loud, constant and totally unnecessary background music..Why is it there????
1. It's being called background music; you should listen to the documentary itself instead of listening to what's behind it....
22. It would be closer; the nearest galaxy is Andromeda and it is heading on a collision course with the Milky Way. Although the Universe is expanding overall, the force of gravity still pulls nearby galaxies together.
23. can anybody tell me if we see starlight from millions of years ago today? where are they now? like the nearest galaxy: the light we see from it is 2 million light years away. is the galaxy still there or has it moved?
1. Well, you would just have to wait another 2 million years to be sure of what's going on with it today... But predicting it, and since the Universe is expanding and all, it will most probably be a lot farther away by now, and it has most likely changed as well.
24. Enjoyed every minute...furr sure we'll have a Q computer, if we don't destroy our environment and ourselves (all living things) before then... then, 'maybe' we get the computing brain we need to help with some of the big questions - how do we feed 10 Billion people, make peace, manage a fair political system/industrial complex, etc. As well as better predict the weather & help us build teleport machines (okay maybe humans will be hesitant at first)- just imagine teleports as transportation for cargo however, now were talkin about something useful. p.s. I am beginning to understand, probability & entanglement (i am likely delusional), however, no matter how many ways, my brain will not deal with the multi-verse - as science fiction, sure, as science fact-I need my mommy.... Who needs spook films when we've got theoretical physics :)
25. lol....Brian really likes bread loaf analogies, he explained M Theory's Branes as slices of bread as well in the "Elegant Universe" series
1. he likes dough too...$$
26. this moment never ends! There is only one moment! This one
27. Perhaps someone could explain the following to me.
Newton's idea of the "attraction" of objects to one another has been proven wrong and replaced by Einstein's idea of warped space. In other words, the moon and earth are not attracted to one another; rather the moon travels in curved space around the earth.
Why, then, as in this documentary, do astronomers keep talking about the attraction of gravity? In this case they talk about having to invent dark energy to explain the faster and faster expansion of the universe, as common sense tells us gravity's attraction should slow it down.
Seems to me there might be no need for dark energy, because there is no gravitational force attracting objects to one another, just the curved space created around them.
Apologies if this is a stupid question; I am very much a layman but have been wondering about this for quite some time.
1. You should address this to the one and only @Achems, he loves that kind of stuff and he is soooo patient with laymen and lay LOL
or at least that's my impression.
2. Only some can observe,
the use of language and word,
of which these are tools,
to confound thoughts of fools,
to set a trap out,
that some will trip, no doubt.
There are more ways than one,
to quiet the ignorance of some.
docoman 2012 :)
I like the way your mind works :)
The cryptic One. :) Edit- scratch that. The Poetic One. ;)
3. "Tell me more my little drug'es tell me more"
Little Alex from "Clockwork Orange"
4. That nadsat's time has passed along with the droog it was about, Mr. Burgess.
5. Neva,
Newton's theory of gravity was not wrong; it is still in use today. It was just further built upon by Einstein.
And if it could be married with quantum gravity, which seems impossible right now, it could be extended further still.
You do not have to be orbiting a body to feel the effects of gravity. We feel the Earth's gravity because the Earth bends spacetime and we are being pulled through it; we would go right through the Earth, just like a knife through butter, if it were not for the electromagnetic force, which is way, way, way stronger.
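Newton's law is indeed still the everyday workhorse. As a minimal illustration (standard constants only, nothing specific to this thread), the familiar surface gravity falls straight out of a = GM/r^2:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m
g = G * M_EARTH / R_EARTH**2   # Newton: a = GM/r^2
print(round(g, 2))   # ~9.82 m/s^2, the acceleration we feel at the surface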
6. Neva
... I see your point... but the expansion of the Universe is not only not slowing down, it's in fact speeding up... so there should be a force causing the acceleration... all galaxies are moving apart from one another faster and faster...
Gravity, even if not as Newton described it, still warps space... like a sheet being deformed by a heavy object... smaller things tend to fall inwards... so as matter interacts, gravity should slow down the expansion put in motion by the big bang.
Picture it like this... if you throw a ball along a flat surface, it will roll nice and easy for a long time... but if you imagine an irregular, bumpy surface, it will slow down as it bumps and falls into the little holes and irregularities... In the cosmos, matter creates the "bumps" in the fabric of space... thus the expansion should slow itself down... but in fact it isn't slowing...
Dark energy is mere speculation... maybe it's just a consequence of the geometry of the Universe and not a force at all... no one knows yet.
7. Well... this universe is not run by "common sense". Quantum reality is way beyond our common sense; it is nearly impossible to fully understand or imagine it. We simply haven't evolved to see or imagine forces or events smaller or bigger than our everyday space-time scale.
If we want to understand our universe, we have to chuck common sense and rely on facts, probabilities and imagination.
8. This is 4 months after you asked it, but perhaps this will help. Newton's law of 'attraction' was coined because he didn't know what caused gravity; he could only mathematically describe its appearance and strength. Einstein defined gravity as the warping of space due to massive objects. As explained in the documentary, Einstein created a 'cosmological constant' to prevent gravity from collapsing the universe. Today, dark energy and dark matter are its equivalent. With that said, if it is true that the movement of the universe is accelerating, it is far more likely that the universe is already collapsing in on itself under the force of gravity. Draw it out on a piece of paper, and everything would still appear to be moving apart, even if every galaxy is collapsing, or shrinking under its own gravity. Since most physicists have 'chosen' to accept the idea of expansion, dark energy and dark matter become a necessity, even if the idea is wrong (which I believe it is). I guess only time will tell! (Excuse the pun.) Live long and prosper, Neva.
9. You didn't answer my question.
10. In a nutshell, the term 'attraction' is still used by physicists because of its mathematical utility (Newton's law of attraction). As I stated in my previous post, Einstein defined what Newton could not. For the layman, it is easier to understand what is visually obvious, as opposed to the abstract concept of general relativity.
Any questions? Take care Neva!
28. Hi there. I may be missing some point, as I'm not a physicist or anything like that, but Hendrick Kasimir's experiment with the 2 metal plates suggests to me more than what they let on. To have an effect on surfaces that small, and for it to actually be measurable! What happens when you scale that up to fit the empty space between matter out in space? Is that not a logical source for dark energy? He has the plates close enough to exclude some of the effect of empty space, so what he is showing is empty space's property of expansion. So I guess I'm just wondering why I've never heard of anyone making that connection?
1. That's a good point. I suspect that no one's linked the math together. I've played with the derivation of the Casimir (yeah, it's spelled with a 'C'; made the same mistake myself once) effect, solving the Schrödinger equation with the assumption of an empty box. From there I saw nothing to indicate the behavior they attribute to dark energy; that is, I found the Casimir effect to be attractive, not repulsive... though I do see what you mean. Maybe different boundary conditions (another shape, other than a box) are in order.
2. Well, what I got out of "Casimir's : )" experiment was that the effect was akin to the way atmospheric pressure will compress a sealed container with a vacuum inside. The expansive force was concentrated on the outside of the plates, forcing them together, because some of the force was excluded between them due to the small gap. And really, any expansive force that we can measure at our small scale should have huge implications when applied to the vast emptiness of interstellar and intergalactic space.
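For anyone curious about the numbers: the standard idealized result for perfectly conducting parallel plates is a pressure P = pi^2 * hbar * c / (240 * a^4). A quick Python sketch (the 1-micrometre separation is just an assumed example):

import math

hbar = 1.0546e-34    # reduced Planck constant, J*s
c = 2.998e8          # speed of light, m/s
a = 1e-6             # plate separation, metres (assumed example)
pressure = math.pi**2 * hbar * c / (240 * a**4)
print(pressure)      # ~1.3e-3 Pa, pulling the plates together

Note the force is attractive and dies off as the fourth power of the separation, which is part of why it is hard to map it directly onto a repulsive, large-scale dark energy.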
29. Can anyone explain to me what they mean by disorder? I can't see how all things go from order to disorder, and the examples in the documentary with books and wine glasses are very naive. The pages are in order from OUR perspective, since numbers have meaning to us, and they become scattered. Particles don't think that way. Think about the formation of crystals, the rise of synchronicity in nature, in ecosystems, self-assembling materials, life, architecture. These are all examples of nature making orderly structures. Atoms can't know what they look like at our scale; they just react with their environment. Tell me how entropy works on the atomic scale, on the subatomic scale; then I will listen. Any thoughts?
1. Entropy is a thermodynamic measure... although it can be associated with the term "disorder" as we use it in everyday speech, you've got to be careful, because it's not the same thing...
According to the second law of thermodynamics, kinetic energy can be completely converted into thermal energy, but thermal energy can't be completely converted into kinetic energy... with entropy you try to measure the part of the energy in a system that can no longer be converted into kinetic energy in thermodynamic transformations.
This only applies to closed systems... without influence from the outside... so in fact the only real closed system is the Universe itself.
If you take the Earth, for instance, it's not a closed system... you have the sun (with low entropy) giving it energy all the time, so in fact the sun lowers Earth's entropy constantly. So thermodynamic transformations never run out, and equilibrium is never found (eventually, though, it will be).
Think of a car... with a full tank... you burn the fuel to get movement, but when you run out of fuel you can't turn the movement back into fuel... so entropy is at a maximum in that system... and the only way to get that car running again is to put more fuel in the tank... so you are influencing the system from the outside... if that car were the entire Universe it would stop forever... so eventually the Universe will run out of fuel, entropy will be at a maximum and nothing will happen... you'll get total equilibrium in thermodynamic transformations.
Why is this related to "disorder"?
According to statistical physics, the disorder of a system can be associated (not directly, but through a logarithmic function) with the number of accessible microstates the system can take on once the restrictions imposed on it are satisfied. Practical constraints common to thermodynamic systems usually involve the internal energy U and the volume V available to the system. So increasing the disorder of a system means increasing the number of microstates (configurations) available to its particles.
So think of an ice cube... it has a solid configuration and the particles are arranged in that structure... to maintain that structure you have to maintain the temperature of the environment, or it will fall apart (maintain the internal restrictions)... if you let it be, temperature will fluctuate and dissipate from hot areas to cold areas, so you're breaking those restrictions... trying to find equilibrium... if temperature increases, so does the movement of the particles, thus increasing the number of ways those particles can be arranged... with a total meltdown of that ice cube you have maximum entropy for that system... So the state of an isolated system is always the state of maximum entropy compatible with the internal restrictions... if you break those restrictions it will only increase entropy, or eventually maintain it... never lower it...
So entropy as discussed only applies to the entire Universe... because every other system is part of it, influencing each other... so entropy varies from area to area... but the overall entropy of the Universe will always increase.
Hope that counted for something :)
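The microstate-counting idea above is usually written as Boltzmann's S = k_B ln W. A toy Python sketch (the two-state "coins" here are just an assumed stand-in for real particle configurations):

import math

k_B = 1.381e-23                  # Boltzmann constant, J/K
for n in (10, 100, 1000):        # number of two-state 'coins' in the toy system
    W = 2 ** n                   # count of accessible microstates
    S = k_B * math.log(W)        # Boltzmann entropy, S = k_B ln W
    print(n, S)                  # entropy grows as the configuration count grows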
30. Thanks Kliment, one more: if our bodies could somehow withstand traveling at the speed of light, would we age more slowly than the population?
1. That's not how it works... you would age the same... but if you went on a voyage at (nearly) the speed of light for a few days, years would have passed on Earth... so even though you would look the same, everyone else would have aged several decades... yet for you only a few days would have passed nonetheless... time is just slowed down relative to the perspective of someone not travelling at that speed...
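The bookkeeping behind that answer is the Lorentz factor, gamma = 1 / sqrt(1 - v^2/c^2). A minimal sketch (the 3 days and 0.9999c are assumed example numbers; note gamma blows up as v approaches c, which is why "at the speed of light" is really a limit):

import math

def earth_time(traveller_days, v_over_c):
    # Elapsed Earth time for a given amount of the traveller's own (proper) time.
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return traveller_days * gamma

print(earth_time(3, 0.9999))   # 3 days on board -> roughly 212 days on Earth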
31. Dumb question. If the Earth moved faster, would we live longer?
1. No! For us the time would pass the same. But to somebody moving slower than us, from his perspective, we would live longer but more slowly.
32. good series .. fun to watch
I have a question... sorry, it's off topic, but since this is one of the most recent docs I might get an answer from one of the brains :) ...
(Imagine this... just bear with me...)
Imagine you have a really long road, with a really large vehicle... and you accelerate it to a certain speed... then inside that vehicle you have another one, and you accelerate it by the same amount as the previous one... so in fact the second vehicle has twice the speed for an observer outside the system, right?! Now imagine you repeat this process over and over, giving exactly the same amount of energy to accelerate each new vehicle that is always inside the previous one... wouldn't that take us to the speed of light without gaining infinite mass? Since it's always a different vehicle with the same amount of energy relative to the previous vehicle... the sum of all velocities at some point should add up to the speed of light for someone sitting outside the system... theoretically of course... I'm probably saying something stupid... but it puzzled me... if anyone could give me an answer, that would be awesome.
1. @Ricardo Rodrigues:
No matter how many times you try to jump-start the speed, like using another planet's gravity to gain speed etc., you have to take time dilation into account from the observer's relative perspective, and no rest mass can attain the speed of light; it would take infinite energy. Light will always travel at 186,000 miles a second, no matter what the speed of the source it originates from, or the sum of velocities behind it.
2. Thanks Razor... but let me just point out this: I'm talking of different objects being accelerated, not adding speed to the same object like the gravity pull of another planet, as you said... it's a vehicle inside a vehicle inside a vehicle inside a vehicle... get my point? Think of it as a system of wheels... you accelerate a wheel to a certain speed... and inside there is another wheel, rotating along with the first one, but since it's inside the previous one it's stationary from the perspective of an observer inside the first wheel (like a person standing on the earth: it seems still, but it is rotating with the earth)... then you move the second wheel with the same amount of energy... then inside you have another wheel and you do the same... and so on... so every time you move a wheel it is stationary from the perspective of that environment... sorry, I can't explain myself very well... it's hard to make a serious point in English :) ... hope I got the message right this time.
3. It is a good question. I had fun trying to figure it out :). This is what I think, although, of course, I might be wrong.
As Einstein's equations prove mathematically, you cannot reach the speed of light by mechanical acceleration, because that would require infinite energy. You cannot get around this theoretical obstacle by having multiple vehicles inside one another, because every accelerating vehicle has to expend energy to accelerate, and that energy is then imparted onto the vehicle that it is in/on (think of an accelerating car, for example: the wheels transfer energy to the ground, or whatever the car is sitting on). Therefore you could say that the energy of the last vehicle in such a system would equal the sum of the energies of all the vehicles. There is no way to reach an infinite amount of energy, no matter how many times and in what way you add up any real amounts of it; some member of the system would have to have infinite energy for the sum of all energies to be infinite. You cannot solve this problem by any mechanical means whatsoever.
4. Thank you Dangis.
So if I'm walking in a train carriage... the energy I spend walking is imparted to the train...? And that influences the entire system...?
Just want to make something clear here... I'm not saying that the first vehicle should jump-start the second one and so on... I'm saying that the first one is accelerated and stays at that speed constantly, with all the others inside... that's why I talked of wheels... so they all accelerate independently but are still attached to the previous one... I don't know if I'm explaining myself properly... but your answer made sense anyway :)
One other view... what would happen if you had a planet or star rotating at 299,792,000 metres per second, and I accelerated a car to 500 metres per second on its surface (despite all the obvious impossibilities)? Could I do that? And if so, what would be the overall speed of that car, observing from the earth?
And thinking back to a system of vehicles inside vehicles... even if you reduce the mass of every new vehicle, eventually one would have to have no mass... so it doesn't work as well, right?
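The textbook resolution to the nested-vehicles puzzle is that velocities do not add linearly in special relativity; they compose as w = (u + v) / (1 + uv/c^2), which never reaches c no matter how many stages you stack. A Python sketch, working in units where c = 1 (the 0.5c boost per stage is an assumed example):

def compose(u, v):
    # Relativistic velocity addition, in units where c = 1.
    return (u + v) / (1.0 + u * v)

speed = 0.0
for stage in range(1, 9):
    speed = compose(speed, 0.5)    # each nested vehicle adds 0.5c in its own frame
    print(stage, round(speed, 6))  # 0.5, 0.8, 0.928571, ... creeping toward 1, never arriving

The same formula answers the spinning-planet version: composing 299,792,000 m/s with 500 m/s still lands just under c, not over it.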
33. The idea of a two dimensional reality at the surface of our universal sphere is incredibly significant. Pondering wave-particle duality and the infinite relevance of Pi, you begin to realize that anything (and everything) is possible.
34. The simplest comparison between Newtonian and quantum physics is in understanding that Newtonian reality assumes that if you have enough information about a particular event (i.e., a baseball leaving a bat: the speed, the trajectory), you can predict the outcome of that event (i.e., the point of impact).
Therefore, theoretically, if you had enough information of all events occurring in the Universe at any given moment, you should be able to predict the outcome of all events simultaneously, and in essence predict the future.
In a Newtonian world, all reality is therefore predestined, unfolding in a predictable manner, suggesting all of our fates are therefore predetermined regardless of how we behave.
The refreshing aspect of quantum theory is that by simply observing subatomic particles, we can alter their behavior. Heisenberg discovered that you can only detect the position or the momentum of a subatomic particle, not both; once you measure one aspect, the other is lost through the observation.
On the subatomic level, nothing is predetermined; eternal randomness leaves everything to chance. We aren't locked into a Newtonian blueprint of destiny after all.
Rather we are governed by alternative outcomes in every choice we make and therefore are truly the rulers of our own fate.
I don't believe this was mentioned in the "Fabric of the Cosmos" series, but to me it is one of the most relevant aspects of quantum vs. Newtonian theory.
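Heisenberg's relation is quantitative: delta_x * delta_p >= hbar / 2. A small sketch of what "knowing the position" costs (pinning an electron down to about an atom's width is an assumed example):

hbar = 1.0546e-34    # reduced Planck constant, J*s
m_e = 9.109e-31      # electron mass, kg
delta_x = 1e-10      # position pinned down to ~1 angstrom (assumed example), m
delta_p = hbar / (2 * delta_x)   # minimum momentum uncertainty
print(delta_p / m_e)             # ~5.8e5 m/s of unavoidable velocity uncertainty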
1. OK, the main philosophical difference is that quantum physics gives us the chance of a world that is not predetermined and not governed by destiny. But that applies only to the microscopic world; everything bigger than that, the world around us that we interact with, the actions we make, can be predicted. The microscopic world is a world of possibilities; it is uncertain whether some subatomic event will appear here or there, but in the bigger picture it has no effect on the macroscopic world. Example: if you decide to go and get a cup of tea, on your way to do that many quantum events, with all their uncertainties and possibilities, will actually happen in your body, in the cup of tea, in the environment you're going through, but that will not affect the result at all: you're going to get and drink that cup of tea. Thus we still have no free will... it's just destiny :) (until we find a place for the "soul" in ourselves :D)
2. No one is really sure how the macroscopic and microscopic worlds interact, and no one has reached a theory of everything. I believe there is a theory of everything that we haven't realised yet. We might very well be off the right track, but I hope one day, possibly during my lifetime, there will be a breakthrough in how quantum physics and astrophysics relate... Before that happens, we will all be here expressing our mere opinions, guessing...
35. I like the first episode, talking about the non-emptiness of space: fascinating stuff that I haven't seen in documentaries before. Almost as good as Jim Al-Khalili's stuff!
36. Happy New Year Az, Iz, Razor, Epic, Vlatko, Biblelover, and Pysmythe! They're already setting off fireworks here! Gotta go ---- can't trade "real life" for this box all the time. Charles B.
37. If enough scientists of this world want a multiverse, they will get a multiverse, and when they get that, they will still be looking for the container of it all. Maybe we live in a glass bowl, a sort of crystal ball filled with bubbling black champagne. It's encouraging to see that scientists are as crazy as they wish or can be, and it allows us to do the same.
Will 2012 be exciting, existing or exiting... we shall live to see!
Happy New Year to all members of TDF and a special toast to Vlatko, Epicurus and Achems_Razor and to Brian Cox who is quite possibly the cutest of them all on a night with the stars.
1. You too, Az! I hope you cheer up; you seem to be a little down lately. Hang in there and stiffen that upper lip. lol
2. life's beating me... or is it the other way around.
I am at the airport in Calgary, long wait for a flight...thanks to TDF, time will fly by.
3. @AZ: Calgary, no way... you live here or just visiting ?
4. I spent just a few hours at the airport on my way to Quebec.
5. Az,
Happy new year to you also! No booze for me though, booze makes me happy.
6. That's one thing I always took in moderation, with very, very few exceptions. Now that I am off green leaves...
38. And wow - thanks for the tip on the Brian Cox vid... fantastic... look up symphonyofscience - "We Are All Connected".
39. Disregard, dmxi - I found tons of info already - Wikipedia has a nice article on the multiverse. Guess I was just disappointed that the effect of multiverse gravity wasn't mentioned in this doc, unless I missed it... seems to me that over time it could have been a catalyst for the big bang. I should stick with what we know, but questions lead to more knowledge no matter how silly they seem.
1. I have watched the first 2 parts of it and also can't help but notice no mention of string theory. And where is Mr. Michio Kaku? Not that he has made fewer documentaries :)
40. @dmxi - pls post link - I've been wondering about this for quite some time.
41. could dark energy simply be gravity from other universes (multiverse) pulling ours outwardly, rather than a force pushing from within ours?
1. That theory is being considered & I'd love that notion to be true, but I don't dare dream of its possibility! Wish I could give a link, but I don't have one at hand at the moment.
42. Only 2:00 (minutes) into this, but I suddenly wonder about the entire notion that everything rests inside Space, as if Space itself were like a box and Matter is something contained within it. What about the idea that the Matter is the Space? I guess I should finish the entire documentary first, but so far it's got me thinking.
43. I think if no religion had ever spoiled the minds of people through wars and division, we would all be spiritual scientists searching for the meaning of life.
1. Dear az, that is where evolution will without a doubt lead us. The climax of our journey will be mind over matter & this is not a religious belief or statement, but just the unavoidable transition of entropy into its purest form. Spirituality is a surrogate for perfectly vibrating with one's surroundings (consciousness not being measurable) & where terminology is misused, scientific scrutiny will be ensured! So long as we survive ourselves, of course!
2. Sounds a little bit like some of the latter parts of 'Childhood's End'.
3. Had to google 'Childhood's End' & was surprised to find an A. C. Clarke novel. I possess a couple of Clarke's TV-related books but, as I found out, this seems to be his own favourite work. Must give it a go. Thanks for the hint.
4. It's a GREAT book, even though it's, what, 50 years old now? One of the very best sci-fi books ever, for sure.
5. I am reading a book suggested by @Pysmythe, A Beautiful Mind (I have seen the movie and the documentary on Nash). I am at page 100 or so... I am amazed to read in such intricacy how the brightest minds of those days were mainly used for military advancement. The comrades came up with a game named "So Long Sucker / F*** Your Buddy", somewhat appropriate for the environment they were creating in order to pay for the pursuit of their personal dreams of accomplishment and recognition above one another... It is without question that science (or at least a large part of it) is still used to pay for and advance military power.
The computers we write on are here because the military needed a better way to connect... and then the digital world expanded, extending their control over all of us.
6. 100 pages already?! You don't mess around, do you?
edit- In addition to changing ourselves, all those extremely competitive people had better get a lot more busy than they are figuring out how we can live within the planet's means, instead of figuring out how to destroy it, or all this great science is just going to be an echo in the cosmic void.
edit-edit- I know you know this...but we DO need to keep preaching it as much as possible!
7. Yep... when I like something, I immerse.
Cosmic void taken as comic void by the ones holding the strings.
8. Not sure if preaching works... but exemplifying does, at least to the one doing it.
9. I like preaching! People say I do a good job at it (I just need to follow my own preaching sometimes)! ;-)
P.S. Az, I changed my avatar just for you, so you can see my real face, as you mentioned once in one of your posts. Here I am with my little buddy (age 2 at the time)... and a butterfly, of course! Gotta keep my icon of rebirth and new life in there! ;-)
10. Preaching is talking to others about what we think they should think... when I talk (of my beliefs) I don't care if people agree or not; I just talk so I can hear myself and see if I still believe what I am saying.
... sometimes I don't, and most times I do.
I really like your new avatar... I wish I could make it bigger and see your eyes and see his eyes.
11. Az, you do know you can hold down Ctrl and tap the + key to make it bigger, right? The image gets a bit fuzzy the larger it gets, but it might work well enough for you.
(don't recall if I ever mentioned this before)
edit- Just don't do it with mine, please, thank you very much!
12. You don't know how good it feels to smile at your message right now! I was in need of a diversion...
13. Think of religion as fertiliser; then it all makes sense.
44. I can predict the weather for the next 10 years = artificially cloudy
45. Re: plates with an extremely narrow gap colliding... dark matter, the Venturi effect.
Re: acceleration of universal expansion... electromagnetism trumps gravitational attraction.
Re: red shifts... everything loses energy over time; it's one of our basic physical laws. Light, however, seems forbidden to decelerate. An object loses energy by decelerating, or by increasing its period of vibration; a red shift is an increase in the period of vibration of light.
Re: time, perception, and the unbreaking of wine glasses: if, as proposed by the learned folks featured in this segment, time is an entropic motion, then the perception of past, now, future is immutable and inviolable. Our future is in a less entropic state until we enter, and disturb it, thus decreasing its "orderliness"... our past, having been disturbed both by our entry and exit, has decayed to a more entropic state than our now, and our entering it could only disturb it again, increasing entropy rather than producing the decrease that would be required to re-perceive the moment.
Re: string theory, the multiverse, and 10 to the 500th: the huge number of imperceptible potential variables of state for the 10-dimensional strings required by the theory can only be resolved by one of three conclusions: multiverse; got the concept, but the math is wayyy off; or, though there are 10 to the 500th potential variables, a far smaller number are actually viable particles (if a variant calls for one or more simultaneous opposite or contradictory actions, then it could be potentially possible, but extremely unlikely, just as a single example). The only way to really find out, I guess, is to sift through a few millennia of equations and see which set actually resolves itself to our "reality", if any. It might be quicker to try and rework the initial variables and find a theory that works within the framework of our 4-dimensional plane, which would pare down the list of potential variables quite markedly.
(Note: edited throughout viewing as I finished episodes, to allow for my short-term memory.)
46. Check out Prof Brian Cox: A Night With the Stars, BBC 2. Very nice :)
47. Episode 4 – The Origin of the Universe and everything in it.
There used to be a fundamental belief that the universe was created in a Big Bang Event 14 billion years ago.
The reasoning for the Big Bang Theory came up when it was observed that distant galaxies had an increasing red shift value.
This observation was then interpreted as the galaxies moving apart at a faster and faster rate the farther away they were from Earth.
The natural conclusion from this observation was that if the arrow of time were reversed, all of the galaxies must have started from a single point in Space-Time. The time estimate for this event was 14 billion years ago.
The Earth is 4.5 billion years old.
The only problem with the Big Bang Theory is that the original galaxy red shift has been misinterpreted and misunderstood.
Astronomical observations of nearby galaxies have shown that they may contain 2 or more Red Shift values. This information has conveniently been ignored so that the Big Bang Theory and the constantly expanding universe theory hold up.
The problem is that the Big Bang Theory is incorrect and is NOT the origin of our universe.
Problems with the Big Bang Theory.
It has been suggested from the Red Shift evidence that the farther away a galaxy is from Earth the faster it is moving away in the expanding universe.
This universal expansion will never end.
So what happens when galaxy expansion reaches the speed of light?
The expansion of the universe theory breaks down right here.
The expansion of the universe theory however broke down as soon as multiple red shifts were found coming from the same galaxy. This meant that there was another explanation for the Red Shift.
Our universe is filled with hydrogen gas and other dust particles. The longer the light from a distant galaxy travels through space, the more likely it is that the shorter wavelengths of light will be reflected away. This is called the Red Shift Sunset Theory.
Over a certain distance of interstellar space only the longer wavelengths of red light will be able to make it through without being reflected away. The more distant the galaxy the more red shifted the light. Meaning that only the longest wavelengths of red light can make it through.
In the case of a galaxy showing multiple red shifts, this is an indicator of various levels of dust and gas that the light has had to travel through. The more red shifted the light the more dust and gas that the light has to travel through.
The best example to see this in real life is to 1) view the sky on a sunny day at noon: it is blue; and 2) view a sunny sky at sunset: the sun's rays are now shifted to the orange-red because they have to pass through more atmosphere. The sky is NOT red due to the Earth accelerating away from the sun.
Therefore the universe is not expanding with regards to a Big Bang explosion.
Therefore our universe did not originate in a Big Bang explosion.
So where did all the matter in the universe come from?
The matter in our universe comes from the basic function of Space-Time.
The function of Space-Time is to provide a location for the creation of matter from pure energy.
Space-Time is dynamically active EVERYWHERE with the creation and destruction of matter at the Quantum level. This is a proven fact of Quantum mechanics and Space-Time.
Matter is simply another form of bundled energy created at the quantum level.
Albert Einstein's equation E = mc² gives us a clue, where E = energy, m = mass, and c = the speed of light.
At any given time matter is being created and destroyed at the Quantum level in Space-Time.
More matter is being created at the quantum level than is being destroyed at any given time.
The proof of this is our planet in our solar system powered by our sun that is part of our galaxy that is part of our local group of galaxies.
1. Matter is created out of empty space at the Quantum level of Space-Time. Particles are created that become electrons and protons, which then combine to form hydrogen, the basic element of the universe.
2. Hydrogen atoms accumulate through electrostatic forces into hydrogen clouds.
3. The mass of the hydrogen clouds then warp space, resulting in a hydrogen gas ball that under the extreme force of gravity or other compressive outside force then begins a process of nuclear fusion becoming a sun.
4. The life of the sun then produces the other atoms of the periodic table.
5. The universe can NEVER run out of hydrogen atoms to create hydrogen gas clouds to enable star creation, because the very nature of Space-Time is to create particles from pure energy at the Quantum level: quantum energy particles with a mass dependent on the energy input at the quantum level.
The process of particle creation is occurring everywhere in Space-Time.
Electrostatic forces and gravitation determine the location of galaxies and galaxy groups throughout our universe.
It is therefore not surprising that there is a web of interconnectedness between galaxies and galaxy groups throughout the universe.
The very nature of Space-Time to create particles at the Quantum level from pure energy explains the existence of EVERYTHING we see in our universe without the need of a Big Bang Theory.
1. @Arnie:
Don't really want to look stuff up so will attempt to answer from the top of my head.
The expansion from inflation is already exceeding the speed of light; it is space itself that is expanding, dragging the galaxies with it, similar to blowing up a balloon.
Space is very, very diffusely filled with hydrogen gas, dust and the rest, and is still considered a vacuum, almost devoid of particles.
You can't class your sunlight scenario on earth as anything remotely similar to the red-shift from galaxies in space. We live under an atmosphere pressing about 14.7 lbs per square inch on our bodies. You don't have a clue as to how red-shift is measured from the galaxies in space.
"Matter is being created and destroyed at the quantum level all the time"?? tell us how this is happening? (references)!!
The rest of what you are saying is something you better site some references for, or otherwise it is stuff you seem to be making up!
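For the record, a redshift is quantified by comparing a spectral line's laboratory wavelength with its observed wavelength, z = (lambda_obs - lambda_rest) / lambda_rest. A sketch with a hypothetical measurement (the observed value is an assumed example):

lambda_rest = 656.3       # H-alpha emission line in the lab, nanometres
lambda_obs = 721.9        # hypothetical value measured in a galaxy's spectrum, nm
z = (lambda_obs - lambda_rest) / lambda_rest
print(round(z, 3))        # ~0.1; the whole line pattern slides together,
                          # whereas dust dims and reddens light without
                          # shifting the positions of spectral lines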
2. All true. But the human mind thinks in point source to point source. In other words, every explanation requires a point source as a beginning and a point source as an end. So the idea that we came from a central point and will inevitably go back to a central point is the only explanation the mind can handle.
48. Episode 2 - Critical comment on the concept that the world around us moves from an organized state to a more disorganized state. This is the second law of thermodynamics, and it is fundamentally wrong.
The examples shown in the episode 2 are of pages of a book being scattered, an egg breaking and so forth.
However the exact opposite is true in our universe. Our universe moves continuously from a disorganized state to a more organized state.
How can I say this?
Because life would not exist on the planet Earth, in our solar system, in our galaxy, in our local group of galaxies if the law of thermodynamics were true. Meaning it would be impossible for life to exist on the planet Earth, orbiting our sun, within our solar system, within our galaxy, within our local group of galaxies if the true fundamental law of the universe were not to move from a disorganized state to a more organized state.
Real life observations of the universe moving from a disorganized state to a more organized state.
1. Space-Time transforms energy into particles with mass.
2. These particles become hydrogen atoms.
3. Hydrogen atoms condense into hydrogen clouds.
4. Hydrogen clouds become suns.
5. The fusion process in a sun creates the other elements of the periodic table.
6. A sun goes supernova, and in the process a new sun and planets are formed.
7. The newly created elements allow for the creation of life on a planet.
8. That life evolves to become more and more complicated.
9. That life transforms the elements around it into more complex structure and tools.
10. I am a human being, the most advanced animal on the planet Earth, using a laptop computer to watch a program created by other human beings, stored on a hard drive and sent across the internet, the most advanced communications network on the planet Earth.
Conclusion: The fundamental law of the universe is to move from a disorganized state to a more organized state. One only has to look around and realize that this is the case.
The universe DOES NOT, as a fundamental rule, move from an organized state to a disorganized state. If it did, hydrogen atoms would not be formed in Space-Time, hydrogen gas clouds would not be formed, suns would not be formed, other elements would not be formed, planets would not be formed, life could not exist and this program would NEVER have been created.
Therefore, based on the evidence that surrounds all of us in everyday life, the fundamental law of the universe is to move from a disorganized state to an organized state.
1. Others better equipped to do so will be along shortly to explain how you've misapprehended the laws of thermodynamics. Me? I'm just one of the local boys, lol.
2. While you have somewhat correctly elaborated the origin of life on earth, you fail to disprove the second law of thermodynamics. While we may move to a more philosophically "organized" state, the entropy of the universe has always been increasing. Entropy is not exactly order to disorder, but rather an orderly state expanding from a tight mass to a gaseous, sparse plane. The examples used in the documentary are only metaphors.
3. I think you're confused.
When things get more complicated, they do not get more orderly, but rather more chaotic.
Chaos however is not fundamentally bad, as it paves the way for new more complicated orders:
As you explained in the formation of the Universe, our Sun, Earth, Evolution: all these have changed throughout time into increasingly more complicated formations.
However, I do not see how any of this is progressing towards more perfect order.
The opposite is probably more true, as the universe prior to the big bang, in its simplicity, was the order.
However, simplistic order that is unchanging is disadvantageous; disorder stands in contrast, and the evolution of events creates the illusion of time as progressive...
Therefore Disorder creates new order, which then through time becomes disorder for a new order.
Hope that helps
4. Kind of like my sphere-shaped multi-universe idea that I created four hours ago. I liked your conclusion. I've never done any science classes before, but I find the subject like an endless debate filled with unlimited questions, a journey of thought!
And I was wondering: who would I send a copy of my book of 100 new inventions to when it's finished? I'm at 42 at the minute, from designs to unknown new inventions.
5. I agree with you. I don't want to sound like Sherlock Holmes or anything, but this seems so obvious... it's elementary. lol
6. How do you define "organised"? Or disorganised, for that matter?
Whether something is organised or not is simply a self-projected image of the mind. Without words and thought and human perception it's all just action; it just "is"...
7. No, the second law of thermodynamics is correct as per our present understanding of physical laws.
Secondly, the universe moves from lower entropy to higher entropy. This is one of the key reasons why we cannot create a perpetual-motion machine, or why we can't harness energy with exact conversion, among many other examples.
The march towards higher entropy is also the reason for emergent properties.
The examples of the hydrogen clouds becoming suns are accurate, but a scientific treatment and understanding would tell you that this, and all the other examples you mentioned, really do support the march of the second law of increasing entropy.
One of the major research questions still being investigated is why the universe started in a state of extremely low entropy compared to the entropy we observe today. This is a valid question, but in no way does it challenge the onward march of entropy.
Also our perception biases us towards a more particle nature of our reality, but for a deeper understanding we must look at the field nature of the universe and reality.
49. @Epicurus...will certainly put his mark here.
Be back tomorrow.
1. To be honest, when Waldo is around, I'm not needed. lol
2. No one can replace you... but I agree, Waldo is a master.
3. No, no, no... No master, trust me. You guys are making me blush here. Seriously though, I am just relaying what other men figured out, not me. The key to understanding it, I mean really developing a feel for it, is mathematics. I don't mean memorizing the equations or learning how to correctly calculate them; that is important, of course, but it doesn't explain what the equation means to the real world we live in. You have to study the mechanics of the equation, play with the variables and see what results you can get. Still, I have a long way to go, trust me.
4. You are always needed, Epi; everyone has their own unique way of explaining things, their own unique insight into the standard equations and laws that regulate our world. I am a far cry from an expert; I am just relaying what my professors have taught me, really. Thanks for the compliment though, it means a lot coming from you.
50. @waldo
wow, that's a lot of detail on entropy & thanks, but that wasn't really what I was getting at. Simply put: is the bias entropy places on how things "tend" to go all there is to the matter of time directionality? Nature [as we experience it] occurs forward; the processes only have a phenomenological coherence forward. The prospect of a ball lifting off the ground, flying into a bat I'm unswinging backward, which then accelerates into the pitcher's hand, is beyond merely "improbable". Such a prospect only has relevance in our universe as the inverse of the prospect we know.
You seem to be saying a new alternative universe is spinning off of every permutation not taken in the quantum space. Which seems, if I understand the concept, a rather profligate use of universes. The past is not, as we experience it, perhaps as indefinite as the theory of quantum physics would imply, according to your remarks. If I have eggs for breakfast, that observation will be the same no matter who reliably observes the fact.
At any rate, for a cosmos with so many potential careening alternatives, ours seems to be tolerably consistent and isolated, however seemingly arbitrary.
1. @RileyRampant:
Yes, others can observe you having eggs now, but what of one year from now? Unless you took a picture with the date on it, just by thinking back one year, "what did I have for breakfast?", probabilities come into play for the (unobserved) past that effectively change the past, and that changes the future. From Hawking's "Grand Design": the past is not real. Can a person grab hold of it in his hands? No; only fleeting (or not so fleeting) memories, which by themselves may not hold all the nuances of what you think happened in the past. Things may be forgotten or even added.
And it is not new universes forming in the present, but the unlimited probability field that forms new universes for you every Planck second, from all the choices that are offered: should I do this or that, this direction or that direction, etc. You yourself, taking into consideration other interactions, form your own reality. And keep in mind the choices you did not take that were offered are still just as real and viable; they exist in alternate realities, from your picture book of snapshots. Re: "Julian Barbour" and his "End of Time" theory.
2. I see what you wrote... the only present that is part of that comment is in the spaces between the words.
Made me think... there is a way to think that the past and the future are the only times to exist in the mind. The present may be like a Higgs... we can never grasp it; it is sandwiched between the two and pushed flat, like by those two plates.
I have felt the present but I can't explain it... all I can say is, it was white holes into black holes.
Weird? OK... I don't mind that.
51. Interesting that multiverses provide theoretical support to:
constant inflation
string theory dimensional contour 'random' appearance
dark energy value 'random' appearance
in the sense of showing, at least, that these 'arbitrary' values might be understood as instances distributed amongst other universes
although I didn't catch the basis for asserting that such quantities WOULD distribute randomly, merely that the incidence of multiverses would provide an OPPORTUNITY under which such a variance might occur.
The treatment of time directionality as a mere consequence of entropy seemed incomplete. There are causal relations linking before & after which this treatment seems to neglect, unless we consider the entropy discussion a short-hand, or abstraction, for these relations.
Most of this stuff is over my head, too. The fabric of the cosmos is wilder & woollier than most would, could, or would perhaps even choose to imagine.
1. @RileyRampant:
Time directionality? There is no time directionality; that is, according to Einstein's theory of special relativity, which requires that all of spacetime be present at once.
And entropy? Which entropy? Which reality? Which universe? The one that we are apparently in? The one that we are in is flipped every Planck second by our nows into a new universe; our nows are instantly transposed into our past, giving us our flow of time. Our past is not even real.
"Richard Feynman", in his "sum over histories", demonstrated that subatomic particles traverse infinite paths through spacetime, implicating infinite histories for any particle, which of course means many worlds, multiverses.
In Brian Greene's "Elegant Universe" book there are quilted... inflationary... brane... cyclic... landscape... quantum... holographic... simulated... ultimate (multiverses).
2. I agree with the idea wholeheartedly that the past has a symmetry with the future as in there are multiple pasts and futures. It makes an interesting case in religion as well, as from the standpoint that most religions involve time travelling beings, all religions can simultaneously be right and wrong.
3. You got it exactly: entropy is the essence of those causal relationships, unless humans intervene, that is. You see, I can push an uphill or nonspontaneous reaction to completion and actually reduce the amount of entropy instead of increasing it. Check out the Gibbs free energy equation and you will see what I mean. The catch is that after that reaction occurs, the products will eventually spontaneously decay into the lowest energy configuration, or zero-point energy. This decay from a higher to a lower energy configuration is entropy, and it causes us to see time as moving in one direction; it also is what drives the spontaneous reactions that happen in nature. Methane igniting is a great example: all one need do is provide enough energy to reach what's called the activated complex, and the reaction will then take off. Methane combines with oxygen to produce carbon dioxide and water, plus about 890 kJ (roughly 211 kcal) of energy per mole of methane. Now we have turned one compound, which stored a great amount of potential energy, into two that have simpler structures and store much less potential energy that can be easily accessed; we have increased the amount of entropy in the universe. This is why you never see water and CO2 spontaneously reacting to become methane; this would break the second law of thermodynamics, as the atoms would be moving to a more ordered, higher-energy state. The same goes for why you never see a bunch of bricks spontaneously assemble into a wall, unless some energy or force intervenes, like a brick layer.
Now, what really freaks us out is that quantum mechanics doesn't seem to follow these rules. Instead it follows the rules of probability: it is improbable that the bricks will self-assemble in any reasonable amount of time, but if we could wait for billions and billions of years, the probability works out that it could happen. Feynman gave us an equation to work out that probability, and it makes excellent predictions. Due to the nature of the equation (a late operation is to divide by Planck's constant), the larger the object and area you are dealing with, the longer you have to wait for it to exhibit quantum characteristics. If it is as small as, say, an electron, it will continuously pop in and out of existence, here then way over there. If it is, say, the size of a baseball, it takes billions and billions of years to do the seemingly impossible; this is why we never see it. I recommended a lecture to Jack below; it explains all of this in great detail, you should check it out.
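The Gibbs criterion mentioned above, dG = dH - T*dS, is easy to run for the methane example. A sketch using rough textbook values (approximate, for illustration only):

# CH4 + 2 O2 -> CO2 + 2 H2O at standard conditions (approximate values)
dH = -890.0e3    # enthalpy change, J per mole of CH4
T = 298.0        # temperature, K
dS = -243.0      # entropy change of the reacting chemicals, J/(mol*K)
dG = dH - T * dS
print(dG)        # ~ -8.2e5 J/mol; negative, so the burn is spontaneous
                 # (the heat dumped into the surroundings raises total entropy,
                 # even though the chemicals themselves become more ordered)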
52. Something I don't get: if empty space causes the two square metal plates to touch each other, how can it be the principal actor causing the expansion of the universe? Isn't it supposed to contract instead? Is there anything I'm missing here?
1. It's not the prime motivator of expansion; dark energy is. Vacuum energy, which is the energy that exists in all empty space, is what pushes the plates together. These are two different things. Unfortunately, they don't know much about dark energy other than that it must exist, or gravity would have stopped/slowed expansion by now. Instead expansion is speeding up, with no apparent cause.
53. Brilliant documentary!
1. OI wife where you been!?
2. Been here all the time, but quiet :)
54. The documentary is partial with regard to the Higgs boson particle... not a single word about S. N. Bose, who was also responsible for this particle...
55. That's bad math, Jack.
56. But how many smithereens make up a single Quark?
57. Watched the first hour.
Honestly?... If you've seen a number of these kinds of docs before, you could skip the first part of this as a rehash of background material you're already bound to be only too familiar with: Billiards... Space as a taut fabric with a feckless bowling ball rounding out an aesthetic butt cheek... Little satellites of cue ball paparazzi... Boring, right? Pretty much the same old illustrations and explanations seen and heard a hundred times, because there really isn't any better way to lay it all out for the People of the Word, which is the vast majority of us.
But precisely at minute 25 it started to get mighty interesting. I'm glad I stuck with it, because from that point on, even though most of the material was more or less familiar to me, they actually managed to approach it in ways I don't, for the most part, ever remember seeing before. Clear and brief, too, which I really appreciated - not overly complicated with all those tangential metaphors, which is a real risk when programs like this get rolling. The explanation for the Higgs was the best I've ever seen, for example, and I realize now I had pretty much completely the wrong idea about how it must go about doing what it does. The info here at least gave me enough to take a better swipe at it, since it's impossible for me to get a grip on it.
But most especially, the section on the possibilities with information stored on the "surfaces" of Black Holes, and the extension of that concept to the entire Universe, was mind-bending and inspiring. Obviously, I'm not a scientist, but I got the impression watching this that, somehow, these deeply mysterious objects may be the real key to everything, particularly insofar as the future is concerned, and not just because of the fact they will be the last things left in the Universe, quadrillions of years from now. I got the impression, to be blunt about it, that they may be involved ultimately in keeping things going in cycles, perhaps elsewhere for the time being, and perhaps "here" in the very far future. They save information, nothing of it is ever lost, the entire Universe will eventually be swallowed up by them, so where does this information end up? Is it possible it can be "assembled" somewhere as a new Big Bang? As more than one? Are they the Trashcans of the Gods, providing ways and materials for recyclable Cosmos, or just really long-lasting vacuum cleaners that the plug is finally, irrevocably gonna be pulled on one day?
Anyway, whatever may be the case with all that, after the typical start, this one turns out remarkably focused and succinct, and didn't leave me feeling nearly as muddleheaded as docs of this sort are prone to.
Moving on to the second hour.
1. @P:
The doc was good, but the book of course is many times better; it goes into much, much more detail.
2. I didn't think black holes would be the last thing in the universe. I saw a documentary that talked about all the stars burning out and leaving these balls of carbon that radiate leftover heat until finally even that energy decays due to entropy and the universe goes to absolute zero, dead. Then I saw another one that said no, it will all rip to pieces because of expansion driven by dark energy, and another that says no, expansion will stop and gravity will win as the universe is crushed into a singularity, only to expand in a big bang again. But I missed the theory that black holes would swallow it all eventually; not saying it doesn't exist, just that I missed it. Could you lay out the logic in more detail or offer a link, please? I am curious.
I know of at least one paper that would disagree with that theory; you can find it at arxiv dot org, which is the Cornell University library online. The paper is titled Dark Matter Accretion into Supermassive Black Holes and is specifically at arXiv. org > astro-ph > arXiv: 0802. 2041v1 (no spaces of course). It explains that less than ten percent of the accretion disk of supermassive black holes can consist of dark matter; in other words, black holes would have a very hard time swallowing it all. Remember, there is more of it than regular matter by far. The way they work this out is by measuring the x-ray emissions and comparing that to the amount of matter being consumed; the x-rays should account for ten percent of the mass crossing the event horizon if it is all regular matter, and that's exactly what they find. Anyway, it's a good paper; I think you would enjoy reading it. Merry xmas, by the way.
3. I believe it's in one of the docs here that it says the last thing to be going on in the Universe before the absolute heat death will be black holes emitting Hawking Radiation over trillions of years (barring the Big Rip thing, of course). Can't remember the title, but the one where the narrator is literally walking up a flight of steps one at a time while he delineates the different stages in the life-cycle of the Universe. Long, long after all those dead, regular stars finally wink out for good will just be those gravity wells syphoning off HR for eon after eon, until finally they, too, are gone.
Fact is, now that I think about it, I'm sure I misremembered about black holes swallowing up everything, and was actually thinking about the part in the cycle in which you wouldn't even be able to see other stars in the sky because of the distances separating them, and of the inability of new stars to form because of entropy. But this is really to say, when I think about everything in the outside Universe being finally dead and finally cold, there's a part of me that wants to, I suppose, anthropomorphize the nature of black holes, making them into collective gods who have, in some way as yet unforeseen, somehow saved enough of the contents and information of the Universe to allow the Story to begin again, here later, or someplace else now. Do you remember that part about the 2 dimensional information on the surface of the black hole being capable of being rendered 3 dimensionally inside of it? And of our Universe right now potentially being the same thing, writ (or projected) large? Man, what does that really mean? For all of this talk about their ability to suck up nearly everything, including light, doesn't there seem to be a strange Looking Glass quality about them? I just have this sense that there's a whole Universe on the other side of one, and that we may, in essence, be in one right now, but I guess it's just a fantasy born out of a desire to cheat death, which is probably the oldest human story there is, right?
I'll read that paper, but I sure don't expect to understand very much of it, lol. A few months ago I went on a Mersini-Houghton kick (she is SO sexy!) and pulled up four of her papers online. Of the four, I was able to understand anything at all of precisely one of them. Of course, never having gotten any farther than algebra and geometry, I certainly didn't expect to, but it was fun to try it anyway. I was giggling the whole time... I suppose I just kept hoping a breast would pop out somewhere. Never did English sound so much like Latin! If I thought it would do any good, I would try and advance more in mathematics, even at my age. In fact, about ten years ago I did try it, but just confirmed that I'm simply not wired for that kind of thinking. If the same information could be compressed into some type of formal musical structure, I'd probably have a real shot at understanding it a lot better.
Hope you had a good xmas, too, Wald0.
58. This was awesome, if you have the time ;)
59. The Higgs field. The field that allows sub atomic particles to gain mass. The Haydon collidor has been designed specifically to smash sub atomic particles together at near the speed of light in order to find a Higgs particle / Higgs field that produces gravity.
To be honest I think these quantum physicists need to rethink their approach.
If the Higgs field is responsible for subatomic particles obtaining gravity, what on Earth makes them think that they can find the Higgs field by smashing subatomic particles together?
It comes across as being completely silly. I'm sorry.
However the rest of the documentary looking at space as something tangible is really quite smart and explains the creation of our universe, without using the Big Bang Theory that is fundamentally wrong.
What gives particles their mass?
Einstein's equation E = mcc provides the answer m = E/cc
It has to do with the energy imparted to a quantum particle in space that then determines its mass value. This is done at the quantum level.
There is no Higgs field. There is no god particle. And there is no field that assigns mass to a particle.
What there is at the quantum level is a certain amount of energy that is applied to quantum particles / bundles of quantum energy, which then transforms them into subatomic particles with a known mass.
Smashing subatomic particles together in the Hadron Collider can never reveal the Higgs field / Higgs particle, because it does not exist.
Quantum physicists need to rethink the whole process of quantum particle creation.
When this concept is expanded out, it explains the fundamental creation of the entire universe from empty space without a Big Bang.
Matter is created out of empty space. Electrons and protons are created that then combine to form hydrogen, the basic element of the universe.
Hydrogen atoms accumulate through electrostatic forces into hydrogen clouds.
The mass of the hydrogen clouds then warps space, resulting in a hydrogen gas ball that then begins a process of nuclear fusion becoming a sun.
The universe can NEVER run out of hydrogen atoms to create hydrogen gas clouds to enable star creation, because the very nature of space is to create quantum energy and quantum energy particles, with a mass dependent on the energy input at the quantum level.
1. Please cite all your papers in the literature on all this amazing theory.
....that is what I thought.
The internet is available to anyone with a keyboard and a connection.
BTW, the sun does not produce the heavier atoms of the periodic table beyond helium; heavier atoms require supernovae.
2. I agree with you on all counts. I'm not a scientist, or even an amateur scientist - I'm just a hardcore science enthusiast but I have come to the same conclusion myself. I think you put it better than I would though. I just wish I had gone down the science path in Uni instead of the arts. I didn't realise until it was too late what I was missing out on. Listen up kids! Become scientists or you'll regret it! The more you learn the cooler it gets!
3. E=mcc is not Einstein's equation. It is E=mc2. A huge difference.
Higgs particles interact with other particles. By smashing particles together they hope to create the Higgs particle, which would be detectable, which would in turn provide evidence of the Higgs field. Scientists believe they were close, at the very limits of its capabilities, before the collider at CERN was shut down in 2000 for expansion. There has been encouraging indirect evidence of the Higgs particle. Also, the theory explains all facets of the Standard Model, which other theories lack. Evidence is there that these particles exist, and science must go where the evidence leads. It would be silly not to.
People, like myself, have a deep interest in science. However, my knowledge is limited since I do not have the education in those fields. I must trust those who have worked on these projects for years...the ones who have devoted their entire lives to the study of and the experimentation in particle physics. It would seem unlikely that scientists of different nationalities, political systems and research institutions would all agree that the Higgs particle exists and that it can be found by colliding sub-atomic particles. I would never assume to know better than these individuals unless I had unimpeachable sources that contradict them. I happen to notice that you do not have any such sources.
4. Actually E=M(CC) would be the same thing as E=MC2, he just forgot the parentheses. Also, they are not hoping to create the Higgs particle but to knock it loose, so to speak. The physics says it is possible to dislodge a piece of the Higgs field; think of firing a bullet into water, where you may knock one molecule of water loose from the rest when the bullet hits it. All you have to do is apply a force in the right vector orientation, strong enough to overcome the attractive forces that hold water molecules together. At least this is how it was explained to me by my physics professor; I don't mind admitting, though, that the work they are doing at the LHC is way over my head.
I can hold my own when it comes to Newtonian or relativistic physics, but I never got far enough to really delve into quantum mechanics in much detail. Being a chemist, however, I have studied extensively the quantum nature of atoms and the first-level subatomic critters: electrons, protons, neutrons (electrons mostly). It is the number of protons, neutrons, and electrons each element has that gives it its own individual characteristics and places it in a family or group. It is how electrons interact with other electrons that decides how compounds form and what structure and characteristics they will have, well, that and electrical charge in general. But that stuff doesn't tell me much about things like the Higgs field or quarks, the really tiny, odd stuff.
There is a documentary out right now; well, it isn't really a documentary, it is Brian Cox doing a presentation on quantum mechanics at the Royal Society lecture hall in London. He does such a fabulous job of explaining both the Pauli exclusion principle and Heisenberg's uncertainty principle, along with the wave nature of electrons, probability, etc. It's got to be one of the best lectures I have seen. You should check it out, everyone should in fact; it is called Professor Brian Cox: A Night With the Stars. It's not only educational, it's hilarious at some points; a lot of comedians and actors are there helping him with his demonstrations. Merry xmas, hope you enjoy it.
5. Now that you mention it, I believe that is how it was explained in the documentary also. I had been reading on a Fermilab site and the word "created" was used in the explanation. I have been trying to get a handle on this stuff after watching the documentary about CERN that was posted recently on this site. It is still way above my head, but it is slowly becoming a little clearer. These documentaries, which prompt me to investigate further through reading material, and the insights that I get from people like yourself have been a great help.
I found the presentation by Brian Cox that you recommended and thoroughly enjoyed it. Thank you very much and a Happy Holiday Season to you too.
6. jack, multiplication works no matter the order of multiplication... MCC is equal to M(CC), or C(MC), or C(CM)... try it with simple numbers and see... if C equals 5, and M equals 8 M(CC)= 200... C(MC) or C(CM) both also resolve to an answer of 200....thus, E=MCC is indeed a valid mode of scribing einsteins equation, even if not altogether "correct" from an accepted notation view (one might also note that because of the shortcomings of our english keyboards, the typed version of the equation so often seen (E=MC2) is absolutely INCORRECT, as the equation reads "ee equals em cee squared", and NOT "ee equals em cee doubled", which is what the "keyboard version" would imply).... i figured actually explaining why is a bit better than just "bad math, jack".... lol
7. You're right. All I saw was the notation and automatically assumed it to be wrong.
8. That's called the commutative property of multiplication. The reason I still use the parentheses, and why it is considered correct notation, is because once you introduce negative and positive integers it can get very confusing without them. That never happens in this particular equation, since all the factors are positive numbers, but it is a habit. It also serves to emphasize that you are squaring the constant, which means you are working with a quadratic equation, which becomes important when graphing fission reactions. The reason I like to see it emphasized, though, is because when you are explaining to someone how much energy is contained in a very small amount of mass, pointing out that the speed of light is squared gives them a good impression of the huge numbers you can potentially come up with.
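To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python (the one-gram mass is an arbitrary choice, purely for illustration):

```python
# Rest energy of 1 gram of matter via E = m*c^2 (illustrative numbers only).
c = 2.998e8          # speed of light, m/s
m = 1e-3             # mass, kg (1 gram)

E = m * c * c        # joules; writing c*c keeps the "squared" explicit
print(f"E = {E:.3e} J")                          # ~9.0e13 J
print(f"~ {E / 4.184e12:.1f} kilotons of TNT")   # 1 kt TNT = 4.184e12 J
```

One gram comes out to roughly twenty kilotons of TNT, which is exactly the kind of huge number that squaring the constant produces.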
9. absolutely agree, waldo, that the "correct" notation is really essential once you wander out of the realm of simple multiplication of positive numbers, and the parenthises make it a bit clearer even with the simple stuff, but i can also understand not wanting to use the popular internet chat notation, which can just serve to muddy the waters for somebody just beginning to grasp relativity (and results in much less impressive potential
10. though i do like your theory, your math to explain your point has a fundamental flaw... though potential total energy of an atom that exists in its "particle bonds" (for lack of a better term in my lexicon, lol) is expressed very well by einsteins equation, that is all energy stored far ABOVE the quantum level... the relativity equation (and the rest of einsteins work) is based on the "tiny 3" (electrons, neutrons, protons), but none of the "subcompact models" of the quantum world...thus, your equation is just the "proof step" of einstein's, and CANT prove the "cause" of mass, unless of course carried and proven down to those miniscule scales, step by step until one arrives at a point where there are no more particles to "break down"... i also have to point out that something has to impart said energy to "get the ball rolling"(cause the first level of particle to excite into existance) and begin the various energy transfers that would result in the formation of more complex, larger particles... to illustrate, let's suppose our atom is a spinning flywheel.. einsteins equation illustrates only the energy stored in the flywheel's motion, but has no way of quantifying the energy used to produce the flywheel when it was manufactured, and gives no clue as to how the bessemer furnace was ignited for smelting the ore...
11. Thanks for your insights. I was under the same misapprehension as Arnie and I found your explanation very clear.
60. to infinity and beyond
61. I like that scientists are continually debunking current scientific theory. Therefore creating more bunk.
1. It's still better than Dogma.
62. Great series based on a great book.
1. that sounds freakin sweet! and also would mean that all these events, getting infinitely smaller and smaller and smaller would (or has) happened almost simultaneously to the most "unimaginably large" which really makes the buddhist belief that everything has already been achieved sound really appealing.
monkeys with typewriters man! lovely thought
64. Amazing, I'm halfway through Brian Greene's book "The Fabric of the Cosmos". This is a must watch!
1. I recently watched a movie. I think it was called "The watchmen"
I imagine "Doctor Manhattan" would be your favourite ever super hero. (Mr Quantum)
2. You betcha!
65. I watched this under a different name a few days ago, great series.
1. In your alter ego within another multiverse?
66. Awesome series.
Awesome, no need for more words. |
Having multiplied both parts of (11.6) (or (11.8)) by $c_i^*$ and summed over $i$, we obtain the same result representing the relationship between the Lagrange multipliers:

$$\lambda_1 + \lambda_2 \bar{E} = n + m. \quad (11.9)$$
The sample mean energy $\bar{E}$ (i.e., the sum of the mean potential energy, which can be measured in the coordinate space, and the mean kinetic energy, measured in the momentum space) was used as the estimator of the mean energy in the numerical simulations below.
Now, let us turn to the study of statistical fluctuations of a state vector in the problem under consideration. We restrict our consideration to the energy representation in the case when the expansion basis is formed by stationary energy states (that are assumed to be nondegenerate).
Additional condition (11.3), related to the conservation of energy, results in the following relationship between the components:

$$\delta E = \sum_{j=0}^{s-1} \left( E_j c_j^* \,\delta c_j + E_j \,\delta c_j^*\, c_j \right) = 0. \quad (11.10)$$
It turns out that both parts of the equality can be reduced to zero independently if one assumes that the state vector to be estimated involves a time uncertainty, i.e., may differ from the true one by a small time translation. The possibility of such a translation is related to the time-energy uncertainty relation.
The well-known expansion of the psi function in terms of stationary energy states, in view of the time dependence, has the form ($\hbar = 1$):

$$\psi(x,t) = \sum_j c_j \exp(-iE_j (t - t_0))\,\varphi_j(x) = \sum_j c_j \exp(iE_j t_0)\exp(-iE_j t)\,\varphi_j(x). \quad (11.11)$$
In the case of estimating the state vector up to a translation in time, the transformation

$$c_j' = c_j \exp(iE_j t_0) \quad (11.12)$$

related to the arbitrariness of the zero-time reference $t_0$ may be used to fit the estimated state vector to the true one.
The corresponding infinitesimal time translation leads us to the following variation of a state:

$$\delta c_j = i t_0 E_j c_j. \quad (11.13)$$
Let $\delta c$ be any variation meeting both the normalization condition and the law of energy conservation. Then, from (10.15) and (11.3) it follows that

$$\sum_j (\delta c_j)^* c_j = i\varepsilon_1, \quad (11.14)$$

$$\sum_j (\delta c_j)^* E_j c_j = i\varepsilon_2, \quad (11.15)$$

where $\varepsilon_1$ and $\varepsilon_2$ are arbitrary small real numbers.
In analogy with Sec. 10, we divide the total variation $\delta c$ into the unavoidable physical fluctuation $\delta_2 c$ and the variations caused by the gauge and time invariances:

$$\delta c_j = i\alpha c_j + i t_0 E_j c_j + \delta_2 c_j. \quad (11.16)$$
We will separate out the physical variation $\delta_2 c$ so that it fulfills the conditions (11.14) and (11.15) with a zero right-hand side. This is possible if the transformation parameters $\alpha$ and $t_0$ satisfy the following set of linear equations:

$$\alpha + \bar{E}\, t_0 = \varepsilon_1, \qquad \bar{E}\,\alpha + \overline{E^2}\, t_0 = \varepsilon_2. \quad (11.17)$$
The determinant of (11.17) is the energy variance

$$\sigma_E^2 = \overline{E^2} - \bar{E}^{\,2}. \quad (11.18)$$
We assume that the energy variance is a positive number. Then there exists a unique solution of the set (11.17). If the energy variance is equal to zero, the state vector has only one nonzero component. In this case, the gauge transformation and the time translation are dependent, since they reduce to a simple phase shift.
In full analogy with the reasoning on the gauge invariance, one can show that, in view of both the gauge invariance and the time homogeneity, the transformation satisfying (11.17) provides minimization of the total variance of the variations (the sum of squares of the absolute values of the components). Thus, one may infer that the physical fluctuations are the minimum possible fluctuations compatible with the conservation of norm and energy.
Assuming that the total variations reduce to the physical ones, we set hereafter

$$\sum_j (\delta c_j)^* c_j = 0, \quad (11.19)$$

$$\sum_j (\delta c_j)^* E_j c_j = 0. \quad (11.20)$$
The relationships found yield (in analogy with Sec. 10) the conditions for the covariance matrix $\Sigma_{ij} = \langle \delta c_i \,\delta c_j^* \rangle$:

$$\sum_j \Sigma_{ij}\, c_j = 0, \quad (11.21)$$

$$\sum_j \Sigma_{ij}\, E_j c_j = 0. \quad (11.22)$$
Consider the unitary matrix $U^+$ with the following two rows (the zero and first rows):

$$(U^+)_{0j} = c_j^*, \quad (11.23)$$

$$(U^+)_{1j} = \frac{(E_j - \bar{E})\, c_j^*}{\sigma_E}, \qquad j = 0,1,\ldots,s-1. \quad (11.24)$$
This matrix determines the transition to the principal components of the variation:

$$\delta f_i = \sum_j (U^+)_{ij}\,\delta c_j. \quad (11.25)$$
According to (11.19) and (11.20), we have $\delta f_0 = \delta f_1 = 0$ identically in the new variables, so that only $s-2$ independent degrees of freedom remain. The inverse transformation is

$$\delta c_i = \sum_j U_{ij}\,\delta f_j. \quad (11.26)$$
On account of the fact that $\delta f_0 = \delta f_1 = 0$, one may drop two columns (the zero and first columns) in the $U$ matrix, turning it into the factor loadings matrix $L$:

$$\delta c_i = \sum_j L_{ij}\,\delta f_j, \qquad i = 0,1,\ldots,s-1; \; j = 2,3,\ldots,s-1. \quad (11.27)$$

The $L$ matrix has $s$ rows and $s-2$ columns. Therefore, it provides the transition from the $s-2$ independent principal components of the variation to the $s$ components of the initial variation.
In the principal components, the Fisher information matrix and the covariance matrix are given by

$$I_{ij} = (n+m)\,\delta_{ij}, \quad (11.28)$$

$$\langle \delta f_i\,\delta f_j^* \rangle = \frac{\delta_{ij}}{n+m}. \quad (11.29)$$
In order to find the covariance matrix for the state vector components, we take into account the fact that the factor loadings matrix $L$ differs from the unitary matrix $U$ by the absence of the two aforementioned columns, and hence

$$(LL^+)_{ij} = \delta_{ij} - c_i c_j^* - \frac{(E_i - \bar{E})(E_j - \bar{E})\, c_i c_j^*}{\sigma_E^2}, \quad (11.30)$$

$$\langle \delta c_i\,\delta c_j^* \rangle = \sum_{k,r} L_{ik} L_{jr}^* \langle \delta f_k\,\delta f_r^* \rangle = \frac{(LL^+)_{ij}}{n+m}. \quad (11.31)$$
Finally, the covariance matrix in the energy representation takes the form

$$\Sigma_{ij} = \frac{1}{n+m}\left[\delta_{ij} - c_i c_j^*\left(1 + \frac{(E_i - \bar{E})(E_j - \bar{E})}{\sigma_E^2}\right)\right], \qquad i,j = 0,1,\ldots,s-1. \quad (11.32)$$
It is easily verified that this matrix satisfies the conditions (11.21) and (11.22) resulting from the conservation of norm and energy.
The mean square fluctuation of the psi function is

$$\int \langle \delta\psi^*\,\delta\psi \rangle\, dx = \int \Big\langle \sum_i \delta c_i^*\,\varphi_i^*(x) \sum_j \delta c_j\,\varphi_j(x) \Big\rangle dx = \sum_i \langle \delta c_i^*\,\delta c_i \rangle = \mathrm{Tr}(\Sigma) = \frac{s-2}{n+m}. \quad (11.33)$$
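As a sanity check on (11.32), the following short numerical sketch (the three-component state and the energy levels are arbitrary illustrative choices, not the simulation parameters used below) verifies the constraints (11.21) and (11.22) and the trace formula (11.33):

```python
import numpy as np

# Illustrative parameters (arbitrary choices)
c = np.array([0.6, 0.64, 0.48])     # real state-vector components, for simplicity
c = c / np.linalg.norm(c)           # enforce normalization: sum |c_j|^2 = 1
E = np.array([0.5, 1.5, 2.5])       # nondegenerate energy levels
N = 100                             # total sample size n + m

Ebar  = np.sum(c**2 * E)                  # mean energy
sigE2 = np.sum(c**2 * E**2) - Ebar**2     # energy variance, eq. (11.18)

# Covariance matrix, eq. (11.32)
dE = E - Ebar
Sigma = (np.eye(len(c)) - np.outer(c, c) * (1 + np.outer(dE, dE) / sigE2)) / N

print(np.allclose(Sigma @ c, 0))                      # condition (11.21)
print(np.allclose(Sigma @ (E * c), 0))                # condition (11.22)
print(np.isclose(np.trace(Sigma), (len(c) - 2) / N))  # trace, eq. (11.33)
```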
The estimation of the optimal number of harmonics in the Fourier series, similar to that in Sec. 7, has the form

$$s_{\mathrm{opt}} = \sqrt[r+1]{\,r f\,(n+m)\,}, \quad (11.34)$$

where the parameters $r$ and $f$ determine the asymptotics of the sum of squares of the residuals:

$$\sum_{j=s}^{\infty} |c_j|^2 = \frac{f}{s^{\,r}}. \quad (11.35)$$
The existence of the norm implies only that $r > 0$. In the case of a statistical ensemble of harmonic oscillators with finite energy, $r > 1$; if the energy variance is finite as well, $r > 2$.
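Reading (11.34) as the $(r+1)$-th root, a minimal numerical sketch (with made-up values of $r$ and $f$, since the true ones depend on the ensemble at hand) is:

```python
# Optimal number of harmonics, eq. (11.34): s_opt = (r * f * (n + m))**(1/(r + 1)).
# The values of r and f below are made-up illustrative choices.
r, f = 2.0, 5.0
n_plus_m = 100
s_opt = (r * f * n_plus_m) ** (1.0 / (r + 1.0))
print(round(s_opt))   # basis size, rounded to the nearest integer (here: 10)
```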
Figure 6 shows how the constraint on energy decreases high-energy noise. The momentum-space density estimator disregarding the constraint on energy (upper plot) is compared to that accounting for the constraint.

Fig. 6 (a) An estimator without the constraint on energy; (b) An estimator with the constraint on energy.

The sample of size $n = m = 50$ ($n + m = 100$) was taken from a statistical ensemble of harmonic oscillators. The state vector of the ensemble had three nonzero components ($s = 3$). Figure 6 shows the calculation results in bases involving $s = 3$ and $s = 100$ functions, respectively. In the latter case, the number of basis functions coincided with the total sample size. Figure 6 shows that when the constraint on energy was taken into account, the 97 additional noise components influenced the result much more weakly than in the case without the constraint.
12. Fisher Information and Variational Principle in Quantum Mechanics
The aim of this section is to show a certain relation between the mathematical problem of minimizing the energy by the variational principle in quantum mechanics and the problem of minimizing the Fisher information (more precisely, the Fisher information on the translation parameter) that may be used (and is really used) in some model problems of statistical data analysis (for details, see [36, 37]).
Let us show that there exists a certain analogy between the Fisher information on the translation parameter and kinetic energy in quantum mechanics. This analogy results in the fact that variational problems of robust statistics are mathematically equivalent to the problems of finding the ground state of a stationary Schrödinger equation [36, 37].
Indeed, the kinetic energy in quantum mechanics is (see, e.g., [29]; for simplicity, we consider the one-dimensional case):

$$T = -\frac{\hbar^2}{2m}\int \psi^*(x)\,\frac{\partial^2 \psi(x)}{\partial x^2}\, dx = \frac{\hbar^2}{2m}\int \frac{\partial \psi^*}{\partial x}\,\frac{\partial \psi}{\partial x}\, dx. \quad (12.1)$$

Here, $m$ is the mass of a particle and $\hbar$ is Planck's constant. The last equality follows from integration by parts, in view of the fact that the psi function and its derivative vanish at infinity.

Assume that the psi function is real-valued and equal to the square root of the probability density:

$$\psi(x) = \sqrt{p(x)}. \quad (12.2)$$

Then $\psi'_x = \dfrac{p'}{2\sqrt{p}}$, and hence

$$T = \frac{\hbar^2}{8m}\int \frac{(p')^2}{p}\, dx = \frac{\hbar^2}{8m}\, I(p). \quad (12.3)$$

Here, we have taken into account that the Fisher information on the translation parameter is, by definition [36],

$$I(p) = \int_{-\infty}^{+\infty} \left(\frac{p'(x)}{p(x)}\right)^2 p(x)\, dx. \quad (12.4)$$

The Fisher information (12.4) is a functional of the distribution density $p(x)$. Let us consider the following variational problem: it is required to find a distribution density $p(x)$ minimizing the Fisher information (12.4) at a given constraint on the mean value of the loss function:

$$\bar{U} = \int U(x)\, p(x)\, dx = U_0. \quad (12.5)$$

In terms of quantum mechanics, the "loss function" $U(x)$ is a potential.
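The identity $T = \frac{\hbar^2}{8m} I(p)$ is easy to check numerically. Below is a minimal sketch (in units $\hbar = m = 1$, with an arbitrary Gaussian width), using the fact that for a Gaussian density $I(p) = 1/\sigma^2$:

```python
import numpy as np

# Check T = I(p)/8 (in units hbar = m = 1) for a Gaussian density,
# for which the Fisher information is I(p) = 1/sigma^2.
sigma = 0.7
x  = np.linspace(-12, 12, 20001)
dx = x[1] - x[0]
p  = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dp  = np.gradient(p, x)
I_p = np.sum(dp**2 / p) * dx        # Fisher information, eq. (12.4)

psi  = np.sqrt(p)                   # psi = sqrt(p), eq. (12.2)
dpsi = np.gradient(psi, x)
T    = 0.5 * np.sum(dpsi**2) * dx   # kinetic energy, eq. (12.1), hbar = m = 1

print(I_p, 1 / sigma**2)            # both approximately 1/sigma^2
print(T, I_p / 8)                   # T matches I(p)/8, eq. (12.3)
```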
The problem under consideration is linearized if the square root of the density, i.e., the psi function, is considered instead of the density itself. The variational problem evidently reduces to the minimization of the following functional:

$$S(\psi) = \int_{-\infty}^{+\infty} (\psi'(x))^2\, dx - \lambda_1 \left( \int \psi^2(x)\, dx - 1 \right) + \lambda_2 \left( \int U(x)\, \psi^2(x)\, dx - U_0 \right). \quad (12.6)$$

Here, $\lambda_1$ and $\lambda_2$ are the Lagrange multipliers providing the constraints on the norm of the state vector and on the loss function, respectively.
From the Euler–Lagrange equation follows the equation for the psi function:

$$-\psi'' + \lambda_2 U \psi = \lambda_1 \psi. \quad (12.7)$$

The last equation turns into a stationary Schrödinger equation if one introduces the notation

$$\lambda_2 = \frac{2m}{\hbar^2}, \qquad E = \frac{\lambda_1}{\lambda_2}. \quad (12.8)$$
Minimization of the kinetic energy at a given constraint on the potential energy is equivalent to minimization of the total energy. Therefore, the solution of the corresponding problem is the ground state for a particle in the given field. The corresponding result is well known in quantum mechanics as the variational principle. It is frequently used to estimate the energy of a ground state.
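A minimal finite-difference sketch of this connection (the grid, the domain, and $\lambda_2$ are arbitrary illustrative choices): discretizing $-\psi'' + \lambda_2 U \psi = \lambda_1 \psi$ with the quadratic potential $U(x) = x^2$ and taking the lowest eigenvector recovers the Gaussian ground state discussed below.

```python
import numpy as np

# Ground state of -psi'' + lam2 * U * psi = lam1 * psi with U(x) = x^2,
# via a three-point finite-difference eigenproblem (illustrative settings).
lam2 = 1.0
n, Lbox = 1000, 8.0
x = np.linspace(-Lbox, Lbox, n)
h = x[1] - x[0]

# -d^2/dx^2 as a tridiagonal matrix, plus the diagonal potential term
main = 2.0 / h**2 + lam2 * x**2
off  = -np.ones(n - 1) / h**2
H    = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals, vecs = np.linalg.eigh(H)
psi0 = vecs[:, 0] / np.sqrt(h)      # normalize so that sum(psi0**2) * h = 1

# For lam2 = 1 the exact ground state is exp(-x^2/2) / pi**0.25 with lam1 = 1
exact = np.exp(-x**2 / 2) / np.pi**0.25
print(vals[0])                                # ~1.0
print(np.max(np.abs(np.abs(psi0) - exact)))   # small deviation from the Gaussian
```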
The representations of the kinetic energy in the two different forms (12.1) and (12.3) have been known at least since the works by Bohm on quantum mechanics ([38]; see also [30]).
The variational principle considered here is employed in papers on robust statistics developed, among others, by Huber [36]. The aim of robust procedures is, first of all, to find estimators of distribution parameters (e.g., translation parameters) that are stable against (weakly sensitive to) small deviations of the real distribution from the theoretical one. A basic model in this approach is a certain given distribution (usually the Gaussian distribution) contaminated by a few outlying observations.
For example, if the estimator of the translation parameter is of the M-type (i.e., of the maximum likelihood type), the maximum estimator variance (due to its efficiency) will be determined by the minimal Fisher information characterizing the distributions in a given neighborhood [36].
Minimization of the Fisher information shows the way to construct robust distributions. As is seen from the definition (12.4), the Fisher information is a positive quantity that makes it possible to estimate the complexity of the density curve. Indeed, the Fisher information is related to the squared derivative of the distribution density; therefore, the more complex, irregular, and oscillating the distribution density, the greater the Fisher information. From this point of view, simplicity can be achieved by minimizing the Fisher information at given constraints. The Fisher information may be considered as a penalty function for the irregularity of the density curve. The introduction of such penalty functions aims at the regularization of data analysis problems and is based on a compromise between two tendencies: to obtain a data description that is as detailed as possible while using functions without fast local variations [37, 39-41].
In the work by Good and Gaskins [39], the problem of minimizing a smoothing functional, equal to the difference between the Fisher information and the log likelihood function, is stated in order to approximate the distribution density. The corresponding method is referred to as the maximum penalized likelihood method [40, 41].
Among all statistical characteristics, the most popular are certainly the sample mean (estimating the center of the probability distribution) and the sample variance (estimating the spread). Assuming that these are the only parameters of interest, let us find the simplest distribution (in terms of the Fisher information). The corresponding variational problem is evidently equivalent to the problem of finding the minimum-energy solution of the Schrödinger equation (12.7) with a quadratic potential. The corresponding solution (the density of the ground state of a harmonic oscillator) is the Gaussian distribution.
If the median and quartiles are used as the given parameters instead of the sample mean and variance, which are very sensitive to outlying observations, one can find a family of distributions that is a nonparametric analogue of the Gaussian distribution and accounts for possible data asymmetry [37].
Let us state a short summary.
The root density estimator is based on the representation of the probability density as a squared absolute value of a certain function, which is referred to as a psi function in analogy with quantum mechanics. The method proposed is an efficient tool to solve the basic problem of statistical data analysis, i.e., estimation of distribution density on the basis of experimental data.
The coefficients of the psi-function expansion in terms of an orthonormal set of functions are estimated by the maximum likelihood method, providing optimal asymptotic properties of the method (asymptotic unbiasedness, consistency, and asymptotic efficiency). The optimal number of harmonics in the expansion should be chosen on the basis of a compromise between two opposite tendencies: the accuracy of the estimation of a function approximated by a finite series increases with an increasing number of harmonics, but the statistical noise level also increases.
The likelihood equation in the root density estimator method has a simple quasilinear structure and admits the development of an effective fast-converging iteration procedure, even in the case of multiparametric problems. It is shown that an optimal value of the iteration parameter should be found by the maximin strategy. The numerical implementation of the proposed algorithm is considered using the set of Chebyshev-Hermite functions as a basis set of functions.
The introduction of the psi function allows one to represent the Fisher information matrix, as well as the statistical properties of the state vector estimator, in simple analytical forms. The basic objects of the theory (state vectors, information and covariance matrices, etc.) become simple geometrical objects in the Hilbert space that are invariant with respect to unitary (orthogonal) transformations.
A new statistical characteristic, the confidence cone, is introduced instead of a standard confidence interval. The chi-square test is used to test the hypotheses that the estimated vector equals the state vector of the general population and that both samples are homogeneous.
It is shown that it is convenient to analyze the sample populations (both homogeneous and inhomogeneous) using the density matrix.
The root density estimator may be applied to analyze the results of experiments with micro-objects as a natural instrument to solve the inverse problem of quantum mechanics: the estimation of the psi function from the results of mutually complementing (according to Bohr) experiments. A generalization of the maximum likelihood principle to the case of statistical analysis of mutually complementing experiments is proposed. The principle of complementarity makes it possible to interpret the ill-posedness of the classical inverse problem of probability theory as a consequence of the lack of information from the canonically conjugate probabilistic space.
The Fisher information matrix and covariance matrix are considered for a quantum statistical ensemble. It is shown that the constraints on the norm and energy are related to the gauge and time translation invariances. The constraint on the energy is shown to result in the suppression of high-frequency noise in the approximated state vector.
The analogy between the variational method in quantum mechanics and certain model problems of mathematical statistics is shown.
1.A. N. Tikhonov and V. A. Arsenin. Solutions of ill-posed problems. W.H. Winston. Washington D.C. 1977.
2.L. Devroye and L. Györfi. Nonparametric Density Estimation: The L1 -View. John Wiley. New York. 1985.
3.V.N. Vapnik and A.R. Stefanyuk. Nonparametric methods for reconstructing probability densities Avtomatika i Telemekhanika 1978. Vol. 39. No. 8. P.38-52.
4.V. N. Vapnik, T. G. Glazkova, V. A. Koscheev et al. Algorithms for dependencies estimations. Nauka. Moscow. 1984 (in Russian).
5.Yu. I. Bogdanov, N. A. Bogdanova, S. I. Zemtsovskii et al. Statistical study of the time-to- failure of the gate dielectric under electrical stress conditions. Microelectronics. 1994. V. 23. N 1. P. 51 – 59. Translated from Mikroelektronika. 1994. V. 23. N1. P. 75-85.
6.Yu. I. Bogdanov, N. A. Bogdanova, S. I. Zemtsovskii Statistical modeling and analysis of data on time dependent breakdown in thin dielectric layers, Radiotekhnika i Electronika. 1995. N.12. P. 1874-1882.
7.M. Rosenblatt Remarks on some nonparametric estimates of a density function // Ann. Math. Statist. 1956. V.27. N3. P.832-837.
8.E. Parzen On the estimation of a probability density function and mode // Ann. Math. Statist. 1962. V.33. N3. P.1065-1076.
9.E. A. Nadaraya On Nonparametric Estimators of Probability Density and Regression, Teoriya Veroyatnostei i ee Primeneniya. 1965. V. 10. N. 1. P. 199-203.
10.E. A. Nadaraya Nonparametric Estimation of Probability Densities and Regression Curves. Kluwer Academic Publishers. Boston. 1989.
11.J.S. Marron An asymptotically efficient solution to the bandwidth problem of kernel density estimation. // Ann. Statist. 1985. V.13. №3. P.1011-1023.
12.J.S. Marron A Comparison of cross-validation techniques in density estimation // Ann. Statist. 1987. V.15. №1. P.152-162.
13.B.U. Park, J.S. Marron Comparison of data-driven bandwidth selectors // J. Amer. Statist. Assoc. 1990. V.85. №409. P.66-72.
14.S.J. Sheather, M.C. Jones A reliable data-based bandwidth selection method for kernel density estimation // J. Roy. Statist. Soc. B. 1991. V.53. №3. P.683-690.
15.A. I. Orlov Kernel Density Estimators in Arbitrary Spaces. in: Statistical Methods for Estimation and Testing Hypotheses. P. 68-75. Perm'. 1996 (in Russian).
16.A. I. Orlov Statistics of Nonnumerical Objects. Zavodskaya Laboratoriya. Diagnostika Materialov. 1990. V. 56. N. 3. P. 76-83.
17.N. N. Chentsov (Čensov) Evaluation of unknown distribution density based on observations. Doklady. 1962. V. 3. P.1559 - 1562.
18.N. N. Chentsov (Čensov) Statistical Decision Rules and Optimal Inference. Translations of Mathematical Monographs. American Mathematical Society. Providence. 1982 (Translated from Russian Edition. Nauka. Moscow. 1972).
19.G.S. Watson Density estimation by orthogonal series. Ann. Math. Statist. 1969. V.40. P.1496-1498.
20.G. Walter Properties of hermite series estimation of probability density. Ann. Statist. 1977. V.5. N6. P.1258-1264.
21.G. Walter, J. Blum Probability density estimation using delta sequences // Ann. Statist. 1979. V.7. №2. P. 328-340.
22.H. Cramer Mathematical Methods of Statistics, Princeton University Press, Princeton, 1946.
23.A. V. Kryanev Application of Modern Methods of Parametric and Nonparametric Statistics in Experimental Data Processing on Computers, MIPhI, Moscow, 1987 (in Russian).
24.R.A. Fisher On an absolute criterion for fitting frequency curves // Messenger of Mathematics. 1912. V.41. P.155-160.
25.R.A. Fisher On mathematical foundation of theoretical statistics // Phil. Trans. Roy. Soc. (London). Ser. A. 1922. V.222. P. 309 – 369.
26.M. Kendall and A. Stuart The Advanced Theory of Statistics. Inference and Relationship. U.K. Charles Griffin. London. 1979.
27.I. A. Ibragimov and R. Z. Has'minskii Statistical Estimation: Asymptotic Theory. Springer. New York. 1981.
28.S. A. Aivazyan and I. S. Enyukov, and L. D. Meshalkin Applied Statistics: Bases of Modelling and Initial Data Processing. Finansy i Statistika. Moscow. 1983 (in Russian).
29.L. D. Landau and E. M. Lifschitz Quantum Mechanics (Non-Relativistic Theory). 3rd ed. Pergamon Press. Oxford. 1991.
30.D. I. Blokhintsev Principles of Quantum Mechanics, Allyn & Bacon, Boston, 1964.
31.V. V. Balashov and V. K. Dolinov. Quantum mechanics. Moscow University Press. Moscow. 1982 (in Russian).
32.A. N. Tikhonov, A. B. Vasil`eva, A. G. Sveshnikov Differential Equations. Springer-Verlag. Berlin. 1985.
33.N. S. Bakhvalov, N.P. Zhidkov, G. M. Kobel'kov Numerical Methods. Nauka. Moscow. 1987 (in Russian).
34.N. N. Kalitkin Numerical Methods. Nauka. Moscow. 1978 (in Russian).
35.N. Bohr Selected Scientific Papers in Two Volumes. Nauka. Moscow. 1971 (in Russian).
36.P. J. Huber Robust statistics. Wiley. New York. 1981.
37.Yu. I. Bogdanov Fisher Information and a Nonparametric Approximation of the Distribution Density// Industrial Laboratory. Diagnostics of Materials. 1998. V. 64. N 7. P. 472-477. Translated from Zavodskaya Laboratoriya. Diagnostika Materialov. 1998. V. 64. N. 7. P. 54-60.
38.D. Bohm A suggested interpretation of the quantum theory in terms of “hidden” variables. Part I and II // Phys. Rev. 1952. V.85. P.166-179 and 180-193
39.I.J. Good, R.A. Gaskins Nonparametric roughness penalties for probability densities // Biometrika. 1971. V.58. №2. P. 255-277.
40.C. Gu, C. Qiu Smoothing spline density estimation: Theory. // Ann. Statist. 1993. V. 21. №1. P. 217 – 234.
41.P. Green Penalized likelihood // in Encyclopedia of Statistical Sciences. Update V.2. John Wiley. 1998.
About the Author
Yurii Ivanovich Bogdanov
Graduated with honours from the Physics Department of Moscow State University in 1986. Finished his post-graduate work at the same department in 1989. Received his PhD Degree in physics and mathematics in 1990. Scientific interests include statistical methods in fundamental and engineering researches. Author of more than 40 scientific publications (free electron lasers, applied statistics, statistical modeling for semiconductor manufacture). At present he is the head of the Statistical Methods Laboratory (OAO “Angstrem”, Moscow).
e-mail: bogdanov@angstrem.ru
Perspective on: Switching Quantum Reference Frames for Quantum Measurement
This is a Perspective on "Switching Quantum Reference Frames for Quantum Measurement" by Jianhao M. Yang, published in Quantum 4, 283 (2020).
By Pierre Martin-Dussaud (Aix Marseille Univ, Université de Toulon, CNRS, CPT, Marseille, France and Basic Research Community for Physics e.V.).
Quantum reference frames
In the last few years, the communities of quantum information and quantum gravity have been working together on the notion of $\textit{quantum reference frames}$. The notion itself is not new (similar ideas go back to 1967 at least [1,2]) but recent results shed new light on it. Let me recall a few important facts.
In its original version, quantum mechanics divides the world in two: the quantum system and the classical observer. However, we have good reasons to believe that the observer is nothing but a quantum system itself. In other words, the delimitation between the quantum and the classical realms is not fundamental but can be set between any two quantum systems. Thus, physics is about describing systems from the perspective of other systems, as advocated in the $\textit{relational interpretation}$ of quantum mechanics [3].
Such a view is very familiar in general relativity. To extract observational predictions from the theory, one has to specify an observer with respect to which time, positions, velocities and accelerations are measured. Although general relativity is given abstractly as a $\textit{perspective-neutral theory}$, any of its operational implementations requires adopting the view of some reference frame. Conversely, the various perspectives on a system relate to one another by changes of reference frame, which form a symmetry group. The absolute core of the theory is given by some abstract mathematical objects, invariant under the action of the symmetries.
Keeping in mind the horizon of quantum gravity, reference frames should themselves be considered as quantum systems. From this simple fact, hard questions arise, such as: what are the relevant transformations to switch from one quantum perspective to another? Or what does the world look like for a reference frame in a state of superposition?
Switching between perspectives
In [4], a concrete example is worked out that enables one to draw a few lessons. The model consists of three systems $A$, $B$, $C$, of which one considers the relative positions. The density matrix $\rho_{AB}^{(C)}$, describing the state of $A$ and $B$ from the $C$-perspective, is transformed into the state of $B$ and $C$ from the $A$-perspective, as
$$\rho_{BC}^{(A)} = \hat{S} \, \rho_{AB}^{(C)} \, \hat{S}^\dagger,$$
with the unitary operator
$$\hat{S} = \hat{\mathcal{P}}_{AC} \, e^{\frac{i}{\hbar} \hat{x}_A \hat{p}_B}.$$
To be precise, we should say that there is not a single notion of a jump from one perspective to another, as its definition relies upon an initial choice of preferred variables (here the positions). Among other conceptual takeaways, the formalism shows that entanglement is a reference-frame-dependent notion.
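To make the action of $\hat{S}$ concrete, here is a toy discretization of my own (not the construction used in the papers): a two-particle wavefunction on a periodic grid, with the sign convention that $e^{\frac{i}{\hbar} a \hat{p}}$ shifts $\psi(x)$ to $\psi(x+a)$, so that $e^{\frac{i}{\hbar} \hat{x}_A \hat{p}_B}$ maps $\psi(x_A, x_B)$ to $\psi(x_A, x_B + x_A)$, after which $\hat{\mathcal{P}}_{AC}$ relabels the $A$ coordinate as minus the $C$ coordinate.

```python
import numpy as np

# Toy sketch of S = P_AC * exp(i/hbar * x_A p_B) on a periodic grid.
n = 64
psi = np.zeros((n, n), dtype=complex)   # indexed as psi[x_A, x_B], relative to C
psi[10, 25] = 1.0                       # A sharply at 10, B at 25 (as seen by C)

# Step 1: conditional translation, psi(x_A, x_B) -> psi(x_A, x_B + x_A)
shifted = np.empty_like(psi)
for iA in range(n):
    shifted[iA, :] = np.roll(psi[iA, :], -iA)

# Step 2: parity swap P_AC: new state indexed as [x_C, x_B],
# psi'(x_C, x_B) = shifted(-x_C, x_B)
psi_new = shifted[(-np.arange(n)) % n, :]

iC, iB = np.unravel_index(np.argmax(np.abs(psi_new)), psi_new.shape)
print(iC, iB)   # 54 (= -10 mod 64) and 15 (= 25 - 10): C and B as seen from A
```

In this toy example, $C$ ends up at $-10$ and $B$ at $15$ relative to $A$, as expected from the classical change of origin.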
Also, a change between two reference frames induces different stories for the evolution of states. As stressed by von Neumann, there are two ways for a quantum state to evolve: the unitary time evolution, given by the Schrödinger equation, and the non-unitary measurement projection, aka the collapse of the wave function. The former amounts to finding the right transformation for the hamiltonian. The latter is maybe the most intriguing: if $C$ performs a measurement on $A$ and $B$ via an apparatus $E$, what does the same process look like from the perspective of $A$? The answer is both simple and striking: for $A$, it is $B$ and $C$ that are measured by $E$ (and the observable is different).
A first-principle approach
In the original paper [4], the results mentioned above are derived from operational considerations, more suited to concrete usage in quantum information. This approach has the drawback of remaining a bit opaque to further generalisations. Fortunately, another approach was proposed soon after to derive the same transformations from first principles [5]. The method draws inspiration from the theory of hamiltonian constrained systems, familiar to the quantum gravity community.
At the classical level, the equivalence of the many points of view entails a symmetry principle, which imposes constraints on the phase space of a theory. Then, choosing a reference frame is tantamount to fixing a gauge. The quantum picture can be recovered following two paths of quantisation: Dirac quantisation (quantise, then impose the constraints) and reduced quantisation (the other way around). Although both procedures are equivalent in simple cases, they differ in spirit.
On the one hand, the reduced quantisation can be understood as the quantisation from an internal perspective. For instance, it results in a Hilbert space $\mathcal{H}_{BC|A}$ from $A$-perspective.
On the other hand, Dirac quantisation is perspective-neutral. It results in some “agnostic” Hilbert space $\mathcal{H}_{phys}$. The latter recovers the usual description of quantum mechanics, from the point of view of an ideal classical observer, a ‘point of view from nowhere’. It also carries some redundancies that contain no physically meaningful information. Then, choosing a reference frame can be achieved as a $\textit{quantum symmetry reduction}$, which consists in a mapping between $\mathcal{H}_{phys}$ and $\mathcal{H}_{BC|A}$. Switching from one reference frame to another can now be better understood through the intermediate step of the perspective-neutral Hilbert space $\mathcal{H}_{phys}$.
A perspective-neutral measurement?
Despite the success of the first-principle approach of [5], not all of the results of [4] had been explained in a perspective-neutral context. In particular, the measurement process remained to be fit into it. This is the missing piece supplied by the recent work of Yang [6].
It is argued that the unitary time-evolution can be implemented at the perspective-neutral level of $\mathcal{H}_{phys}$, contrary to the measurement projection that can only be formulated after the quantum symmetry reduction to $\mathcal{H}_{BC|A}$ has been performed. There is one noticeable exception to that rule: when the measured variable is independent of the variables involved in the change of reference frames.
The results of [4], which show how the measurement process looks from different perspectives, are recovered. Moreover, it is shown how the projection operator should be transformed when the measurement apparatus is itself taken as the quantum reference frame.
In my opinion, future work should consist in generalising these ideas to systems more general than the toy model considered here. This may also make it easier to extract the conceptual lessons from the formalism. Finally, the hamiltonian phase-space approach may have to be superseded to express general changes of reference frames, related by Lorentz transformations or diffeomorphisms. This would bring us closer to quantum gravity.
References
[1] Y. Aharonov and L. Susskind, ``Charge Superselection Rule,'' Phys. Rev. 155 no. 5, (Mar., 1967) 1428–1431.
[2] B. S. DeWitt, ``Quantum Theory of Gravity. I. The Canonical Theory,'' Phys. Rev. 160 no. 5, (Aug., 1967) 1113–1148.
[3] C. Rovelli, ``Relational Quantum Mechanics,'' International Journal of Theoretical Physics 35 no. 8, (1996) 1637–1678, arXiv:quant-ph/9609002.
[4] F. Giacomini, E. Castro-Ruiz, and Č. Brukner, ``Quantum Mechanics and the Covariance of Physical Laws in Quantum Reference Frames,'' Nature Communications 10 no. 1, (Jan., 2019) 494.
[5] A. Vanrietvelde, P. A. Hoehn, F. Giacomini, and E. Castro-Ruiz, ``A Change of Perspective: Switching Quantum Reference Frames via a Perspective-Neutral Framework,'' Quantum 4 (Jan., 2020) 225, arXiv:1809.00556.
[6] J. M. Yang, ``Switching Quantum Reference Frames for Quantum Measurement,'' arXiv:1911.04903 [quant-ph] (Mar., 2020) , arXiv:1911.04903 [quant-ph].
Cited by
[1] Angel Ballesteros, Flaminia Giacomini, and Giulia Gubitosi, "The group structure of dynamical transformations between quantum reference frames", Quantum 5, 470 (2021).
CSIR-UGC (NET) Exam for Award of Junior Research Fellowship and Eligibility for Lectureship shall be a Single Paper Test having Multiple Choice Questions (MCQs). The question paper shall be divided into three parts.
Part 'A'
This part shall contain 20 Multiple Choice Questions (MCQs) of General Aptitude, with emphasis on logical reasoning, graphical analysis, analytical and numerical ability, quantitative comparison, series formation, puzzles, etc. Each question shall be of 2 Marks. The total marks allocated to this section shall be 30 out of 200. Candidates are required to answer any 15 questions.
Part 'B'
This part shall contain 25 Multiple Choice Questions (MCQs) generally covering the topics given in the Part 'A' (CORE) of the syllabus. Each question shall be of 3.5 Marks. The total marks allocated to this section shall be 70 out of 200. Candidates are required to answer any 20 questions.
Part 'C'
This part shall contain 30 questions from Part ‘B’ (Advanced) and Part ‘A’ that are designed to test a candidate's knowledge of scientific concepts and/or application of the scientific concepts. The questions shall be of analytical nature where a candidate is expected to apply the scientific knowledge to arrive at the solution to the given scientific problem. A candidate shall be required to answer any 20. Each question shall be of 5 Marks. The total marks allocated to this section shall be 100 out of 200.
• There will be negative marking @25% for each wrong answer.
1. Mathematical Methods of Physics
1. Classical Mechanics
2. Newton’s laws. Dynamical systems, Phase space dynamics, stability analysis. Central force motions. Two body Collisions - scattering in laboratory and Centre of mass frames. Rigid body dynamics- moment of inertia tensor. Non-inertial frames and pseudoforces. Variational principle. Generalized coordinates. Lagrangian and Hamiltonian formalism and equations of motion. Conservation laws and cyclic coordinates. Periodic motion: small oscillations, normal modes. Special theory of relativity-Lorentz transformations, relativistic kinematics and mass–energy equivalence.
3. Electromagnetic Theory
4. Electrostatics: Gauss’s law and its applications, Laplace and Poisson equations, boundary value problems. Magnetostatics: Biot-Savart law, Ampere's theorem. Electromagnetic induction. Maxwell's equations in free space and linear isotropic media; boundary conditions on the fields at interfaces. Scalar and vector potentials, gauge invariance. Electromagnetic waves in free space. Dielectrics and conductors. Reflection and refraction, polarization, Fresnel’s law, interference, coherence, and diffraction. Dynamics of charged particles in static and uniform electromagnetic fields.
5. Quantum Mechanics
6. Wave-particle duality. Schrödinger equation (time-dependent and time-independent). Eigenvalue problems (particle in a box, harmonic oscillator, etc.). Tunneling through a barrier. Wave-function in coordinate and momentum representations. Commutators and Heisenberg uncertainty principle. Dirac notation for state vectors. Motion in a central potential: orbital angular momentum, angular momentum algebra, spin, addition of angular momenta; Hydrogen atom. Stern-Gerlach experiment. Time-independent perturbation theory and applications. Variational method. Time dependent perturbation theory and Fermi's golden rule, selection rules. Identical particles, Pauli exclusion principle, spin-statistics connection.
7. Thermodynamic and Statistical Physics
8. Laws of thermodynamics and their consequences. Thermodynamic potentials, Maxwell relations, chemical potential, phase equilibria. Phase space, micro- and macro-states. Micro-canonical, canonical and grand-canonical ensembles and partition functions. Free energy and its connection with thermodynamic quantities. Classical and quantum statistics. Ideal Bose and Fermi gases. Principle of detailed balance. Blackbody radiation and Planck's distribution law.
9. Electronics and Experimental Methods
1. Mathematical Methods of Physics
Green's function. Partial differential equations (Laplace, wave and heat equations in two and three dimensions). Elements of computational techniques: roots of functions, interpolation, extrapolation, integration by the trapezoid and Simpson's rules, solution of first-order differential equations using the Runge-Kutta method (a minimal sketch of two of these techniques follows this topic list). Finite difference methods. Tensors. Introductory group theory: SU(2), O(3).
1. Classical Mechanics
3. Electromagnetic Theory
5. Quantum Mechanics
7. Thermodynamic and Statistical Physics
9. Electronics and Experimental Methods
High frequency devices (including generators and detectors).
11. Atomic & Molecular Physics
13. Condensed Matter Physics
15. Nuclear and Particle Physics
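As an aside, the computational-techniques item under Mathematical Methods above names Simpson's rule and the Runge-Kutta method; here is a minimal illustrative sketch of both (a toy example, not exam material):

```python
import numpy as np

# Composite Simpson's rule on [a, b] with an even number of panels n
def simpson(f, a, b, n=100):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# One classical fourth-order Runge-Kutta step for y' = f(t, y)
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(simpson(np.sin, 0, np.pi))   # ~2.0 (exact value: 2)

# Integrate y' = -y from y(0) = 1 to t = 1; the exact answer is exp(-1)
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, np.exp(-1.0))
```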
Yearly pattern of questions asked from different subjects in the last 5 papers conducted by CSIR-HRD of NET-JRF for PHYSICAL SCIENCES (each exam column gives the number of 3½-mark and 5-mark questions):

| Topic | Dec 2015 (3½M / 5M) | June 2016 (3½M / 5M) | Dec 2016 (3½M / 5M) | June 2017 (3½M / 5M) | Dec 2017 (3½M / 5M) |
|---|---|---|---|---|---|
| Mathematical Methods of Physics | 5 / 3 | 5 / 4 | 5 / 4 | 4 / 5 | 5 / 4 |
| Classical Mechanics | 4 / 4 | 3 / 3 | 5 / 3 | 5 / 3 | 4 / 3 |
| Electromagnetic Theory | 4 / 3 | 5 / 4 | 5 / 4 | 4 / 4 | 4 / 4 |
| Quantum Mechanics | 4 / 4 | 4 / 4 | 3 / 4 | 5 / 3 | 4 / 3 |
| Thermodynamic and Statistical Physics | 4 / 3 | 4 / 2 | 4 / 3 | 3 / 3 | 3 / 4 |
| Electronics | 4 / 2 | 4 / 2 | 3 / 2 | 3 / 3 | 3 / 2 |
| Experimental Methods | - / 1 | - / 1 | - / 1 | - / 1 | - / 1 |
| Atomic & Molecular Physics | - / 3 | - / 3 | - / 3 | - / 3 | 1 / 4 |
| Condensed Matter Physics | - / 4 | - / 3 | - / 3 | - / 3 | 1 / 3 |
| Nuclear and Particle Physics | - / 3 | - / 4 | - / 3 | - / 3 | - / 2 |
| Total no. of questions | 25 / 30 | 25 / 30 | 25 / 30 | 25 / 30 | 25 / 30 |
General relativity
From Wikipedia, the free encyclopedia
General relativity, also known as the general theory of relativity and Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second order partial differential equations.
Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, go beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far all tests of general relativity have been in accord with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe, have provided the modern framework for cosmology, and led to the discovery of the Big Bang and the cosmic microwave background. Alternatives to general relativity have been proposed, but general relativity has remained the simplest theory consistent with experimental data. However, it remains unknown how to reconcile general relativity with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity, and how gravity can be unified with the three non-gravitational forces: the strong, weak, and electromagnetic forces.
Einstein's theory has astrophysical implications, including the prediction of black holes, regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to involve stellar black holes and supermassive black holes, respectively. The theory also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the basis of cosmological models of an expanding universe.
Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity.[3] These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present.[4] A version of non-Euclidean geometry, called Riemannian Geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity.[5] This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913.[6]
The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes.[7] In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption.[8] By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state.[9] Einstein later declared the cosmological constant the biggest blunder of his life.[10]
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"),[11] and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919,[12] instantly making Einstein famous.[13] Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity.[14] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations.[15] Ever more precise solar system tests confirmed the theory's predictive power,[16] and relativistic cosmology also became amenable to direct observational tests.[17]
General relativity has acquired a reputation as a theory of extraordinary beauty.[2][18][19] Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory.[20] Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.[21]
From classical mechanics to general relativity
Geometry of Newtonian gravity
Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties.[26] A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration.[27]
Relativistic generalization
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent.[32] In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure[33] or conformal geometry.
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.[34]
Einstein's equations[edit]
Einstein's field equations
$$G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \kappa\,T_{\mu\nu}$$
On the left-hand side is the Einstein tensor, $G_{\mu\nu}$, which is symmetric and a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric. In particular,
$$R = g^{\mu\nu}R_{\mu\nu}$$
is the curvature scalar, and the Ricci tensor itself is related to the more general Riemann curvature tensor by $R_{\mu\nu} = {R^\alpha}_{\mu\alpha\nu}$.
On the right-hand side, $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation.[39] Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be $\kappa = 8\pi G/c^{4}$, where $G$ is the gravitational constant and $c$ the speed of light in vacuum.[40] When there is no matter present, so that the energy–momentum tensor vanishes, the result is the vacuum Einstein equations,
$$R_{\mu\nu} = 0.$$
The geodesic equation is:
$$\frac{d^{2}x^{\mu}}{ds^{2}} + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{ds}\,\frac{dx^{\beta}}{ds} = 0,$$
where $s$ is a scalar parameter of motion (e.g. the proper time) and $\Gamma^{\mu}_{\alpha\beta}$ are the Christoffel symbols determined by the metric.
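The value of $\kappa$ quoted above can be motivated by a standard weak-field check (a textbook sketch, not part of the article text above): for slowly moving matter and a nearly flat metric, the 00-component of the field equations must reduce to Newton's law of gravity in its Poisson form,
$$g_{00} \approx -\left(1 + \frac{2\Phi}{c^{2}}\right), \qquad \nabla^{2}\Phi = 4\pi G\rho \quad\Longleftrightarrow\quad \kappa = \frac{8\pi G}{c^{4}},$$
where $\Phi$ is the Newtonian potential and $\rho$ the mass density.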
Total force in general relativity[edit]
In general relativity, the effective gravitational potential energy of an object of mass m rotating around a massive central body M is given by[41][42]
$$V(r) = -\frac{GMm}{r} + \frac{L^{2}}{2mr^{2}} - \frac{GML^{2}}{mc^{2}r^{3}}.$$
A conservative total force can then be obtained as[citation needed]
$$F(r) = -\frac{GMm}{r^{2}} + \frac{L^{2}}{mr^{3}} - \frac{3GML^{2}}{mc^{2}r^{4}},$$
where L is the angular momentum. The first term represents the Newtonian force of gravity, described by the inverse-square law. The second term represents the centrifugal force in circular motion. The third term represents the relativistic effect.
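As a consistency check (not in the original text), the force follows from the potential as $F = -dV/dr$, differentiating term by term:
$$-\frac{d}{dr}\left(-\frac{GMm}{r}\right) = -\frac{GMm}{r^{2}},\qquad -\frac{d}{dr}\left(\frac{L^{2}}{2mr^{2}}\right) = \frac{L^{2}}{mr^{3}},\qquad -\frac{d}{dr}\left(-\frac{GML^{2}}{mc^{2}r^{3}}\right) = -\frac{3GML^{2}}{mc^{2}r^{4}}.$$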
Alternatives to general relativity[edit]
Definition and basic applications[edit]
Definition and basic properties[edit]
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems.[48] Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers.[49] Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.[50]
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly.[52] Nevertheless, a number of exact solutions are known, although only a few have direct physical applications.[53] The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe,[54] and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos.[55] Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).[56]
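For concreteness, the Schwarzschild solution can be written in Schwarzschild coordinates (a standard textbook form, added here as a sketch rather than quoted from the article) as
$$ds^{2} = -\left(1 - \frac{2GM}{c^{2}r}\right)c^{2}\,dt^{2} + \left(1 - \frac{2GM}{c^{2}r}\right)^{-1}dr^{2} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\,d\varphi^{2}\right),$$
where $M$ is the mass of the central body; the other solutions named above generalize this by adding charge, rotation, or a cosmological constant.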
Consequences of Einstein's theory[edit]
Gravitational time dilation and frequency shift[edit]
Gravitational redshift has been measured in the laboratory[63] and using astronomical observations.[64] Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks,[65] while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS).[66] Tests in stronger gravitational fields are provided by the observation of binary pulsars.[67] All results are in agreement with general relativity.[68] However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.[69]
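The GPS effect mentioned above is easy to estimate numerically. The following minimal Python sketch uses assumed standard parameter values (not taken from this article) and ignores Earth's rotation and orbital eccentricity; it combines the gravitational blueshift of a satellite clock with its special-relativistic time dilation:

```python
# Rough estimate of the net daily relativistic clock offset for a GPS
# satellite. Parameter values are assumed standard figures; the estimate
# ignores Earth's rotation and orbital eccentricity.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
r_ground = 6.371e6    # mean Earth radius, m
r_orbit = 2.6561e7    # GPS orbital radius (~20,200 km altitude), m

# Gravitational term: the satellite clock runs fast relative to the ground.
grav = GM * (1.0 / r_ground - 1.0 / r_orbit) / c**2

# Kinematic term: orbital speed v = sqrt(GM/r) slows the satellite clock.
kinematic = -GM / (2.0 * r_orbit * c**2)

us_per_day = 86400 * 1e6
print(f"gravitational: +{grav * us_per_day:.1f} us/day")        # ~ +45.7
print(f"kinematic:     {kinematic * us_per_day:.1f} us/day")    # ~ -7.2
print(f"net:           +{(grav + kinematic) * us_per_day:.1f} us/day")  # ~ +38.5
```

The net offset of roughly +38 microseconds per day is why GPS satellite clocks are deliberately tuned before launch.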
Light deflection and gravitational time delay[edit]
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity.[71] As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion),[72] several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light,[73] the angle of deflection resulting from such calculations is only half the value given by general relativity.[74]
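As a worked number (solar values assumed here, not given in the text above), the full general-relativistic deflection for a ray grazing the Sun's limb is
$$\delta\theta = \frac{4GM_{\odot}}{c^{2}R_{\odot}} \approx \frac{4 \times 1.33\times10^{20}\ \mathrm{m^{3}\,s^{-2}}}{(3.00\times10^{8}\ \mathrm{m\,s^{-1}})^{2}\times 6.96\times10^{8}\ \mathrm{m}} \approx 8.5\times10^{-6}\ \mathrm{rad} \approx 1.75'',$$
twice the half-value obtained from the "Newtonian" argument mentioned above.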
Gravitational waves[edit]
In 1916, Albert Einstein predicted the existence of gravitational waves:[77][78] ripples in the metric of spacetime that propagate at the speed of light. These are analogous to electromagnetic waves, one of several parallels between weak-field gravity and electromagnetism. On February 11, 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes.[79][80][81]
The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right).[82] Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by $10^{-21}$ or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.[83]
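The ring picture is easy to reproduce numerically. The following minimal Python sketch (amplitude and frequency are illustrative values, not from the text) gives the leading-order response of free test particles to a plus-polarized linearized wave travelling along the z-axis:

```python
import math

# Illustrative values: a typical LIGO-band strain and frequency.
h = 1e-21                    # dimensionless strain amplitude
omega = 2 * math.pi * 100.0  # angular frequency of a 100 Hz wave

def ring_positions(t, n=8, radius=1.0):
    """Return the (x, y) positions of n particles initially on a circle."""
    points = []
    for k in range(n):
        phi = 2 * math.pi * k / n
        x0, y0 = radius * math.cos(phi), radius * math.sin(phi)
        # Geodesic deviation: stretched along x while squeezed along y,
        # and vice versa half a period later.
        x = x0 * (1 + 0.5 * h * math.cos(omega * t))
        y = y0 * (1 - 0.5 * h * math.cos(omega * t))
        points.append((x, y))
    return points

print(ring_positions(0.0)[:2])
```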
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space[84] or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves.[85] But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.[86]
Orbital effects and the relativity of direction[edit]
Precession of apsides[edit]
Newtonian (red) vs. Einsteinian orbit (blue) of a lone planet orbiting a star. The influence of other planets is ignored.
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass)[88] or the much more general post-Newtonian formalism.[89] It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations).[90] Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth),[91] as well as in binary pulsar systems, where it is larger by five orders of magnitude.[92]
In general relativity the perihelion shift $\sigma$, expressed in radians per revolution, is approximately given by[93]
$$\sigma = \frac{24\pi^{3}a^{2}}{T^{2}c^{2}(1-e^{2})},$$
where $a$ is the orbital semi-major axis, $T$ is the orbital period, $e$ is the orbital eccentricity, and $c$ is the speed of light.
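Plugging in published orbital elements for Mercury (the values below are assumed standard figures, not taken from the text) recovers the famous result. A minimal Python sketch:

```python
import math

# Orbital elements for Mercury (assumed standard values).
a = 5.791e10        # semi-major axis, m
T = 7.6005e6        # orbital period, s (~87.97 days)
e = 0.20563         # eccentricity
c = 2.99792458e8    # speed of light, m/s

# Perihelion shift per revolution, in radians.
sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))

# Convert to arcseconds per Julian century.
revs_per_century = 100 * 365.25 * 86400 / T
arcsec_per_century = sigma * revs_per_century * math.degrees(1) * 3600
print(f"{sigma:.3e} rad/rev -> {arcsec_per_century:.1f} arcsec/century")  # ~43
```

The output, roughly 43 arcseconds per century, is the anomalous advance that Einstein explained in 1915.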
Orbital decay[edit]
Orbital decay for PSR 1913+16: time shift (in s), tracked over 30 years (2006).[94]
Orbital decay for PSR J0737−3039: time shift (in s), tracked over 16 years (2021).[95]
The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR 1913+16, which they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics.[97] Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, in which both stars are pulsars[98] and which was most recently reported to be in agreement with general relativity in 2021, after 16 years of observations.[95]
Geodetic precession and frame-dragging[edit]
Several relativistic effects are directly related to the relativity of direction.[99] One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport").[100] For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging.[101] More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.[102][103]
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable.[104] Such effects can again be tested through their influence on the orientation of gyroscopes in free fall.[105] Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction.[106] The Mars Global Surveyor probe has also been used for such a test.[107]
Neo-Lorentzian Interpretation[edit]
Prominent physicists who support neo-Lorentzian explanations of general relativity include Franco Selleri and Antony Valentini.[108]
Astrophysical applications[edit]
Gravitational lensing[edit]
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing.[109] Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.[110] The earliest example was discovered in 1979;[111] since then, more than a hundred gravitational lenses have been observed.[112] Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.[113]
Gravitational-wave astronomy[edit]
Artist's impression of the space-borne gravitational wave detector LISA
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research.[115] Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO.[116] Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the $10^{-9}$ to $10^{-6}$ Hz frequency range, which originate from binary supermassive black holes.[117] A European space-based detector, eLISA / NGO, is currently under development,[118] with a precursor mission (LISA Pathfinder) having launched in December 2015.[119]
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum.[120] They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string.[121] In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.[79][80][81]
Black holes and other compact objects[edit]
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars.[122] Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center,[123] and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.[124]
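The mass-to-radius criterion can be made concrete with the Schwarzschild radius (a standard formula, added here as a sketch with assumed values): a body becomes a black hole roughly when it is compressed inside
$$r_{s} = \frac{2GM}{c^{2}} \approx 2.95\ \mathrm{km}\times\frac{M}{M_{\odot}},$$
about 3 km for a solar mass and about 9 mm for an Earth mass.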
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation.[125] Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars.[126] In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.[127] General relativity plays a central role in modelling all these phenomena,[128] and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.[129]
where $g_{\mu\nu}$ is the spacetime metric.[132] Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions,[133] allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase.[134] Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation,[135] further observational data can be used to put the models to the test.[136] Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis,[137] the large-scale structure of the universe,[138] and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.[139]
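For reference, the expansion in these models is governed by the first Friedmann equation for the scale factor $a(t)$ (standard form, assumed here rather than quoted from the article):
$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},$$
so that fixing a small number of parameters, such as the mean density $\rho$ and the spatial curvature $k$, pins down the whole expansion history.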
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly.[140] There is no generally accepted description of this new kind of matter, within the framework of known particle physics[141] or otherwise.[142] Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.[143]
An inflationary phase,[144] an additional phase of strongly accelerated expansion at cosmic times of around $10^{-33}$ seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation.[145] Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario.[146] However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations.[147] An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the Big Bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed[148] (cf. the section on quantum gravity, below).
Exotic solutions: time travel, warp drives[edit]
Kurt Gödel showed[149] that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, an assumption beyond those of standard general relativity, to rule out time travel.
Some exact solutions of general relativity, such as the Alcubierre drive, provide examples of warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability.[150]
Advanced concepts[edit]
Asymptotic symmetries[edit]
The spacetime symmetry group for special relativity is the Poincaré group, a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in general relativity. A tractable case is to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of the flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner[151] and Rainer K. Sachs[152] addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group — not even the assumption that such a group exists. Then, after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group, and that the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field, at least at spatial infinity.

The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies that general relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, can be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries.[153]
Causal structure and global geometry[edit]
Penrose–Carter diagram of an infinite Minkowski universe
There are other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon).[162] Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semi-classical radiation known as Unruh radiation.[163]
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values.[164] Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole,[165] or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole.[166] The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.[167]
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization.[168] The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage[169] and also at the beginning of a wide class of expanding universes.[170] However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture).[171] The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.[172]
Evolution equations[edit]
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism.[174] These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified.[175] Such formulations of Einstein's field equations are the basis of numerical relativity.[176]
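One common way of writing the 3+1 split (a standard form, added here as a sketch) decomposes the metric into a spatial metric $\gamma_{ij}$ evolving in time, with the lapse $N$ and the shift $\beta^{i}$ encoding how successive spatial slices are glued together:
$$ds^{2} = -N^{2}\,dt^{2} + \gamma_{ij}\left(dx^{i} + \beta^{i}\,dt\right)\left(dx^{j} + \beta^{j}\,dt\right).$$
In the ADM formalism, $\gamma_{ij}$ and its conjugate momentum are the dynamical variables, while $N$ and $\beta^{i}$ reflect the freedom to choose coordinates.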
Global and quasi-local quantities[edit]
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass)[178] or suitable symmetries (Komar mass).[179] If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity.[180] Just as in classical physics, it can be shown that these masses are positive.[181] Corresponding global definitions exist for momentum and angular momentum.[182] There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.[183]
Relationship with quantum theory[edit]
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other.[184] However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime[edit]
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth.[185] In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime.[186] Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time.[187] As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.[188]
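The temperature of that blackbody spectrum is given by Hawking's formula (standard form; the worked solar-mass value is assumed here, not taken from the text):
$$T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}} \approx 6\times10^{-8}\ \mathrm{K}\times\frac{M_{\odot}}{M},$$
so a stellar-mass black hole is far colder than the cosmic microwave background, and its evaporation is utterly negligible at the present epoch.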
Quantum gravity[edit]
The demand for consistency between a quantum description of matter and a geometric description of spacetime,[189] as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics.[190] Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.[191][192]
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems.[193] Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity.[194] At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").[195]
Simple spin network of the type used in loop quantum gravity
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects.[196] The theory promises to be a unified description of all particles and interactions, including gravity;[197] the price to pay is unusual features such as six extra dimensions of space in addition to the usual three.[198] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[199] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[200]
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff.[201] However, with the introduction of what are now known as Ashtekar variables,[202] this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.[203]
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced,[204] there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus,[191] dynamical triangulations,[205] causal sets,[206] twistor models[207] or the path integral based models of quantum cosmology.[208]
Current status[edit]
Observation of gravitational waves from binary black hole merger GW150914
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete.[210] The problem of quantum gravity and the question of the reality of spacetime singularities remain open.[211] Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.[212] Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations,[213] while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes).[214] In February 2016, it was announced that the Advanced LIGO team had directly detected gravitational waves on September 14, 2015.[81][215][216] A century after its introduction, general relativity remains a highly active area of research.[217]
See also[edit]
3. ^ O'Connor, J.J.; Robertson, E.F. (May 1996). "General relativity". History Topics: Mathematical Physics Index, Scotland: School of Mathematics and Statistics, University of St. Andrews, archived from the original on 4 February 2015, retrieved 4 February 2015
6. ^ Grossmann for the mathematical part and Einstein for the physical part (1913). Entwurf einer verallgemeinerten Relativitätstheorie und einer Theorie der Gravitation (Outline of a Generalized Theory of Relativity and of a Theory of Gravitation), Zeitschrift für Mathematik und Physik, 62, 225–261. English translation
8. ^ Einstein 1917, cf. Pais 1982, ch. 15e
11. ^ Pais 1982, pp. 253–254
12. ^ Kennefick 2005, Kennefick 2007
13. ^ Pais 1982, ch. 16
14. ^ Thorne 2003, p. 74
17. ^ Section Cosmology and references therein; the historical development is in Overbye 1999
18. ^ Wald 1984, p. 3
20. ^ Chandrasekhar 1984, p. 6
21. ^ Engler 2002
22. ^ The following exposition re-traces that of Ehlers 1973, sec. 1
23. ^ Al-Khalili, Jim (26 March 2021). "Gravity and Me: The force that shapes our lives". Retrieved 9 April 2021.
24. ^ Arnold 1989, ch. 1
25. ^ Ehlers 1973, pp. 5f
26. ^ Will 1993, sec. 2.4, Will 2006, sec. 2
27. ^ Wheeler 1990, ch. 2
29. ^ Ehlers 1973, pp. 10f
33. ^ Ehlers 1973, sec. 2.3
34. ^ Ehlers 1973, sec. 1.4, Schutz 1985, sec. 5.1
40. ^ Kenyon 1990, sec. 7.4
41. ^ Weinberg, Steven (1972). Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. John Wiley. ISBN 978-0-471-92567-5.
42. ^ Cheng, Ta-Pei (2005). Relativity, Gravitation and Cosmology: a Basic Introduction. Oxford and New York: Oxford University Press. ISBN 978-0-19-852957-6.
45. ^ At least approximately, cf. Poisson 2004a
46. ^ Wheeler 1990, p. xi
47. ^ Wald 1984, sec. 4.4
48. ^ Wald 1984, sec. 4.1
50. ^ section 5 in ch. 12 of Weinberg 1972
51. ^ Introductory chapters of Stephani et al. 2003
54. ^ Chandrasekhar 1983, ch. 3,5,6
55. ^ Narlikar 1993, ch. 4, sec. 3.3
57. ^ Lehner 2002
58. ^ For instance Wald 1984, sec. 4.4
59. ^ Will 1993, sec. 4.1 and 4.2
60. ^ Will 2006, sec. 3.2, Will 1993, ch. 4
67. ^ Stairs 2003 and Kramer 2004
69. ^ Ohanian & Ruffini 1994, pp. 164–172
72. ^ Blanchet 2006, sec. 1.3
76. ^ Will 1993, sec. 7.1 and 7.2
77. ^ Einstein, A (22 June 1916). "Näherungsweise Integration der Feldgleichungen der Gravitation". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin (part 1): 688–696. Bibcode:1916SPAW.......688E. Archived from the original on 21 March 2019. Retrieved 12 February 2016.
78. ^ Einstein, A (31 January 1918). "Über Gravitationswellen". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin (part 1): 154–167. Bibcode:1918SPAW.......154E. Archived from the original on 21 March 2019. Retrieved 12 February 2016.
79. ^ a b Castelvecchi, Davide; Witze, Alexandra (11 February 2016). "Einstein's gravitational waves found at last". Nature News. doi:10.1038/nature.2016.19361. S2CID 182916902. Retrieved 11 February 2016.
83. ^ For example Jaranowski & Królak 2005
84. ^ Rindler 2001, ch. 13
85. ^ Gowdy 1971, Gowdy 1974
88. ^ Rindler 2001, sec. 11.9
89. ^ Will 1993, pp. 177–181
92. ^ Kramer et al. 2006
93. ^ Dediu, Magdalena & Martín-Vide 2015, p. 141.
95. ^ a b Kramer, M.; Stairs, I. H.; Manchester, R. N.; Wex, N.; Deller, A. T.; Coles, W. A.; Ali, M.; Burgay, M.; Camilo, F.; Cognard, I.; Damour, T. (13 December 2021). "Strong-Field Gravity Tests with the Double Pulsar". Physical Review X. 11 (4): 041050. arXiv:2112.06795. Bibcode:2021PhRvX..11d1050K. doi:10.1103/PhysRevX.11.041050. ISSN 2160-3308. S2CID 245124502.
98. ^ Kramer 2004
101. ^ Bertotti, Ciufolini & Bender 1987, Nordtvedt 2003
102. ^ Kahn 2007
106. ^ Ciufolini & Pavlis 2004, Ciufolini, Pavlis & Peron 2006, Iorio 2009
107. ^ Iorio 2006, Iorio 2010
108. ^ Einstein, Relativity, and Absolute Simultaneity. London: Routledge. 2007. ISBN 978-1134003891.
111. ^ Walsh, Carswell & Weymann 1979
113. ^ Roulet & Mollerach 1997
114. ^ Narayan & Bartelmann 1997, sec. 3.7
115. ^ Barish 2005, Bartusiak 2000, Blair & McNamara 1997
116. ^ Hough & Rowan 2000
118. ^ Danzmann & Rüdiger 2003
119. ^ "LISA pathfinder overview". ESA. Retrieved 23 April 2012.
120. ^ Thorne 1995
121. ^ Cutler & Thorne 2002
122. ^ Miller 2002, lectures 19 and 21
123. ^ Celotti, Miller & Sciama 1999, sec. 3
124. ^ Springel et al. 2005 and the accompanying summary Gnedin 2005
125. ^ Blandford 1987, sec. 8.2.4
130. ^ Dalal et al. 2006
131. ^ Barack & Cutler 2004
132. ^ Einstein 1917; cf. Pais 1982, pp. 285–288
133. ^ Carroll 2001, ch. 2
135. ^ E.g. with WMAP data, see Spergel et al. 2003
138. ^ Lahav & Suto 2004, Bertschinger 1998, Springel et al. 2005
146. ^ Spergel et al. 2007, sec. 5,6
148. ^ Brandenberger 2008, sec. 2
149. ^ Gödel 1949
150. ^ Finazzi, Stefano; Liberati, Stefano; Barceló, Carlos (15 June 2009). "Semiclassical instability of dynamical warp drives". Physical Review D. 79 (12): 124017. doi:10.1103/PhysRevD.79.124017.
151. ^ Bondi, H.; Van der Burg, M.G.J.; Metzner, A. (1962). "Gravitational waves in general relativity: VII. Waves from axisymmetric isolated systems". Proceedings of the Royal Society of London A. A269 (1336): 21–52. Bibcode:1962RSPSA.269...21B. doi:10.1098/rspa.1962.0161. S2CID 120125096.
152. ^ Sachs, R. (1962). "Asymptotic symmetries in gravitational theory". Physical Review. 128 (6): 2851–2864. Bibcode:1962PhRv..128.2851S. doi:10.1103/PhysRev.128.2851.
153. ^ Strominger, Andrew (2017). "Lectures on the Infrared Structure of Gravity and Gauge Theory". arXiv:1703.05448 [hep-th]. ...redacted transcript of a course given by the author at Harvard in spring semester 2016. It contains a pedagogical overview of recent developments connecting the subjects of soft theorems, the memory effect and asymptotic symmetries in four-dimensional QED, nonabelian gauge theory and gravity with applications to black holes. To be published Princeton University Press, 158 pages.
160. ^ Bekenstein 1973, Bekenstein 1974
162. ^ Narlikar 1993, sec. 4.4.4, 4.4.5
169. ^ Namely when there are trapped null surfaces, cf. Penrose 1965
170. ^ Hawking 1966
173. ^ Hawking & Ellis 1973, sec. 7.1
177. ^ Misner, Thorne & Wheeler 1973, §20.4
178. ^ Arnowitt, Deser & Misner 1962
180. ^ For a pedagogical introduction, see Wald 1984, sec. 11.2
182. ^ Townsend 1997, ch. 5
186. ^ Wald 1994, Birrell & Davies 1984
188. ^ Wald 2001, ch. 3
190. ^ Schutz 2003, p. 407
191. ^ a b Hamber 2009
192. ^ A timeline and overview can be found in Rovelli 2000
193. ^ 't Hooft & Veltman 1974
194. ^ Donoghue 1995
198. ^ Green, Schwarz & Witten 1987, sec. 4.2
199. ^ Weinberg 2000, ch. 31
200. ^ Townsend 1996, Duff 1996
201. ^ Kuchař 1973, sec. 3
204. ^ Isham 1994, Sorkin 1997
205. ^ Loll 1998
206. ^ Sorkin 2005
207. ^ Penrose 2004, ch. 33 and refs therein
208. ^ Hawking 1987
209. ^ Ashtekar 2007, Schwarz 2007
211. ^ section Quantum gravity, above
212. ^ section Cosmology, above
213. ^ Friedrich 2005
217. ^ See, e.g., the Living Reviews in Relativity journal.
Further reading[edit]
Popular books[edit]
Beginning undergraduate textbooks[edit]
Advanced undergraduate textbooks[edit]
Graduate textbooks[edit]
Specialists' books[edit]
Journal articles[edit]
External links[edit]
• Courses
• Lectures
• Tutorials
06 December 2021
"Authenticity" is Not a Useful Criterion
One of the complaints that we most often see in response to the Chinese origins thesis is that it sounds implausible, or as one senior Japanese academic recently said, it sounds "unnatural".
Underlying this attitude, I think, is the idea that the Chinese passively received Buddhism. They translated sutras into Chinese and never looked back. And this implies that authentic Chinese Buddhism cannot be wholly located in China. For example, even some Chinese Buddhists (at least amongst the elite monks of Chang'an and Luoyang) seem to have believed that authenticity was predicated on the ideas, attitudes, and practices coming from the West (meaning Greater India or Central Asia).
In reality, however, many Chinese were not only literate but were intellectuals, philosophers, historians, and so on.
There is ample evidence that Chinese Buddhists began composing their own texts almost immediately. And why not? Indians had been doing so for some centuries, and continued to do so long after contact with China was established. We know, partly from Chinese translations, that Mahāyāna texts in particular were never finished. While there was life in Indian Buddhism, Buddhists constantly tinkered with their texts. The Prajñāpāramitā literature was no exception. Indeed, one text of about 8,000 lines was transformed into a series of much longer texts (roughly 2 to 3 times more material was added in each case). And all continued to evolve over time.
I see no a priori reason why a Chinese person could not attain liberation and write about it. Or even imagine what it might be like and write about that. Or contribute a culturally appropriate version of the Pure Land, or any number of other possibilities.
Another common complaint is that there is no precedent for what we say happened to the Heart Sutra. In this vein one of the reviewers of my forthcoming articles has pointed me to two articles by (fellow Kiwi) Michael Radich.
Radich, Michael. "On the Sources, Style and Authorship of Chapters of the Synoptic Suvarṇaprabhāsa-sūtra T 664 Ascribed to Paramārtha (Part 1)." Annual Report of The International Research Institute for Advanced Buddhology at Soka University 17 (2014): 207-244.
———. "Tibetan Evidence for the Sources of Chapters of the Synoptic Suvarṇaprabhāsottama-sūtra T664 Ascribed to Paramārtha." Buddhist Studies Review 32 (2015): 245-270.
In these two papers, Radich explores the origins of four chapters that were added to the Suvarṇaprabhāsa Sūtra (Suv, or Golden Light Sutra). They are not found in the first translation but appear in the translation attributed to Paramārtha (now lost) and in a subsequent text that combines material from various sources, including Paramārtha's translation.
On examination, the added chapters appear to have been composed in China on the basis of existing Chinese translations.
As I read Radich, this is not an example of the practice of chāo 抄 or extract-making, despite being based on existing texts. Recall that the idea that the Heart Sutra is a chāo jīng 抄經 (digest text) is now fairly well established. Rather, what seems to have happened with Suv is that someone, possibly Paramārtha, decided it needed something more and went about creating it in Chinese. Moreover, when the Tibetan sources are examined in detail, we find some supporting evidence for this conclusion.
This procedure is very much how Indian Buddhists composed their texts as well. The typical Buddhist text is modular: it is made from a combination of pre-existing elements, from lines and phrases to whole chapters. It was not uncommon for independently circulating smaller works to be absorbed into a larger work, e.g. the Avalokiteśvara chapter of the Saddharmapuṇḍarikā Sūtra.
Another complaint is that there is very little evidence of Chinese texts being translated into Sanskrit. But it did happen. And one of the people most often associated with this is none other than Xuanzang. We can be fairly sure, for example, that the Awakening of Faith in Mahāyāna was a Chinese text, translated into Sanskrit by Xuanzang.
By the mid-seventh century, i.e. around the time the Heart Sutra was composed, Chinese Buddhists were exposed to a number of co-existing literary traditions and cultures. At that time Chang'an was not only the largest city in the world but, because of the Silk Road, it was the most cosmopolitan. So the reception of Buddhism in China was vastly more complex than a simple interaction of Buddhism with Confucianism and Daoism.
Chinese literary culture had been evolving for almost 2000 years by this time. And not only that but the fine arts were focussed on poetry and calligraphy. The idea that Chinese Buddhists would not compose their own texts looks less plausible to me than that they did. Of course they did.
Buddhism is not a revealed religion. Our texts are sacred but not sacrosanct. We change them when there is a need. And if all else fails we simply write new texts and we invent ways to authenticate them. A great deal can rest on the charisma of the individual.
Rather than thinking of Chinese Buddhism as a bodhi tree planted in the alluvial soils of the Yellow River, we should think of it as a recipe passed on through a series of friendly strangers who each adapted it to their local ingredients and tastes, to create a new dish with the same name. Culinary examples abound: pizza as served by places like Pizza Hut or Dominos is only loosely (theoretically even) connected to the traditional dish enjoyed by working people in Naples. Which is fine. It's not like I am or want to be Neapolitan. I happen to like grilled cheese on toast with a dash of tomato sauce. If I want pineapple with that, it's nobody's business but mine. To say that I can't call it pizza or that my cheese on toast is somehow inauthentic would be to miss the point. It's not like anyone calls humans and chimps inauthentic because they evolved to be different from their last common ancestor.
I recently happened to read about an idea attributed to Derek Parfit, although he apparently attributed it back to Buddhism. The idea is this: continuity, not identity, over time is what matters. This is quite similar to my conclusion after reflecting on the Ship of Theseus conundrum. Identity (i.e. sameness) over time doesn't really exist because we change. But change does not preclude continuity.
If this is true, then it suggests that we have missed the point about the evolution of Buddhist texts in China, but also we have missed something important about the notion of authenticity in China.
I think we can say that for a Chinese person to consider a non-Chinese text authentic was actually a stretch. Keep in mind that some Chinese continued to see Buddhism as a foreign barbarian religion well beyond the time of Xuanzang. Authenticity in China was complex. An authentic Buddhist text did have to have a connection with India, but it was also required that the ideas be expressed in elegant Chinese. Once translated, the Indian manuscripts were seldom if ever consulted again. Initially, of course, there were no manuscripts, since texts arrived in the memories of monks. But by the Tang, hundreds or even thousands of Indic and Central Asian (mostly Iranic) manuscripts were physically present in China. Few if any of them survive.
Chinese translators often consulted earlier translations when preparing new ones. But there is little evidence of going back to the source languages. Proficiency in Sanskrit was exceedingly rare then (and now).
In view of the dynamics of Chinese literary culture, it should not be surprising that Chinese Buddhists created their own texts. The argument about authenticity really gets us nowhere. The Heart Sutra is a Chinese text. We cannot reasonably say that it is not authentic simply because it was composed in Chinese. Even the fact that it is almost universally misunderstood doesn't make that misunderstanding less authentic: on the contrary, since the misunderstanding is virtually canonical, it is the correct reading that is viewed as inauthentic (sigh). People believed what they wanted to believe. That is interesting in and of itself. Asking open questions allows us to explore these beliefs and their implications. Asking binary questions about authenticity/inauthenticity tends to collapse any line of investigation.
29 November 2021
Notes on Nonduality
Today I'm typing up notes on duality and nonduality.
In Buddhist circles we tend to talk a lot about mind/body dualism. But this is a fairly new subject, introduced by Descartes. I think in the ancient world we have to think more in terms of a matter-spirit duality. And in this view the body is animated matter, literally matter that has had life breathed into it. We call this kind of philosophy vitalism.
Words for the vital force that makes a living thing living across cultures tend to mean "breath" (including prāṇa, qi, spirit, animate, psyche, etc). The vital force across the ancient world, then, is breath, not mind.
This is yet another case of having to be careful not to project our modern worldview backwards in time. Mind/body was not a thing for early Buddhists, at least not a metaphysical thing. On the other hand they made an epistemic distinction between suffering that is mainly physical (kāyika) and mainly mental (cetasika). We too make this kind of distinction. A stubbed toe and a broken heart both involve real suffering, but they clearly have different sources. But this is an epistemic distinction, since it is entirely reliant on different sources of knowledge.
A while back I suggested that we never find the cognitive metaphor "mind is a container" in Buddhist texts. That is to say, Buddhists don't seem to have considered that thoughts happen "in the mind" or that the mind is a kind of "theatre of experience". Rather thoughts are the mind. Not too long ago I was writing about the fact that there is no word corresponding to the category of "emotion". Early Buddhists had many words for emotions, but they did not class them separately from thoughts, feelings, valence, or memories. I also noted that for early Buddhists memories were not entities. There is no noun that corresponds to "a memory" despite the fact that Sanskrit has multiple verbs that can mean remembering. We tend to use a Freudian concept of "a memory". These Freudian entities have a will of their own. We can try to repress a memory, but then it subconsciously affects our behaviour.
Our familiar way of carving up the world does not easily map onto early Buddhist thought. Or Prajñāpāramitā thought for that matter.
In Chapter two of Sarah Mattice's book Exploring the Heart Sutra she looks at Chinese translation techniques. She gives a useful overview of the history of Chinese Buddhist translations, touching on some of the famous figures of the past, but also some modern thinkers. Unusually, Mattice is trying to help us understand how a Chinese person might understand the text. A simple example of this is the translation of kōng 空. We are used to translating this from a sectarian Madhyamaka point of view: we say "it means 'emptiness'". Mattice shows that in translating the Heart Sutra from Chinese, it makes more sense to read it as "emptying".
Now, my orientation to this material is still not that of a Chinese-speaker. I still find myself in the old paradigm of thinking about the Heart Sutra as a Sanskrit text. There is a rationale to support this. Because the Heart Sutra is largely (though of course not entirely) passages copied from a 5th century Chinese translation of an earlier Sanskrit text, with some editing by a 7th century Chinese Buddhist monk (probably Xuanzang). So when I translate the Heart Sutra I have in mind the Indic origins of the ideas. However, I have always wondered how a Chinese-speaker would relate to the text without any of this background in Sanskrit or Indic thought. And I think this is what Mattice shows us in Chp 2. She translates the text while mentally inhabiting the mind of a Chinese speaker in the ancient world.
That said, my main audience is living Buddhists. I'm trying to make sense of the text for living, largely Anglophone Buddhists. The ideas in the Heart Sutra are repackaged fragments of Indian Buddhism that I think are best made sense of today in the light of the Sanskrit (or even Gāndhārī) Prajñāpāramitā literature. I suppose I must state the obvious and say that there are many possible ways to approach this text. And they lead to different approaches to conveying the ideas in English. Back in 1980, Paul Griffiths (who coined the term Buddhist Hybrid English) suggested that translation can be an inferior way of doing this. It might be better to compose a detailed study of the text.
For historians Mattice's translation highlights many important issues and problems related to the art of translation. But I would likely point practising Buddhists in another direction (in the direction I'm trying to go), without in any way wanting to diminish Mattice's achievement.
Exploring the Heart Sutra.
Mattice, Sarah A.
Lanham: Lexington Books, 2021.
23 November 2021
I'm writing up notes on the skandhas, which is a difficult task. I wrote four long essays on the skandhas, comparing two accounts: one found in Sue Hamilton's book and another from the same year by Tilmann Vetter. Both authors identified every occurrence of the skandhas in the Pāli suttas (though Vetter added references from the Vinaya). As I was writing those essays I came to see some rather major problems with the approach they adopted. It was so disheartening that I stopped before covering viññāṇa.
Both Hamilton and Vetter placed too much emphasis on the Khajjanīya Sutta (SN 22.79). This is perhaps understandable since it is the only sutta with anything like an explanation.
In the Khajjanīya Sutta we see a broken pattern of punning: each skandha is related back to an activity, usually a verb from the same root. So for example vedanā can be understood to do the action of vedayati "making known".
I didn't use rūpa as my example because the pun has gone wrong here. The word rūpa doesn't have a known verbal root. But the author of the Khajjanīya Sutta proposed that it is related to √rup "harm, destroy". The 3rd person singular indicative is ruppati. In Aṣṭa we find the same pericope, but rather than ruppati, we find rūpayati.
The verb rūpayati is one of those words that narrow-minded pedants love to hate, i.e. a denominative verb. The sutta apparently meant to say "it is called appearance (rūpa) because it appears (rūpayati)". And this pun was mixed up in Pāli destroying the connection.
Not only does the Khajjanīya Sutta get rūpa wrong, the idea that vedanā does the action of vedayati is suspect, because Buddhists don't use this word in anything like its etymological sense. Vedanā means something like "the positive and negative hedonic qualities of sensory experience" (sukha, dukkha, adukkhamasukha). This is not what vedayati denotes or connotes. This meaning has been imposed on the word by Buddhists; it doesn't emerge from the etymology.
This also creates a translation problem. We see translators arguing over whether to use "feelings" or "sensations". But neither of these English words conveys "the positive and negative hedonic qualities of sensory experience".
Interestingly, neuroscientists do use this concept of "the positive and negative hedonic qualities of sensory experience", which they refer to as valence. This has yet to find its way into popular usage.
The longer I went on working through these two secondary works and reading the primary texts they cited, the less convincing I found both accounts.
Sense can be made of the skandhas. Religious friends of mine have no problem doing skandha meditations. The approach they take is very similar to both the satipaṭṭhāna method and the mahābhūta or four/six element meditations. One settles in, then examines one's experience for a dhammabhūta, or dhātu from the list and works through the list.
The basic Buddhist approaches to meditation are found in the 37 Bodhipakkhiyādhammā. That is to say, the four foundations of mindfulness (satipaṭṭhāna), the four right efforts (sammappadhānā), four powers (iddhipādā), five faculties (indriya), five strengths (bala), seven factors of awakening (bojjhanga).
Mahāyāna texts extend this list. Notably Pañcaviṃśatisāhasrikā uses an extended list to make a point that is vital to understanding the Heart Sutra. In Chapter 16 (of Conze's translation, i.e. Kimura I-2, 75 ff), we get more answers to the question asked at the beginning of the previous chapter, i.e. "Bhagavan, what is the great-vehicle of the great enlightened ones" (katamad bhagavan bodhisattvasya mahāsattvasya mahāyānam? Kimura PvsP1-2: 58).
The answer, here, is that the Mahāyāna is precisely the extended list of Bodhipakkhiyādhammā, but with a twist. Each step is spelled out entirely normatively, but then qualified at the end by "and that by the yoga of nonapprehension" (taccānupalambhayogena). In other words, there are no new practices involved; one does the same practices as everyone else, but in the spirit of nonapprehension.
Now, Matt Orsborn has shown that this term anupalambhayogena in the Large Sutra was translated by Kumārajīva as yǐwúsuǒdégù 以無所得故. And we find this Chinese phrase in the Xīn jīng, after the negated lists in the core section. And this gives us two important pieces of information.
1. anupalambhayogena qualifies the preceding list of negations. That is to say that the core section of the Heart Sutra says "In the absence of sensory experience... there is no form... through the yoga of nonapprehension." And this is clearly not a metaphysical statement but an epistemic or phenomenological one.
2. The Sanskrit Heart Sutra has an unexpected aprāptitvāt at this point. And this can only be an incorrect translation of the Chinese term (slam dunk proof that the Heart Sutra is Chinese). This error could easily be made by a naive translator since the immediately preceding word is wú dé 無得, i.e. aprāpti "nonattainment". The translator saw the same term in the next (five character) word and assumed it meant the same thing. They didn't notice that Kumārajīva was using suǒdé 所得 to translate words from upa√labh (i.e. upalabhate "to apprehend", upalambha "apprehension").
There is clearly both a great deal of unexplored continuity here as well as some underplayed points of difference. But we can at least understand that Guānyīn is doing a skandha meditation in the Heart Sutra, i.e. an analytic meditation which resolves sensory experience into constituents to reveal the nature of experience as impermanent, unsatisfactory, and insubstantial (the opposite of the Vedic trio of "being, consciousness, and bliss" or saccidānanda).
Moreover, Guānyīn is doing it Mahāyāna-style, i.e. using the yoga of nonapprehension. Many of us have now noticed that we have instruction for something that looks exactly like what we'd expect of this type of meditation in the Cūlasuññata Sutta (MN 121). This is not an analytic approach, but uses concentration techniques--primarily inattention to the senses (amanasikāra)--to cause sensory experience to become attenuated and then vanish, leaving the meditator in the state of absence of sensory experience (suññatāvihāra). If this is not precisely anupalambhayoga, then it must be something very like it.
What Pañc seems to be saying is that anupalambhayogena is the key idea here. This seems not to be reflected in Aṣṭasāhasrikā where the term is used much less frequently, even though the sentiment it engenders is everywhere in the use of epistemic verbs like finding (vindate), perceiving (samanupaśyati), and apprehending (upalabhate) applied (negatively) to dharmas. It is axiomatic that in the state of absence of sensory experience, there are no dharmas arising or ceasing. And in Prajñāpāramitā, at least, this absence is not reified (I argue that it is reified by Nāgārjuna).
So even the analytical meditations of earlier Buddhism become, in the Prajñāpāramitā milieu, ways of bringing sensory experience to a halt and finding oneself in the state of absence [of sensory experience]. One cannot think at all in that state, let alone thinking in analytic terms, but the process of getting to absence does involve paying attention to what is currently present (asuññā) or absent (suññā).
So not only do we have to try to understand the skandhas generally (which is difficult because of the lack of clear sources), we also have to try to understand them in the highly changeable Indian Prajñāpāramitā milieu spanning several centuries, and then to understand what they meant in Tang China. I can cover the first two, but a proper Sinologist needs to look into the last one (maybe they already have?)
There is so much basic research left to do that it is embarrassing. But it is not being done. One reason is that Conze poisoned the well and his acolytes accept the view that "water is poison". Another is that no one is allowed to say anything new about Buddhism in academia these days. But anyone could have done what we have done to date to repair this mischief. We're still taking the low-hanging fruit left by the School of Highly Irrational Interpretations of Texts.
17 September 2021
Hostility To Change In Buddhist Studies (And Elsewhere).
There is a story in Adam Becker's book What Is Real?, part of which he admits might be apocryphal, but which nevertheless accurately conveys the social dynamic in physics in the 1950s. It is true that in 1952, Max Dresden gave a lecture on the work of David Bohm to an audience of physics luminaries at Princeton's Institute for Advanced Study. Dresden himself would have been happy to ignore Bohm, but his students pestered him to read Bohm's paper outlining an alternative approach to quantum mechanics. Bohm's idea is that the quantum world literally consists of particles and waves combined, with the particle carrying the physical properties and the wave guiding the motion of the particle (the idea is also known as a pilot wave theory). The interesting thing about this, as Becker relates, is that "Bohm's theory was mathematically equivalent to 'normal' quantum physics" (90).
What Bohm showed was that the Schrödinger equation was consistent with at least two different and mutually exclusive descriptions of physical reality. But there can be only one reality. Other descriptions of physical realities consistent with the Schrödinger equation soon followed, but Bohm's was the first alternative to emerge. The Copenhagen supremacy was dead at that point. But it has not been replaced in university textbooks because, despite many alternative proposals, none of them is known to be the right one. In the absence of a good model, students are taught the bad one that is most familiar.
Bohm had previously done highly regarded work at Princeton. In 1952, Bohm was out of the mainstream and living in exile in Brazil because of problems with the US State Dept arising from his left-wing politics (it was the McCarthy era). Dresden finished his presentation (including the maths) and the floor was opened to questions. He was expecting some pushback from the audience but was unprepared for the wave of vitriol that washed over him. As Becker recounts it:
"One person called Bohm a 'public nuisance'. Another called him a traitor, still another said he was a Trotskyite. As for Bohm's ideas, they were dismissed as mere 'juvenile deviationism', and several people implied that Dresden himself was at fault as a physicist to have take Bohm seriously. Finally, Robert Oppenheimer, the director of the Institute spoke up.... "if we cannot disprove Bohm, then we must agree to ignore him." (90, My emphasis)
"if we cannot disprove Bohm, then we must agree to ignore him."—Oppenheimer (allegedly)
In his 1980 book, Wholeness and the Implicate Order, Bohm suggested that "the scientific way of thinking is stereotypically stubborn" (3). Another physicist, Max Planck, lent credence to this supposition when, frustrated with the lack of progress in quantum theory, he quipped "science proceeds one funeral at a time". This turns out to be a paraphrase of something more subtle that he wrote in his 1949 "Scientific Autobiography":
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it. . . . An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth." (1950: 33, 97)
Of course, to be fair to physics, Planck's ideas were widely accepted by the time he wrote this and many of them bear his name, e.g. Planck's constant. Still, Planck and Bohm were not alone in thinking this way. A decade later, in The Structure of Scientific Revolutions (1962), Thomas Kuhn wrote:
"Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change… " (90).
I take the message here to be that only people not invested in the status quo are flexible enough to change it. And we can note that it is intuitively the case not only in science, but in every aspect of life. The use of "men" to mean "people" is a paradigm that has changed in my lifetime because a generation of women forced us to rethink gender. And rightly so. Science is no longer dominated by men by virtue of their gender roles in society. Women make excellent scientists and scholars.
Speaking of women in science, Professor Katalin Karikó has recently been reported in a UK newspaper as saying
“If so many people who are in a certain field would come together in a room and forget their names, their egos, their titles, and just think, they would come up with so many solutions for so many things, but all these titles and whatever get in the way,” (emphasis added)
Karikó, now a senior vice-president for RNA protein replacement therapies at BioNTech in Germany, "endured decades of scepticism over her work and was demoted and finally kicked out of her lab while developing the technology that made the Pfizer and Moderna vaccines possible" (my emphasis).
Interestingly, Karikó says that the adversarial competitiveness disappeared when she moved from academia to industry, where all that counts is an efficacious product. Still, if academic science proceeds one funeral at a time, industrial science makes progress only on what is profitable for shareholders.
I cite these examples to show that intellectual discourse can be, and frequently is, reluctant to change, and that even at the heart of academic physics, politics plays a role. There is a general resistance to new ideas, whoever proposes them and however they do it, even in the hardest of "hard sciences". However, and this is especially true in Buddhist Studies, this is not a healthy scepticism so much as dogmatism and/or egotism. When our title, job, role in society, and our very identity are bound up with a particular story, we don't want to know that the story is inaccurate. This is hardly rocket science. Economists call this the sunk cost fallacy: we stay to the end of a bad movie because the tickets were expensive and we want to "get our money's worth". It is sometimes known as throwing good money after bad.
But it is not just resistance to the innovative. There is another, darker aspect to the Buddhist Studies culture. Quite a number of Buddhist Studies academics are mean. I have some public examples to discuss, but I also have many comments sent to me in private and in confidence that confirm this. Many people tell me I'm better off out of it.
Meanness is endemic in Buddhist Studies. And it mainly seems to involve men being egotistical and treating Buddhist Studies as a zero-sum game. Charles Prebish observed that when he was an early career academic:
“I was convinced that Buddhist Studies, as it was developing in North America, was misguided. In the first place, most of the role models for this blooming discipline: Edward Conze, Leon Hurvitz, Alex Wayman, and a few others, were amongst the meanest individuals in academe [sic]... they seemed to take real delight in humiliating students rather than encouraging them.” (Prebish 2019, cited in Attwood 2020).
Despite a few difficult encounters over the years, I took this to be relatively contained in the past. But in 2020 two women in Buddhist Studies posted a video chat to YouTube, titled It's Not Rigor, It's Hazing. In the discussion they related how different male colleagues had deliberately humiliated them at separate public events. I found the link via Twitter, and it is interesting to see that several other women had similar experiences. For example, Stephanie Balkwill tweeted: "What got me was that every[one] else saw it that way at the time and did nothing, continuing to work with the person. I have subsequently learned that this behavior is habitual by him and evidently everybody knows it."
Note: this is not online trolling. This is real-life, in-person, in-public, in-your-face trolling. As I say, I have many examples of this that I can't use without breaking confidences. Watching this video made me rethink some other encounters I'd had.
It's notable that neither woman in the video named names. Nor does anyone name names in public. Even though it's an open secret and "everybody knows it", nobody talks about it in the open. I presume this is because the bullies are still in their academic posts, still on hiring and promotion committees, still the editors of journals. If you want a career in academia, you can't join the #metoo movement. Power is the ability to silence your victims. I'm not saying Dan Lusthaus is Harvey Weinstein, but he does bully with impunity.
Anecdotally, I hear that a lot of early career scholars are abandoning traditional Buddhist Studies centred on philology and are being attracted to other disciplines. Women especially seem to be branching out into Women's Studies, Gender Studies, and Queer Studies, applying the ideas and practices of these other disciplines to the study of Buddhism. They often study contemporary Buddhism, thereby avoiding any confrontation with the traditional angry male philologist. Being based in another field entirely seemingly provides a more conducive and supportive environment for doing research.
I want to make it clear that within Buddhist Studies my experience has been mixed. I am grateful to a number of generous peers and mentors who have enabled me to publish around 20 articles on various topics in various scholarly journals. There are many good people in this field; people who are happy to hear from a serious outsider asking for advice or for a copy of an article. I try to thank them in notes, but I doubt I've conveyed just how much help and assistance and encouragement I've received over the years.
Nonetheless, in early Sept 2021, Dan Lusthaus was busy trying to publicly humiliate me because we disagreed over an interpretation of some facts regarding mantra and dhāraṇī. Here is his last comment on this issue:
"Yes, we've all come to understand that your supporting evidence is your own theories, not the actual texts and what they say. And when the texts indicate something other than what fits your theory, you misread them."
NOTE: 5 Nov, Silk has deleted the discussion in which this comment was made. I'm not sure it can ever be recovered. The comment above was copied and pasted in early Sept 2021.
By the way, if this is true, what does it say about the many Buddhist Studies academics who have read my articles and recommended them for publication and published them? When you pick up shit to fling at someone else, you end up with shit on your hands, Dan.
Although Lusthaus may well sincerely believe his mean-spirited remark, it is clearly false. My friends in academia not only assure me of this, but they also say that this bad behaviour is typical of Lusthaus (sound familiar?). I am playing the game of scholarship to the best of my ability and I have published ten articles on the Heart Sutra in scholarly journals offering expert peer review. Each article has persuaded an editor and at least one reviewer (supposedly an expert in the field) that the article should be read by other academics and considered on its merits. I have no leverage over these people; they have no obligation to publish my work if it is substandard, and they are not shy about saying so, especially in anonymous reviews. And of course, many anonymous reviews are extremely mean.
It's hard to say what Lusthaus gets from being mean to me. He has tried to bully me several times in the past. I've encountered him a few times over 25 years, mostly in the annals of the listserv Buddha-L, which he now runs. I've seen him do this to numerous other people. The fact that Lusthaus is a bully is widely known in the field. Because of this, one friend in academia urged me, privately, to "not take him seriously". In my experience, ignoring bullies does not stop the bullying. And having someone go out of their way to try to publicly humiliate you is tiresome and counterproductive, even if everyone knows he's a bully.
I can sort of understand some academics circling the wagons to exclude me—a self-taught amateur—but the same people have been doing this to Jan Nattier—a consummate professional scholar and educator—for thirty years. Nattier's 1992 proposal that the Heart Sutra was composed in Chinese is a new paradigm and casts doubt on much that has been said about this and other Prajñāpāramitā texts. Moreover, the close reading of the text that follows in Huifeng (2014) and in my many articles shows that Nattier was exactly right and that we really do need a new paradigm for understanding the Heart Sutra and Prajñāpāramitā.
Lusthaus published some comments in 2003 that he asserted undermined Nattier's thesis, but I showed that Lusthaus was merely deducing his axioms. This is the process by which a series of logical deductions eventually reproduces your starting assumptions as valid conclusions. If we assume that the Heart Sutra was composed in Sanskrit, i.e. if this proposition is treated as axiomatic, and then apply deductive reasoning to the early Chinese commentaries, after a few deductive steps we can conclude that the Heart Sutra was composed in Sanskrit, and it looks like the conclusion is inferred solely from reading the commentaries. In fact, the conclusion doesn't come from the commentaries; it comes from the axiom itself. All deductive reasoning is subject to this limitation. I refuted Lusthaus's assertions in print in my Pacific World article, "The History of the Heart Sutra as a Palimpsest", showing that his reading of the text and his logic were flawed. So maybe he's still mad about this. I've known other male Buddhist Studies academics to hold a grudge in this same way.
I certainly have many limitations, as a scholar and as a person. I'm keenly aware of this. But I try to work carefully within my limits, and one or two friendly academics read every article before I submit it. Every statement I've made is the result of a careful analysis, checked and rechecked by me and several other knowledgeable people. It's backed by textual evidence and by previous scholarship (where possible). Not only is everything I have said in my articles testable, but it's clear what kind of evidence would refute it. No one has presented that kind of evidence yet. As soon as they do, I will certainly change my tune. Unfortunately, arguing can be trumped by shunning... "if we cannot disprove Jayarava, then we must agree to ignore him."
As a scholar with no formal "training" (see the video mentioned above for comments on this term) there is nothing special or clever about what I do. I see myself as feasting on the ample low-hanging fruit that others have ignored. Mostly, I'm just stating the obvious in ridiculous amounts of detail. One of my best articles (Epithets 2017) was a more organised and complete version of one of Jan Nattier's footnotes, which explores some ideas proposed by Yamabe Nobuyoshi (1992 fn 54a). I checked with Nattier and Yamabe before publishing this refinement of their idea. And I'm happy to be doing this scut work. Honestly, I'm honoured to be tidying up after Jan Nattier; she is an inspiration to me. I never set out to change the world. I only set out to read the Heart Sutra. It's not my fault if the existing scholarship has missed the blindingly obvious. I'm just the messenger. I was as surprised as everyone else that no one had seen what I see. Now I can't unsee it and I have been attempting to communicate it. Ten articles later, there is still low-hanging fruit that no one can see, because they refuse to acknowledge that the fruit even exists. Ironically, the deliberate withholding of attention is central to understanding Prajñāpāramitā (my interpretation of Huifeng 2014).
This meanness and use of public humiliation is not new to me. Indeed this has been a feature of my life. People use coercion and manipulation in attempts to control or negate other people all the time. It's a kind of sickness for a social primate, but in my experience (across cultures) this is the norm in life. Buddhism does not escape it (as we have learned to our great cost in the West) and Buddhist studies is mired in it. Bullying and shunning are commonplace.
Studying the Heart Sutra
I never even wanted to study the Heart Sutra. I'm still not that interested in it. But I had the opportunity to audit Sanskrit classes at Cambridge University with Vincenzo Vergiani and Eivind Kahrs (who was appointed to K. R. Norman's post when he retired). This was before Cambridge University finally killed off Indology and ancient Indian languages. I read Sanskrit in 2012 because they no longer offered Pāli and everyone told me (rightly) that knowing Sanskrit would improve my Pāli. As well as many textbook passages, I read stories from the Hitopadeśa, most of the Sāṃkhyakārikā, verses from the Mahābhārata, and passages from the Vākyapadīya. I just wanted to read a Sanskrit Buddhist text, but I fully intended to keep my focus on Pāli.
What drew me into studying the Heart Sutra was the mistake I found in the first sentence of Conze's Sanskrit text: a transitive verb treated as intransitive, a noun in the wrong case, and a misplaced colon. The simple addition or omission of an anusvāra (e.g. धा vs धां) is the most common scribal error in these manuscripts. At least two of the extended-text manuscripts have the noun in the correct case (making it the object of the transitive verb). A difficult nonsense sentence is transformed into a relatively straightforward three-clause sentence. Lacking confidence back then, it took me 10,000 words to describe this problem and propose a solution (Attwood 2015). I covered all the bases, with help from Jonathan Silk and Jan Nattier on the Tibetan texts.
This initial insight was not dependent on Chinese origins or Nattier's work in general. It was all about Sanskrit grammar. No one else had seen this error in a text first published by Conze in 1946, revised in 1948, 1967, and translated numerous times. It's 2021 now, and long overdue for academia to wake up and think about this and my other grammatical points (Attwood 2018a, 2020a). Whether they agree with me over Chinese origins or not, these are basic questions of Sanskrit grammar.
I naively thought that if I published this small discovery (which I did in 2015) academics and Buddhists alike would be like, "Oh yeah, now that you point it out...". I thought perhaps some might go as far as citing my discovery. However, in the intervening six years, not one single academic has discussed my article, let alone adopted my suggested correction. The whole article was recently summarily dismissed in a footnote by the senior Japanese scholar Saitō Akira (2021), in favour of the defective reading that makes no sense in Sanskrit.
Buddhist Studies academics have long preferred the defective version of the Heart Sutra and loudly praised Conze for his "meticulous scholarship" in producing a defective edition, a lousy translation, and a harebrained mystical interpretation. This preference for familiar confusion over unfamiliar clarity is inconsistent with objectivity, the primary defining characteristic of scholarship. Objectivity, as Carl R. Trueman has said, is not neutral. Objectivity shows that all answers are not equal and some are wrong. Reality is a particular way, at least on scales relevant to the human sensorium, and not any other way. Objectivity is as much a part of philology, history, and philosophy as it is of science.
How can I make sense of this refusal to even consider the possibility of change?
Belief Is An Emotion About An Idea.
It is well known that people often resist changing their beliefs when directly challenged, especially when these beliefs are central to their identity. In some cases, exposure to counterfactual evidence may even increase a person’s confidence that his or her cherished beliefs are true. Reed Berkowitz, discussing the similarities between QAnon and live action role-playing games, cites an article by Kaplan et al (2016).*
"Strongly held beliefs are literally a part of us. As such, attacks on core beliefs are treated very much as attacks on us, even as strongly as a physical attack." Berkowitz (2020)
* For a popular account of Kaplan et al's research see Resnick (2017) "A new brain study sheds light on why it can be so hard to change someone's political beliefs".
Kaplan et al (2016) note that, presented with "counterevidence" (i.e. counterfactual evidence), "people experience negative emotions borne of conflict between the perceived importance of their existing beliefs and the uncertainty created by the new information" (cited in Berkowitz 2020). New information can create cognitive dissonance.
This suggests that by presenting an alternative reading of the Heart Sutra, Nattier generated negative emotions amongst those committed to a traditional reading, conservative religieux and scholars alike. This religieux/scholar distinction is thin or absent in Buddhist Studies, and in traditionally Buddhist countries Buddhist Studies is completely dominated by religieux. Apparently no one sees the conflict of interest in this.
And it's not just that a Chinese Heart Sutra asks these men to change their minds. It goes a bit deeper than this. Because in confirming that the Heart Sutra is a Chinese digest text and the Sanskrit text a poor translation passed off as Indian, we are asking them to publicly admit they were wrong all this time. And this is a major challenge to their egos. Some people feel threatened by counterfactuals.
With respect to the Heart Sutra, change is especially hard, heterodoxy is viewed especially negatively, and new information is treated with heightened suspicion amongst the religieux in academia, simply because they are religieux in academia. The two conservatisms multiply. New information, even something as simple as a minor grammar correction, creates strong negative emotions in religieux (including academic religieux), because it conflicts with long-held, cherished beliefs about the Heart Sutra and because it conflicts with the very identity of the religieux. Two strong emotional reactions combine into a perfect storm of denial and aggression. And this is expressed as intellectual incredulity and emotional hostility.
Some years ago, a chance meeting led me to look into the work of Hasok Chang, Professor of the History and Philosophy of Science at Cambridge University. I was very struck by his inaugural lecture, for example, and his book Is Water H2O?, which covers many of the same themes in more detail. One of Chang's main themes is that pluralism at certain stages of knowledge-seeking is an advantage. According to Chang's liberal view of science, having competing explanations strengthens science. His striking example is that the much-maligned idea of phlogiston actually had more going for it than Lavoisier's idea based on transfer of oxygen to and from metals. Thanks to Lavoisier's relentless self-promotion we have to say that fluorine "oxidises" hydrogen when they react to form hydrogen fluoride, even though the reaction does not involve oxygen at all. A better generalisation is that electrons flow from hydrogen to fluorine. And phlogiston, being a hypothetical fluid, would have provided a much better model for this process. But Lavoisier was more popular and persuasive than Priestley. Phlogiston was the Betamax of chemistry.
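To spell out the chemistry behind that claim, here is the reaction as a worked equation (a minimal sketch in standard redox notation; nothing here is specific to Chang's argument, it is just the textbook bookkeeping):

```latex
% Hydrogen is "oxidised" by fluorine although no oxygen is involved:
% the overall reaction and its two half-reactions.
\begin{align*}
  \mathrm{H_2} + \mathrm{F_2} &\longrightarrow 2\,\mathrm{HF} \\
  \mathrm{H_2} &\longrightarrow 2\,\mathrm{H}^{+} + 2e^{-} && \text{(hydrogen loses electrons: ``oxidation'')} \\
  \mathrm{F_2} + 2e^{-} &\longrightarrow 2\,\mathrm{F}^{-} && \text{(fluorine gains electrons: ``reduction'')}
\end{align*}
```

The generalisation "electrons flow from hydrogen to fluorine" describes exactly what the half-reactions show; "oxidation" survives only as a historical label inherited from Lavoisier.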
In my view Nattier (1992) is the single most important article ever published on the Heart Sutra. I still pore over it all the time. It's a tour de force of modern, secular scholarship; a paradigm-slaying piece of writing. I find it exhilarating. And yet it has largely been ignored or, in Japan, subjected to disingenuous theological refutations and apologetics of the type: "The Heart Sutra cannot be Chinese because we believe it is Indian." Nattier opened the door to a completely new reading of the Heart Sutra as concerned with epistemology rather than metaphysics. Not my suggestion, by the way, but Huifeng's (aka Matthew Orsborn):
“It is our view that this shifts emphasis from an ontological negation of classical lists, i.e. ‘there is no X’, to an epistemological stance. That is, when the bodhisattva is ‘in emptiness’, i.e. the contemplative meditation on the emptiness of phenomena, he is ‘engaged in the non-apprehension’ of these phenomena” (Huifeng 2014: 103).
We expect religieux to be sensitive to heterodoxy and to respond negatively to it, even to react violently. The sunk cost fallacy, following huge investment of time and resources in promoting orthodoxy, virtually ensures this. Issues of belonging, identity, and status within a community are keenly felt by religieux and academics alike, and for similar reasons. In Buddhist Studies a substantial proportion of the community are both academics and religieux. Even those academics who are not overtly religious tend to be in love with Buddhism (and thus cannot see it objectively). If a scholar's first name is "Bhikkhu", then they are overwhelmingly likely to be a Theravāda apologist, though one of them got quite mad at me for saying so to his face a few years ago. Most academics are too canny to advertise their religious affiliations via the use of a religious name in an academic context. It would be interesting to see some objective measure of how many Buddhist Studies academics think of themselves as "Buddhist". A good research project for someone studying contemporary Buddhist Studies.
Meanness is, to some extent, just something we meet in everyday life and have to deal with, including in our workplace, though work-culture norms usually put a lid on it: it's pretty unusual to see public humiliation these days, as it's considered harassment. People are mean for all kinds of reasons, and these may not be obvious from the outside. Often it's a cry for help. We can offer people who behave meanly compassion on a good day, but being subject to their abuse does make it hard to think clearly or respond creatively in the moment.
Still, while we can delve into the psychology of meeting counterfactual evidence and the negative emotional responses it generates, to explain the phenomenon, the bottom line is that trying to humiliate colleagues is not acceptable behaviour. It has likely aborted many promising careers in academia. My other idol, Sue Hamilton, for example, left academia and never looked back. Anecdote suggests many Buddhist Studies academics are decamping for greener pastures that offer a more collegial working environment and a coherent body of theory to work with.
Unchecked meanness makes for an unproductive environment. I'm sure it has contributed to driving people away from studying Prajñāpāramitā: a sub-field that everyone agrees is of central importance to understanding Buddhism, but in which almost no one works.
The academic field of Buddhist Studies needs to address this issue of senior academics publicly humiliating students and junior colleagues. But the problem that Buddhist Studies has no core set of values or theory remains. It's a field, but without a discipline. An Order without a rule. Senior academics have power but there are not enough checks and balances. And this is why abusive behaviour got established and continues to be a problem. And why the people who want to change it are fighting an uphill battle.
Quite honestly I'm tired of talking about the Heart Sutra. I'm just repeating myself now. I have a few loose ends to tie up and then I'm going to do something else. And chances are that my research will go on being suppressed by academia despite meeting all the criteria for serious consideration. Perhaps it is just too radical. Or perhaps I have to hope I outlive Lusthaus and co? Trouble is I'm fifty-five (old for a heretic) and not in great health, so that strategy lacks appeal.
I have either made a good argument in my ten peer-reviewed articles on the Heart Sutra or I have not. I don't expect a Nobel Prize or an honorary doctorate (though I'd accept the latter). Rather, if I have, then I deserve to be taken seriously; and if I have not, then I have earned the right to see a proper refutation in print (not just a short footnote) and to have a right of reply.
However, before this basic level of respect is afforded to me, I'd like to see Jan Nattier get her dues. Nattier deserves the lion's share of the credit. She is my ādiguru and my work is almost entirely derived from hers (one or two minor points about Sanskrit grammar notwithstanding). I also think that Huifeng/Matthew Orsborn's contribution has been massively underappreciated. Give them the credit they are due, and what is due to me as a systematiser of their work will fall into place. I'm relatively unimportant in this story.
If you have not already, then please read Nattier (1992) and Huifeng (2014). Read them properly, slowly; read all of the notes, think about the method, follow the evidence. If you have a better explanation for the discrepancies between the passages copied from the Pañcaviṃśatisāhasrikā and the versions found in the Hṛdaya then, by all means, publish it. Prove us wrong, if you can.
Becker, Adam. (2018). What Is Real? John Murray.
Berkowitz, Reed. (2020). "A Game Designer’s Analysis Of QAnon: Playing with reality". Medium.com.
Chang, Hasok. (2010). "The Hidden History of Phlogiston: How Philosophical Failure Can Generate Historiographical Refinement." HYLE – International Journal for Philosophy of Chemistry, 16 (2), 47-79.
——. (2012). Is Water H2O? Evidence, Realism, and Pluralism. Springer.
Kaplan, J., Gimbel, S. & Harris, S. (2016). "Neural correlates of maintaining one’s political beliefs in the face of counterevidence." Nature: Scientific Reports 6, 39589. https://doi.org/10.1038/srep39589
Kuhn, Thomas S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
Planck, Max. (1949). Scientific Autobiography and Other Papers. Williams & Norgate.
Resnick, Brian. (2017). "A new brain study sheds light on why it can be so hard to change someone's political beliefs: Why we react to inconvenient truths as if they were personal insults." Vox. Updated Jan 23, 2017, 8:37am EST. https://www.vox.com/science-and-health/2016/12/28/14088992/brain-study-change-minds
NOTE: 25 October 2021. I found this:
Paré, D., Quirk, G.J. (2017). "When scientific paradigms lead to tunnel vision: lessons from the study of fear." npj: Science of Learning 2, 6 https://doi.org/10.1038/s41539-017-0007-4
Abstract: ...Here we argue that while much data is consistent with the fear model of amygdala function, it has never been directly tested, in part due to overreliance on the fear conditioning task. In support of the fear model, amygdala neurons appear to signal threats and/or stimuli predictive of threats. However, recent studies in a natural threat setting show that amygdala activity does not correlate with threats, but simply with the movement of the rat, independent of valence. This was true for both natural threats as well as conditioned stimuli; indeed there was no evidence of threat signaling in amygdala neurons. Similar findings are emerging for prefrontal neurons that modulate the amygdala. These recent developments lead us to propose a new conceptualization of amygdala function whereby the amygdala inhibits behavioral engagement...
15 August 2021
The Dogma: On Not Taking Nāgārjuna Seriously (Seriously!)
I wrote this for my Facebook group on Heart Sutra research. As I haven't posted anything here for a while I thought I'd repost it.
In response to a post about the word tathatā, two people responded by rehearsing aspects of Madhyamaka dogma. I'm just going to call this the Dogma and people who promote the Dogma as Dogmatics. When people cite the Dogma they present it as a transcendent truth that brooks no contradiction, though it is also frequently (and unironically) presented as a series of contradictions.
I want to address anyone who takes the Dogma seriously by explaining why I don't take it or them seriously.
The Dogma is a body of religious rhetoric that emerged at a time when sectarian Buddhism was maturing. Mahāyāna Buddhism was still nascent, existing as an uncoordinated series of reforms centred on the problem of the absent Buddha. Gautama sought his own liberation and left this world, leaving us to find our own way out. Later Buddhists found this narrative intolerable (even selfish), so they changed it in various ways, some of which are (in essence) what we now call Mahāyāna.
The foundation of the Dogma is principally associated with Nāgārjuna, who is believed to have been a real person living near the beginning of the first millennium of the Common Era. But the Dogma has been augmented numerous times by commentators (right up to the present). Most scholars now question Nāgārjuna's sectarian orientation. For example, it is apparent that in composing the Dogma, Nāgārjuna was not re-interpreting Prajñāpāramitā: when he cites scripture, he cites Sanskrit translations of early Buddhist texts. Some have questioned whether he would have identified as Mahāyāna at all. But in proposing the Dogma, Nāgārjuna was making a clear break with early Buddhist rhetoric.
The Dogma makes a number of erroneous assumptions that lead it to dubious conclusions: 1. that dependent arising is a theory of everything; 2. that experience is reality; 3. that existence must be permanent; 4. that the experience of emptiness is reality. Let's take each of these in turn.
1. Dependent Arising
As hinted at above, dependent arising was never intended to be a theory of everything. Early Buddhists set out to explain how experience arises. Simple observation shows us that the dynamics of objects are not the same as the dynamics of experience. This is implicit in early Buddhist texts.
Somewhere along the line Buddhists began to apply dependent arising to everything. When the only tool you have is a hammer, everything starts to look like a nail.
I can easily imagine things that are physically impossible, that defy the laws of physics. I can imagine flying, for example. Not possible in reality, possible in imagination. Because the contents of our minds don't behave like real things. They are like illusions.
If we make dependent arising a theory of everything then contradictions ensue. We end up saying that things don't really exist because they are dependent on other things. But think about it. Why would anyone say something like this? What is it about dependency that makes an object unreal? Is a rock any less solid because it was formed by a process? No.
2. Experience versus Reality
Early Buddhists appear to have understood that sensory experience was different from reality. The Dogma, by contrast, refuses to make this distinction. In the Dogma, experience is a lesser form of reality. But experience is not reality. Experience is experience. Experience is what happens when a sentient subject encounters an object. Experience is subjective, that is to say that its mode of existence is subjective.
A good way of talking about it is Thomas Metzinger's use of the term "virtual". We don't have a self, we have a virtual self model, generated by the brain. As a virtual rather than a real thing, our sense of self has qualities and characteristics associated with subjectivity. For example, how we see ourselves is affected by mood. Our virtual model can be disrupted by drugs which do not change "reality", they change the way the brain generates our virtual self model.
3. Existence
The Dogma has a perverse definition of "real". I understand that some people may want to undermine the Abhidharma approach by criticising the nature of categories of experience. The fact that such categories rely on the concept of svabhāva qua distinctive characteristic smacks of essentialism.
But there, svabhāva is an epistemic term: it is how experience appears to us, not the thing in itself. Moreover, when we categorise dharmas, we are mainly concerned with thoughts, feelings, and emotions.
It is useful, for example, to distinguish the ethical character of a thought. Was it motivated by greed? Or by generosity? And by "useful" here I mean soteriological. This distinction is important for anyone wanting to live an ethical life, and if you believe in liberation from rebirth in saṃsāra then it is an essential concept to understand.
In arguing against perceived (but in fact nonexistent) essentialism in Abhidharma, the Dogma changes the meaning of svabhāva so that it definitely is essentialist. Now it means the sole condition for the existence of an object. And it is trivial to show that this entity cannot be real, since nothing can be the sole condition for its own existence. Everything is more complex than that.
So how does this trivialism take on such gravitas in the Dogma? It's partly because people who adopt the Dogma attribute their own definition of svabhāva to other people (who almost certainly never held that view and definitely do not now). Having created the strawman, they triumphantly burn it down. But so what? No one believes it anyway.
4. Emptiness is reality.
The final point is that, in the Dogma, it is assumed that the absence of sensory experience is reality. And this is the heart of the matter. It is this assumption that leads to all of the others.
We all know, either first or second hand, that the cessation of sense experience without the loss of awareness is a profound and potentially life-changing experience. And it's fairly obvious that the techniques to bring experience to a halt were in widespread use in the Ganges valley by the time of the second urbanisation, from about the 6th Century BCE onwards. The new cities attracted Brahmin immigration from the West, too, which is another story.
We should not be too harsh on this point. The assertion--that lack of experience is reality--is one that is common in Indian religious thought. The cessation of sense experience was taken to be reality by Brahmins, Jaina, and Sāṃkhyakas as well as Bauddhikas.
But here's the thing. The cessation of experience is simply the cessation of experience; it is not reality. And this can be seen in how differently the various religions interpret it: as Brahman, ātman, puruṣa, jīva, pudgala, advaita, śūnyatā, etc.
Perhaps the problem is the preternatural clarity of mind that accompanies cessation; the purity of a mind without content is hyperreal. The very vividness of the state makes it seem more real than reality. Certainly it can be more attractive than reality, because in that state all one's desires and discontents cease, along with other kinds of thought.
Still, the conclusion that reality is the absence of sense experience is fundamental to the Dogma. And it allows Dogmatics a peculiar form of rhetoric which I sum up this way: everything the Dogmatic says is true, while everything the non-Dogmatic says is an illusion, a conceptual proliferation.
I've dealt with this rhetoric for more than 25 years now. At first it worked as expected on me. When I tried to ask certain types of questions that seemed natural to me, a Dogmatic would simply shut down the conversation by pointing out that my questions were based on conventional reality or illusions. The truth is the Dogma and anything else is simply and self-evidently false.
The choice with Dogmatics is either to accept the Dogma or be dismissed as a deluded pṛthagjana.
However, I reject the framing of the discussion in Dogmatic terms. I see no reason to believe that the cessation of sense experience gives one insights into the nature of reality. One cannot know more by closing off all sources of knowledge about the thing one wishes to know, one can only know less.
I grant that one may discover something about the way that our minds create our virtual models of body, self, and world. And how we use these virtual models to navigate our way through a complex and ever-changing world, especially the social world. The social world deserves a much greater prominence in our thinking about Buddhism. But this is all the province of epistemology. And the result, in Buddhism, is always some kind of knowledge: an epistemic inquiry resulting in epistemic insights.
I'm not arguing within the Dogma framework because it is both false and perverse. Nāgārjuna is not someone I revere at all. I count him the worst philosopher in history, precisely because he does not examine his own assumptions, even when the result is nonsense or contradiction.
The biggest problem with the Dogma is that Dogmatics hold it to be a self-evident truth that not only resists external criticisms, but resists all criticism. It is Holy Writ that can never be challenged. Like Richard Feynman, I'd rather have questions that cannot be answered than answers that cannot be questioned. But the thing is that we can answer many seemingly intractable questions if we only give up Dogma. Dogma is the greatest impediment. It is an extreme view, a wrong view.
The resolution of this issue is simply to make a distinction between metaphysics and epistemology and allow that Buddhism is principally concerned with the latter. Our conclusions about what we know, especially what we know about the cessation of sense experience, can be interpreted as metaphysics, but they need not be.
Religious dogmas now pose the greatest threat to the long-term survival of Buddhism. On the other hand, secular interest in "awareness without content" is now the subject of scientific scrutiny and is already beginning to escape from the religious chains in which it has been bound. Like the preliminary practices we put under the heading of "mindfulness" the practices that culminate in what we call "emptiness" are on the verge of escaping into the secular world. And that is something to celebrate.
Note: I've added a new tab at the top of the page where I'm going to keep a running bibliography of works that I think are relevant to the topic of secular emptiness.
04 June 2021
Naturalism and Unnaturalism
Something I read recently prompted me to think about whether I would call myself an atheist. I have probably referred to myself as an atheist in the past. Buddhism is widely considered to be an atheistic religion in that, while many Buddhists treat buddhas as gods, few of us believe those gods to be creators or controllers. Despite growing scepticism about the traditional claims of Buddhism, I still think of myself as "religious" in the sense of living committed to a set of rules. I sometimes say that I am religious but not spiritual. See my series of essays on "spiritual".
Theism is essentially the idea that everything depends on God, however God is conceived. Thinking about this it seemed strange for me to even have a position on such things because they are completely irrelevant to my worldview. I see the value of being aware of some of the history of the influence of the various churches in shaping the modern world, modernism being largely an organised rebellion against church claims to authenticity and authority. But theism is not relevant to me in any other way. The scientific study of religion shows that it is not what it claims to be. Which is not to say that religion is bad, just that we have to get below the surface of the claims made by priests and to look at the sociology and neuroscience of religion in order to get at the truth about religion.
It seems to me now that it would be silly for me to define my worldview in terms of things I don't believe in. Of all the possible things that humans believe, the vast majority are not things I believe. I don't believe in unicorns, fairies, Santa, utopias, and so on. But I don't claim to be an aunicornist. The label "atheist" does not directly inform a reader as to what my values and beliefs are. If I am going to state my beliefs, why would I do it with respect to a minor religious cult that has never had any appeal for me? So what am I, if not an atheist? I would say that I am a naturalist.
Naturalism comes in many varieties and, indeed, encourages pluralism. Naturalism has its starting point in the natural world, the world that we experience and interact with as humans. The world that we perceive through our senses, but also the world of which we are wholly a part. The physical world, but also the world of human culture. There may be other worlds or other non-experiential aspects of this world, but we cannot know them. And we need say nothing more, except that our explanations of what we can experience have no gaps that suggest the need for other worlds.
Naturalism as a metaphysics is based on and informed by a particular approach to knowledge. We observe the world, notice regularities and try to infer what such regularities connote. We can use the conclusions of these inferences to make predictions about what we will experience next and then test this. And this works surprisingly well for understanding the physical world. Different approaches must be taken to understand human culture because it is a much higher order of complexity than physical objects. For example, reductionism seldom makes for an interesting approach to human affairs.
Within the realm of science, predictions have a degree of accuracy and precision that we compare with what we observe. If the agreement reaches a threshold then we say the prediction was accurate and precise. That threshold may be formal, such as a statistical measure like 5σ or 99% confidence; for lay people it may just be informal and heuristic. Scientists ideally accompany every measurement with an indication of measurement error, and measures of accuracy and precision. So we might say the Higgs boson has a mass of 125.10 ± 0.14 GeV, established to a 5σ confidence level. The error is due to our measurements, not to nature.
We take results more seriously if someone has measured them by some other means and reached a similar or better level of accuracy and precision. Sometimes the confirmation or "comparing notes" part is left out of the naturalist epistemology, but it is essential. We generally call this approach to knowledge empiricism, although strictly speaking empiricism is the idea that all knowledge comes from sensory experience. Modern empiricism is a collective and collaborative enterprise that influences all other approaches to knowledge.
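To make the "comparing notes" step concrete, here is a minimal sketch in Python of how two independent measurements of the same quantity can be checked for consistency. It assumes independent Gaussian errors; the first Higgs figure is the one quoted above, while the second is a hypothetical stand-in for an independent experiment, not a sourced value.

```python
import math

def sigma_discrepancy(value1, error1, value2, error2):
    """Discrepancy between two independent measurements, in units of
    their combined standard error (assumes Gaussian errors)."""
    return abs(value1 - value2) / math.sqrt(error1**2 + error2**2)

m1, e1 = 125.10, 0.14   # GeV, value and standard error (from the text)
m2, e2 = 125.35, 0.15   # GeV, assumed second measurement, for illustration

z = sigma_discrepancy(m1, e1, m2, e2)
print(f"discrepancy = {z:.2f} sigma")
# ~1.2 sigma: far below a 5-sigma threshold, so the two
# measurements "agree" in the sense used above.
```

The same arithmetic underlies the informal version of the heuristic: the smaller the discrepancy relative to the combined error, the more confident we are that the two observers are measuring the same thing.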
We can also study the "humanities", i.e. the forms and products of human cultures, from how human societies function, to behavioural norms, to how we make and appreciate art. All this is still part of the natural world. Where weather is a complex system composed of simple parts, a human society is a complex arrangement of complex parts. Historians, according to Hans-Georg Gadamer, are less interested in universal laws; they focus on a single event and try to understand it in context. Still, as Carl R. Trueman has subsequently observed, "objectivity is not neutral or unbiased" (2010: 27ff). Objectivity by its very nature excludes the majority of explanations.
In recent years the division between science and humanities has thinned, but there is an incorrigible tendency to see them as incompatible. My layered approach to reality is set out in a three-part essay. I argued that each layer adds structure and organisation to the previous one, creating new complex entities with emergent properties. This is not the same as simply changing scale, since life is an offshoot from the middle of the scale of mass, length, and energy of the universe. Whether or not life exists elsewhere, it exists here, and any theory of reality that does not include life or human culture in all its complexities is useless.

So, for example, the idea that a unification of general relativity and quantum field theory would become a theory of everything is simply nonsense. Physics is useless when it comes to describing human behaviour. I accept that physics certainly provides limits to what is possible. Interestingly, in phrasing it this way I have stumbled on a principle of constructor theory as enunciated by David Deutsch and Chiara Marletto. Physics limits what life can be like, but it does not determine what life actually is or what creatures evolve into being. It does not because emergent properties at higher levels of organisation, piled on top of each other, are not predicted by the lower-level theories. Nothing about either relativity or quantum theory suggests that sapient beings will emerge to discover these explanations. And it's not that they are vague on this subject; rather, there is nothing about those theories that predicts sentience or sapience as a possibility. They can be applied retroactively, but not with any great explanatory power. Determinism does not necessarily survive emergence.
For a naturalist, then, the natural world is what can be inferred to exist and what can be known. Naturalism argues that if something exists and can be known, it is part of the natural world. We can also say that if something doesn't exist it cannot be known. If something cannot be known, then we can say nothing definite about it. Our best route to knowledge is allowing observation to guide theory, principally by comparing notes on close observations of the natural world, keeping in mind that all acts of explanation are also acts of interpretation (simply because of our human apparatus).
Accurate and precise knowledge of the natural world has transformed human lives beyond measure, for better or worse. There are, of course, ethical and moral questions raised by naturalism. For example, it has given us tools that can be used for good or ill. A bulldozer can be used to quickly prepare a building site for the building of homes or it can be used to level areas of essential rainforest (sometimes these are the same action). But the work that one person can do with a bulldozer is thousands of times more than one person prior to the invention of high carbon steel and internal combustion engines. Technology magnifies human abilities, without similarly transforming human aesthetics or ethics. How we interpret events has become even more important because of this magnification. And how we interpret events has also become subject to empiricist scrutiny (much more so than when Gadamer was writing).
For naturalists, then, the focus is the natural world. Anything other than the natural world is unnatural. To believe in some unnatural agent, entity, or realm is a form of unnaturalism, and one who accepts unnaturalism is an unnaturalist. Thus, for me the question is not, "What is an atheist?", rather it is "What is an unnaturalist?"
Unnaturalism is a neologism of mine. It is the flipside of naturalism. As I use it, unnaturalism is a broad term that takes in disbelief in the natural world per se, such as Indian beliefs that the world is māyā "an illusion", as well as a range of beliefs about unnatural agents, entities, or forces that exist beyond the scope of the natural world (and thus beyond the scope of the naturalist epistemology).
Unnaturalists often assert that unnatural agents are able to interact with the natural world, but this is a contradiction in terms. If agents interact with the natural world then, ipso facto, they must be part of the natural world and thus bound by the patterns of behaviour that we see in the natural world. Or else we have to rewrite our explanations to include them and there seems no necessity to do this.
Unnaturalism seems to begin with animism, which appears to be ubiquitous amongst hunter-gatherers (Peoples, Duda, and Marlowe 2016). This is the view that the natural world is full of sentient agents, seen and unseen, who interact with the natural world but exist outside of it. A modern form of animism is panpsychism, in which all matter is, in some inexplicable way, "conscious". Belief in life after death is a common unnaturalist belief and, together with ancestor worship, is found in about 80% of hunter-gatherer societies. At the other end of the spectrum are large organised religions based on sets of unnatural beliefs, notably an omnipotent, omniscient god. I will look more closely at theism and deism in the next section.
Some terminological issues crop up. For example, some unnaturalists refer to their beliefs as "supernatural" suggesting something above the natural world, "metaphysical" suggesting something beyond the natural world, or "paranormal" suggesting a reality alongside the natural world. In my view, all these separate terms can be dismissed as hair-splitting since they all involve rejecting the naturalistic account of the world. They are therefore better categorised simply as unnaturalism, unnatural views asserting unnatural agents, entities, forces, etc.
By definition, anything unnatural is beyond the scope of naturalism: we cannot interact with or know an unnatural world and it cannot interact with us. This does not exclude the possibility of unnatural phenomena, but it does exclude the possibility of experiencing them or gaining knowledge of them. There are epistemic limits, and the knowable is ipso facto the natural, and vice versa. Normally we need not bother with the unnatural because we cannot know anything about it. However, unnaturalists claim to have unnatural knowledge. If we press the unnaturalist for evidence, they must demur, because evidence implies the natural world.
Part of the problem here is the teleological fallacy, i.e. the fallacy that everything happens for a reason. For a naive person this seems a reasonable heuristic, compatible with commonsense views on causation. Causation has been tricky since Hume pointed out that it's really just a regular sequence of events: where one thing regularly precedes another, we say it "caused" it. In this view, causation is metaphysical; that is to say, we don't see a separate event that we can label "causation". Discussions of causation tend to refer to billiard balls colliding and other such mechanistic ideas: we see one ball strike another and both travel off in new directions. Can we say that one ball causes the other to move? Formulations of laws of motion do not include anything that might indicate causation. We can describe two balls colliding, for example, using conservation of momentum, but nothing in that description marks one ball out as the cause of the other's motion (see the sketch below). My view is that our understanding of causation comes from our early experience of gaining control of our bodies. We will things to happen, like willing our hand to grasp an object, and after a while that starts to happen. The model for causation is the connection of the desire for something to happen followed by that very thing happening. As John Searle is fond of saying, "I will my arm to go up, and the damn thing goes up." (I think he says this in every lecture of his on YouTube.)
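To make the billiard-ball point concrete, here is the collision written out as a worked equation (a minimal sketch in standard notation: m for masses, v for velocities, primes marking values after the collision):

```latex
% Conservation of momentum for two colliding balls.
% Nothing on either side is labelled "cause" or "effect";
% the equation merely constrains the outcome.
m_1 v_1 + m_2 v_2 = m_1 v_1' + m_2 v_2'
```

The equation is a bookkeeping identity: it tells us which outcomes are possible, but it contains no term we could point to and call "causation".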
Even if we get a grasp on causation, a cause is not a reason, though the two terms are easy to confuse precisely because our internal model for causation is that of desire making our limbs move. The classical view of reasons is that they are explanations of causes. The teleological fallacy can be restated as: the reason something happens is because something causes it. But this assumes that all sequences of events are regular and that nothing novel ever happens. And of course new and one-off things happen all the time. Even so, in the classical view, reasons are still ideas about why things happen. The classical view sees reasons as prior to actions.
But then, as naturalists, we have to look at the evidence, and it turns out that reason isn't like this (see Hugo Mercier and Dan Sperber, The Enigma of Reason). Experiments show that reasons are generated post hoc to rationalise decisions made by unconscious inferential processes. The classical view of reason is thus another result of unnaturalism, i.e. the result of thinking about reasoning in the abstract instead of observing reasoning in practice. And here we can see the importance of interpretation in explanations of history. If reasons are post hoc, then our accounts of history in terms of the psychological motivations of individuals are likely to be inaccurate. Reasons don't drive behaviour at all. In fact, behaviour drives reasons.
So when someone who is open to unnatural beliefs comes to understand that the universe has a beginning, they may infer that the universe has a cause, and they frame this in terms of some agent causing the universe to begin for some reason. Even if we eliminate the overtly unnatural elements, we are still left with the possibility that something caused the universe to come into existence. With the present state of our knowledge, that cause is unnatural and we cannot know anything about it. This epistemic limit is open to exploitation by unnaturalists; they may claim to know, through unnatural means, about that cause. An unnaturalist may ignore the epistemic limit, adopt the teleological fallacy, and infer that the universe came into being for a reason, and further that if there is a reason, there must be an agent that is not part of the natural world (since the natural world for them is that which was created). Now we have an unnatural agent with superpowers creating the universe for reasons, though these reasons are typically held to be unfathomable because in practice we cannot discern any unnatural agents. And this brings us to the most visible form of unnaturalism: belief in a creator god.
Theism as a form of Unnaturalism
Unnaturalism has a much longer history than naturalism. For most of human history most people have been unnaturalists. Unnatural ideas like animism, disembodied minds, or post-mortem existence have seemed plausible to most human beings who ever lived. Since the emergence of naturalism these kinds of ideas have been marked out by terms such as metaphysical, supernatural, or paranormal.
Theism begins to emerge with the Zoroastrian religion, the first of the monotheisms. The dates of Zoroaster are disputed, but are generally in the range 1200-800 BCE. I have argued, for example, that aspects of Zoroastrianism may have influenced the development of Buddhism. So theism is relatively new in human evolution, but relatively old in human history.
We can usefully contrast theism with deism. In deism, God made the world and set it in motion, but is no longer involved in it. Some Jews take this approach, concluding that God was more involved in the world during the infancy of humanity, but that we are now a mature species and how we live is up to us. An increasingly common form of deism is the idea that God was responsible for the initial conditions of the universe and the big bang that set things running, but that after that God just let things play out as they will. This kind of God is also called otiose or "uninvolved". Note that this idea of setting the initial conditions and allowing them to evolve according to dynamical laws of motion is the main paradigm of physics. However, this paradigm fails to account for many phenomena, notably living organisms, prompting David Deutsch and Chiara Marletto to pursue constructor theory, which promises to recast physics in terms of counterfactuals: what is possible at any given time and what is not.
The retreat to deism allows some Christians to reconcile with science using a God-of-the-gaps argument, since science cannot tell us the reason for the initial conditions of the universe. When we consider the fine-tuning problem, i.e. why the universe is conducive to life at all, the epistemic gap in which deists locate God looks increasingly small. The physical parameters of our universe are fine-tuned to allow life to exist: tiny variations in physical constants like the charge of the electron would make life impossible. Even if God was the first cause, he had little or no choice about how to make the universe. In other words, God had no free will when it came to the creation of a universe in which sapient creatures would be capable of thinking about God. And what is the point of worshipping a God who was last active 13.8 billion years ago and who had no free will? There is no deist soteriology; the universe is what it is and there is nothing God can do to change it.
Note that I am also not a deist. The neologism adeist has been used informally, e.g. Daniel Finke's blog post: I am an Agnostic Adeist and a Gnostic Atheist. Some Buddhists are deists in the sense that they talk about an ultimate reality or a ground of being.
Theists, by contrast, believe in the ongoing active involvement of God. Unlike deism, theism makes testable predictions. Theists' claims about God entail that processes we expect to be random will sometimes not be random, because God intervenes on behalf of his followers. Christians, we might argue, should be luckier than others, suffering less from disease, accidents, and other misfortunes. No such bias in the universe has ever been detected. As a simple matter of fact, Christians don't get a smoother ride; they suffer every bit as much as everyone else. Indeed, lately some Christians have been arguing that they are treated unfairly, which suggests that not only is God not tipping the balance in their favour, He is tipping it against them. So theism looks to be false for this reason and many others.
This is a corollary of the problem of evil, i.e. the problem of why a loving creator would make a world plagued by so much misery and suffering. Charles Darwin was dissuaded from Christian theism by the existence of parasitic wasps, writing to Asa Gray in 1860: "I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidae with the express intention of their feeding within the living bodies of Caterpillars."
I have explored different accounts of why unnaturalism was so successful and persistent; see, for example, my two-part essay: Why Are Karma and Rebirth (Still) Plausible (for Many People)? Part I and Part II.
Although human beings are fully encompassed by the set of natural things, our minds are not limited to thinking in terms of the natural world. We can imagine unicorns, for example. Not only this, but we can proliferate stories about unicorns, complete with imagery. Search for "unicorn" online and you will find millions of references, images, theories, stories, and so on. But none of this makes unicorns a real thing. We will never meet a unicorn in the natural world. For many people this distinction can easily be blurred, especially when it comes to God. But ideas about God of the kind theists embrace are not universal by any means:
"Ancestor spirits or high gods who are active in human affairs were absent in early humans, suggesting a deep history for the egalitarian nature of hunter-gatherer societies." (Peoples, Duda, and Marlowe 2016)
To sum up, then, atheism is a reaction to, and thus still defined in terms of, the Christian worldview. The term itself accepts the normative value of Christian ideas. Atheism is not-theism. To me, God is irrelevant, a trivial problem easily dismissed before getting on with the serious business of understanding the world. And it makes no sense to define my worldview with respect to something irrelevant and trivial. "Atheist" is a Christian label for non-Christians.
Please don't call me an atheist. I'm a naturalist. And in my worldview theists are unnaturalists.
Of course we can discuss unnaturalism, but it is pointless to do so on the terms of unnaturalists, because their views are unnatural. The study of unnatural beliefs is part of anthropology and sociology and is best undertaken from a naturalist viewpoint. It is important to understand unnaturalism objectively through careful study, because unnaturalism is still widespread and influential, and many people act under its influence.
And, of course, theism is not the only variety of unnaturalism; I've mentioned also deism and animism, for example. By lumping various forms of unnaturalism together we are better able to generalise the ideas involved in unnaturalism.
Deutsch, D. (2013). "Constructor theory". Synthese. 190 (18): 4331–4359. https://arxiv.org/ftp/arxiv/papers/1210/1210.7439.pdf
Gadamer, Hans-Georg. (1975) Truth and Method. Bloomsbury Academic.
Marletto, C. (2015) "Life without design: Constructor theory is a new vision of physics, but it helps to answer a very old question: why is life possible at all?" Aeon. https://aeon.co/essays/how-constructor-theory-solves-the-riddle-of-life
Mercier, Hugo & Sperber, Dan. (2017) The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.
Peoples, H.C., Duda, P. & Marlowe, F.W. 2016. "Hunter-Gatherers and the Origins of Religion." Human Nature 27, 261–282. https://doi.org/10.1007/s12110-016-9260-0
Trueman, Carl R. 2010. Histories and Fallacies: Problems Faced in the Writing of History. Wheaton, Ill.: Crossway.
|
7b389b1d32b37931 | Journal of Modern Physics, 2014, 5, 2049-2062. Scientific Research Publishing. ISSN 2153-1196. DOI: 10.4236/jmp.2014.518201
Physics in Discrete Spaces: On Fundamental Interactions
Pierre Peretto, Laboratory of Physics and Modelling of Condensed Matter, Grenoble, France. E-mail: Pierre.peretto@lpmmc.cnrs.fr
Received 8 October 2014; revised 2 November 2014; accepted 25 November 2014; published 4 December 2014.
© Copyright 2014 by the author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY).
This contribution is the third in a series of articles devoted to the physics of discrete spaces. After the building of space-time [1] and the foundation of quantum theory [2], we study here how the three fundamental interactions could emerge from the model of discrete space-time put forward in the previous contributions. The gauge interactions are recovered. We also propose an original interpretation of gravitational interactions.
Keywords: Gauge Interactions; Gravitation; MOND Theory; Cosmological Constant; Principle of Equivalence
1. Introduction
Particle-to-particle interactions are carried by three sorts of fields: the electroweak field, the strong field, and the gravitational field. The most striking feature is the enormous difference between their intensities: the electric force is stronger than the gravitational force by more than forty orders of magnitude. This is the hierarchy problem.
We have put forward in [1] a model of discrete space-time where the universe is comprised of the simplest physical systems that one can imagine, namely the cosmic bits. The cosmic bits interact through two-body (binary) random links and four-body (quaternary) random links. We associate the gauge interactions (electroweak and strong) with the binary links and the gravitational interaction with the quaternary links. This will be the guideline of this contribution.
The article is, accordingly, divided into two main sections. In the first section we show how the gauge symmetry interactions naturally emerge from the model of discrete spaces that we propose. In the second section we introduce a new interpretation of gravitation based on a mechanism, somewhat similar to the Van der Waals interaction, where quantum wave fluctuations play the role of electric dipole fluctuations.
2. Gauge Interactions
2.1. The Yang-Mills Theory of Interactions: A Reminder
According to Yang-Mills theory [3], the physical space is not limited to the usual 4-dimensional continuum: to every point of the continuum one must also associate an internal space. The physical space then becomes a fibre bundle, a space that can be locally defined as the Cartesian product of two manifolds, the fibres and a base. In the Yang-Mills fibre bundle the usual 4-dimensional continuum plays the role of the fibres, and the internal spaces play the role of the base of the fibre bundle.
Whereas a position increment dx is enough to define the derivative operator in the 4-dimensional continuum, that is, along a fibre, in a fibre bundle one must also take into account an increment between the internal spaces of neighbouring world points. The usual derivative is then to be replaced by a covariant derivative; the extra term is called a parallel displacement. Some symmetry transformations may be defined in internal spaces and, if physics is left invariant under these transformations, they are called gauge transformations. Yang-Mills theory assumes that the gauge transformations form Lie groups, whose dimension is that of the matrix representation of the group. The theory associates a particular interaction with a given Lie group: U(1) for the electromagnetic interaction, SU(2) for weak interactions and, finally, SU(3) for strong interactions. Each Lie group then introduces a specific parallel displacement called a gauge field.
In this section we show that the Yang-Mills theory can be transposed into the framework of the model of discrete spaces that we propose. The ill-defined concepts used by the Yang-Mills theory, such as the notion of internal spaces, are now given a physical meaning. Moreover, unanswered questions posed by this theory, for example the choice of the relevant Lie groups, are also given a response.
2.2. Gauge Symmetry
In the model of discrete spaces that we put forward, the universe is made of basic cells called world points, and the physical points of the Yang-Mills theory are similar to world points. The internal space of a world point is the space spanned by its possible states. This space is d-dimensional with d = 4 (for more information see [1]). A gauge symmetry group is a group whose elements leave physics unchanged. Nothing determines a particular orientation of the d axes of coordinates in the internal space of world points. Therefore any permutation of axes or any unitary transformation of the internal space must leave physics unchanged. Physics, therefore, must be invariant with respect to permutations of axes, that is, to the operations of the symmetric permutation group S4 (since d = 4). It must also be indifferent to unitary transformations U(4) of the internal space. S4 and U(4) are gauge symmetry groups of the model of discrete spaces. The relevant symmetry groups, however, must comply with both gauge groups. The group S4 has five irreducible representations, namely two 1-dimensional representations, one 2-dimensional representation, and two 3-dimensional representations (the table of characters of S4 is given in [1]). The particles that transform according to some of these representations are fermions and those that transform according to the others are bosons [2]. The operations of U(4), however, must be compatible with the representations of S4, and therefore there are three, and only three, relevant unitary gauge symmetry groups:
a) U(1), which is associated with the 1-dimensional irreducible representations;
b) SU(2), which is associated with the 2-dimensional irreducible representation;
c) and, finally, SU(3), which is associated with the 3-dimensional irreducible representations.
There are no other gauge groups.
2.3. Covariant Derivatives in Discrete Spaces
All properties of discrete spaces are derived from a very general Lagrangian, a function of the state of the physical system, that is, of the set of states of its N world points i. The state of a world point is a 4-dimensional vector. The Lagrangian (1) is built from a square random matrix that describes the world-point-to-world-point interactions together with a set of N matrices that describe the interactions between the four components of each world-point state (for a more detailed presentation of the model see [1] and [2]).
The notion of partial derivatives is introduced in discrete spaces through the matrix D obtained by factorizing the operator of Lagrangian (1).
According to the LDU (Lower triangular, Diagonal, Upper triangular) Banachiewicz theorem, this operator may indeed be factorized with D a random upper triangular matrix that can be interpreted as a discrete differential operator. D defines an increment, along a given axis, of a component of the state of world point i, and thus a partial derivative, an operation that corresponds to the usual partial derivative in the 4-dimensional continuum when lengths are measured in units of l*, the size of a world point.
Physics must be left unchanged under the operations of a unitary gauge group. As discussed above, the physical system is a fibre bundle where the fibres are given by D and the base by the internal spaces of the world points. Under a specific gauge transformation, the derivative in Equation (2) acquires an extra contribution determined by an element of the Lie group. This element is a unitary matrix whose inverse is its hermitian conjugate; the operators ta are the generators of the group, and a set of parameters determines the particular transformation of the internal space of world point i. We consider infinitesimal transformations and use a first-order expansion of the group element. The derivation operation is modified accordingly and one finds a covariant derivative indeed; the associated parameters materialize a gauge field.
2.4. The Lagrangians of Gauge Fields
In the Lagrangian (1), one replaces the operators D by their covariant expressions (3). The result (4) is then made of three terms:
a) first, the Lagrangian term of the free quantum particle field;
b) then the Lagrangian term of the gauge fields; if we only consider local contributions we must look at the on-site terms, and the expression simplifies accordingly;
c) and, finally, a local interaction term between the particle and the gauge fields.
2.5. Electro-Weak Interactions
In this section we recover the results of the GSW (Glashow, Salam, and Weinberg) theory of electroweak interactions [4]. The interest of this section lies less in the derivation of the theory, which is classical, than in the answers to questions posed by this theory.
According to the GSW theory, the gauge group for leptons would not be U(1) or SU(2) or SU(3) but a combined group, namely U(1) × SU(2). We consider the contribution of this gauge symmetry group to the local Lagrangian, where the vacuum state is the state that minimizes the local Lagrangian under the normalization constraint (see [1]).
The matrices involved are the generators of the Lie group: the generators of SU(2) are the three Pauli matrices and the generator of U(1) is a scalar number. For electroweak interactions one has, accordingly, a sum of the two contributions.
From a mathematical point of view this expression is meaningless because it mixes a scalar with two-dimensional matrices. The problem is resolved by the following simple transformation, which replaces the scalar 1 by the two-dimensional identity matrix.
The value of the electroweak Lagrangian in vacuum is then given by (with c = 1)
Instead of a 4-dimensional real representation, a two-dimensional complex representation may also be used, namely
In this representation the vacuum state is written
By introducing this state in Equation (7) and by defining
one has
Finally with
the results of the Glashow, Salam and Weinberg (GSW) theory are recovered. The eigenvalues of the last 2-dimensional matrix are associated with the Z field and with the electromagnetic field A. The electroweak interaction therefore compels the photon mass to be strictly zero, but this result only holds for the chosen vacuum (5). The GSW theory, however, does not answer the following questions.
a) What is the mechanism that binds the groups U(1) and SU(2)?
We shall not treat that subject here in detail, but it can be shown that the entire organization of particles of the Standard Model can be recovered by assuming that the seed of a particle does not involve a single world point but, instead, a pair of world points, one with a bosonic character and the other with a fermionic character, an idea close to super-symmetric (SUSY) approaches. According to this interpretation the leptons and the quarks would transform according to pairs of irreducible representations of the symmetric group of permutations of four objects. Combined with the unitary symmetry group U(4), the gauge group of leptons is accordingly U(1) × SU(2). The SUSY mechanism has been introduced to (partly) remedy the divergences that appear in Feynman diagrams, but the price to pay is a doubling of the number of particles (bosinos, fermions associated with bosons, and sfermions, bosons associated with fermions). Here there is no need for such a doubling because, in our approach, the ordinary particles are made of pairs of bosonic and fermionic world points and are super-symmetric in essence. For the time being no super-symmetric particle has been experimentally found. One could also say that the success of the GSW theory is a good support for our SUSY-like model of particles.
b) What is the vacuum state?
The vacuum state is chosen in the GSW theory so as to make the photon mass vanish. How can this choice be justified when the possible Higgs vacuum states are all equivalent?
In our interpretation the vacuum state of an isolated world point is necessarily asymmetric. It is given by Equation (6) which is precisely the vacuum state used in the GSW theory.
c) How to determine the Weinberg parameter?
The vacuum state is fully oriented along the time axis. The three Pauli matrices are associated with the three space dimensions. The three dimensions of space and the time dimension constitute an affine space with a dilatation factor given by c. In [1] we have seen how the (dimensionless) speed of light and the dimensionality d of space-time are determined, and the Weinberg angle is defined in terms of these quantities. The experimentally accessible parameter is sin²θW. The experimental value, sin²θW ≈ 0.23, is consistent with the prediction, and with this value as a datum the remaining parameters of the model can be fixed.
3. Gravitation
3.1. A Link between Discrete Spaces and General Relativity
A vacuum in which every world point i is in the state given by Equation (6) is not acceptable, because such a vacuum would have no space dimensions. This state ignores the uncertainty principle or, more precisely, the notion of zero-point motion. Consider the state of the fundamental mode of the harmonic oscillator that represents the dynamics of the electromagnetic field; it is a Gaussian function. The vacuum is then defined as the state of Equation (6) dressed by this zero-point motion, and it thereby recovers spatial dimensions.
The central hypothesis of general relativity is that the metric matrices are site-dependent in space-time and that these modifications are caused by masses and, more generally, by non-vanishing energy densities. In our model this hypothesis transforms the vacuum metric matrix into a site-dependent metric matrix. The developed Lagrangian of a free particle in such a space is then given by
The length increments of space along a given dimension at world point i are given by
where l* is the size of a world point. This gives
In the continuous limit the expression reads
This action is the starting point of General Relativity. The main conclusions of General Relativity, in particular the Einstein equation, may then be derived by using the usual (covariance) arguments. More precisely, General Relativity may also be seen as a gauge field theory where space-time is analogous to a fibre bundle. The fibres of the bundle are (approximate) copies of the Minkowskian metric and the base of the bundle is space-time itself. This is exactly the scheme that appears in the present approach: the fibres are (approximate) copies of the vacuum metric, and the base of the bundle, that is, the connection between the fibres, is provided by the interaction matrix. In the present approach there is no longer a contradiction between quantum theory and general relativity, because both theories become irrelevant below the metric limit l*: quantum theory because a quantum state can no longer be defined, and general relativity because the metric matrices disappear.
3.2. Weak Gravitation Fields
Let us now consider weak gravitation fields. In weak gravitation fields the local metric matrix may be written as
where the perturbation is a small correction at the four-dimensional coordinate associated with world point i. The Lagrangian of a free particle is modified accordingly
The second term on the right hand side may be seen as a three-body interaction where the field is scattered by a tensor field. If the perturbation is assumed to be so weak that the modifications of the (inertial) masses of particles are negligible, the Lagrangian is not modified either, and one must write
that can be satisfied only if
This expression is a propagation equation that describes the dynamics of a massless tensor field with 10 independent components. Let us consider the trace of this tensor (its dilatation component).
Its dynamics is then given by
This is the propagation equation of a massless gravitation wave travelling at the speed of light c. Its propagator writes
3.3. Quantum Mechanics in Curved Spaces
In this section we look for the quantum dynamics of a particle evolving in such curved spaces. The Lagrangian
is minimized under the set of N constraints. The expression to be minimized is
where a site-dependent Lagrange multiplier appears. This yields the following eigenvalue equation
Making explicit every component of world point polarizations, this equation gives
Following the argument already given in [1] the 4-dimensional wave function satisfies the following equation
This equation describes the quantum dynamics of a particle in the framework of the theory of general relativity. In flat spaces, the Klein-Gordon equation is recovered. This description of quantum states in gravitation fields is similar to the approach called "quantum mechanics in curved spaces" [5]. There is, however, an essential difference: the mass of the particle is site-dependent, which makes the calculations much more difficult. In strongly distorted space-time metrics, the particle can even lose its identity.
3.4. Indirect (Fluctuation) Interactions
Particles interact through gauge interactions by an exchange of bosonic particles (photons, vector bosons or gluons) that result from the quantization of gauge fields. These are direct interactions, but besides those interactions there are also indirect interactions where two physical systems interact through fluctuations of their internal structures.
The best-known example of indirect interactions is the Van der Waals interaction. In physical systems housing positive and negative electric charges, the centres of gravity of the positive and negative charges may not match, giving rise to fluctuating electric dipoles. The dipoles interact, and some dipole orientations lower the energy; the closer the systems, the larger the energy lowering. The result is an attractive interaction whose energy falls off as 1/r^6.
Another example is the indirect nuclear interaction between nucleons. The mechanism at work arises from the fluctuations of quark colour charges. The fluctuations are transmitted between the various quarks that compose the nucleons through gluons (which are colour-charged). The result is a strong, attractive, short-range interaction.
Since the fluctuation interactions are always attractive, and since the gravitational interaction is also attractive, we suggest that the gravitational interaction is an indirect interaction. The polarization of a world point, in fact the quantum wave, may indeed fluctuate due both to the cosmic noise b and to the finite size n of a world point.
3.5. Newton Gravitational Attraction
The polarization amplitude of a world point is given by the thermal average of an order parameter s
The cosmic noise parameter b may be interpreted as the inverse of a temperature. The partition function of this system has been computed in [1].
The polarization is therefore a random Gaussian variable with a mean square deviation given by
The fluctuations vanish when n, or b, or J goes to infinity. The vanishing of fluctuations characterizes the so-called mean field approximation: the realized state is then the state that minimizes the free energy, a technique called the saddle point approximation. A large fluctuation destabilizes the eigenstate solutions of the equation. A linear function of a random Gaussian variable is itself a random Gaussian variable with a rescaled standard deviation, and therefore the relevant standard deviation of the world-point polarizations must be modified according to the following formula
The fluctuations modify the polarization of a world point i that houses a particle P by an amount determined by the eigenvalue of the system (see [1]), up to a prefactor of order 1.
Similarly, the polarization perturbation of a world point j housing another particle Q is
The propagation of gravitation waves creates an interaction between the two particles P and Q that, owing to the weakness of vertices, may be calculated by using a low order perturbation expansion. The lowest order writes
By using Equation (8) the static gravitation interaction is given by
The Fourier transform of 1/q² in a d-dimensional space behaves as 1/r^(d-2); this yields a 1/r potential for d = 3. Finally, the Newton expression of the attractive gravitational force, proportional to 1/r², is recovered.
The gravitation constant is proportional to 1/n. Let us see how the experimental value of the gravitation constant gives an indication of the orders of magnitude of n and of l*, the size of a world point.
The Planck length is given by lP = √(ħG/c³) ≈ 1.6 × 10⁻³⁵ m, and the Planck mass by mP = √(ħc/G) ≈ 2.2 × 10⁻⁸ kg. The Planck length is the smallest length that still has a physical meaning. Since a cosmic bit is the simplest system one can imagine, one assumes that lP is the (non-physically-measurable) size of a cosmic bit.
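As a quick numerical check of the Planck scales quoted here, a minimal sketch using standard CODATA constants (the constants are mine, not values from the paper):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
G    = 6.67430e-11      # Newton constant, m^3 kg^-1 s^-2
c    = 2.99792458e8     # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)  # Planck length
m_P = math.sqrt(hbar * c / G)     # Planck mass

print(f"Planck length l_P = {l_P:.3e} m")   # ~1.616e-35 m
print(f"Planck mass   m_P = {m_P:.3e} kg")  # ~2.176e-8 kg
```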
In other respects, the electric potential between two electrons at a distance r is Ve = e²/(4πε₀r) = αħc/r, whereas the gravitational potential between these two electrons is Vg = Gme²/r. The ratio between the two potentials is Ve/Vg = αħc/(Gme²) ≈ 4 × 10⁴² (α is the fine structure constant). According to the present interpretation this ratio varies as n. The number of bits belonging to a world point is then of the order of 10⁴²,
a very large number indeed. n is determined by the ratio of second-order cosmic-bit interactions to fourth-order interactions and, more precisely, by minimizing a Landau-type free energy [1]. One indeed recovers the order of magnitude assumed above.
This large value of n would explain the large intensity gap between the gauge interactions and the gravitational interaction (the hierarchy problem).
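The forty-odd orders of magnitude can be verified directly; a minimal sketch, again with standard constants rather than anything from the paper:

```python
import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
G    = 6.67430e-11       # Newton constant, SI
m_e  = 9.1093837015e-31  # electron mass, kg

# Both potentials scale as 1/r, so the ratio is independent of distance.
ratio = e**2 / (4 * math.pi * eps0 * G * m_e**2)
print(f"V_electric / V_gravity = {ratio:.2e}")  # ~4.2e42
```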
The number of cosmic bits in a world point then fixes l*, the size scale of world points that has been used so far. The energy corresponding to this size is very far (by four orders of magnitude) from the possibilities of available machines, even those of the LHC.
Finally, the size of a coherent domain [2], that is, the limit of classical mechanics, is about two hundred times the size of a hydrogen atom.
3.6. Mond Theory
The anomalous motion of the outer stars of galaxies has led astrophysicists to introduce an invisible matter that they call dark or hidden matter [6]. No such matter has been directly found so far, and its only experimentally measurable effect is the bending of light rays, a consequence of general relativity. Milgrom has put forward another explanation for the anomaly: at very large distances the Newtonian dynamics would have to be modified [7] (MOND stands for Modified Newtonian Dynamics). Instead of the classical Newton acceleration aN = GM/r², Milgrom suggests that one must write μ(a/a₀)a = aN, with μ(x) ≈ 1 for x ≫ 1 and μ(x) ≈ x for x ≪ 1. At large distances the gravitational acceleration is then given by a = √(GMa₀)/r.
Here a₀ sets the Milgrom range. There is, in principle, a way to deduce the parameter from experimental observation. In Newtonian dynamics, the speed of stars in a galaxy disk is given by v²/r = GM/r², that is, v = √(GM/r). In Milgrom dynamics v²/r = √(GMa₀)/r, that is, v = (GMa₀)^(1/4), a constant as observed. v can be measured, and one thereby obtains the product Ma₀. The problem is that one does not know the value of M, the mass of the galaxy bulb.
Since the motion of stars in the disks of galaxies is determined by the MOND dynamics, the Milgrom range must be of the order of, or less than, the radii of galactic bulbs. The diameters of galaxy bulbs are of the order of a few thousand light-years. We choose a value in this range, but the actual value could be smaller.
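To make the transition concrete, here is a small sketch comparing Newtonian and MOND circular speeds for a point mass. The interpolation function μ(x) = x/(1 + x) is one conventional choice from the MOND literature, and the bulge mass and the acceleration scale a₀ = 1.2 × 10⁻¹⁰ m/s² are assumed illustrative values, not numbers taken from this paper:

```python
import math

G   = 6.674e-11        # Newton constant, SI
a0  = 1.2e-10          # Milgrom acceleration scale, m/s^2 (commonly quoted value)
M   = 1.0e41           # assumed point-mass 'bulge', kg (~5e10 solar masses)
kpc = 3.086e19         # metres per kiloparsec

for r_kpc in (1, 5, 10, 20, 50):
    r   = r_kpc * kpc
    a_N = G * M / r**2                     # Newtonian acceleration
    # Solve mu(a/a0) * a = a_N with mu(x) = x/(1+x), i.e. a^2 - a_N*a - a_N*a0 = 0:
    a   = 0.5 * (a_N + math.sqrt(a_N**2 + 4 * a_N * a0))
    v_N    = math.sqrt(a_N * r) / 1e3      # Newtonian circular speed, km/s
    v_mond = math.sqrt(a * r) / 1e3        # MOND circular speed, km/s
    print(f"r = {r_kpc:3d} kpc: v_Newton = {v_N:6.1f} km/s, v_MOND = {v_mond:6.1f} km/s")
# At large r, v_MOND tends to (G*M*a0)**0.25: a flat rotation curve.
```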
The model of discrete universe that we propose can provide an explanation of the MOND theory based upon the possible modification of world-point dimensionality d under the influence of polarization fluctuations. Let us recall that the internal space of a world point may be considered as a d-dimensional space where the polarization is a d-dimensional vector [1]. The Lagrangian of a world point is expressed in terms of the polarization components, and the partition function Z in terms of their thermal averages; Z has been computed in [1].
Every component fluctuates with a standard deviation
If the fluctuations of one component of the polarization exceed its mean value, the associated dimension is lost, and the internal space of the world point, instead of being (3 + 1)-dimensional, becomes a (2 + 1)-dimensional space. The proportion of such world points is given by the tail of the Gauss distribution beyond this threshold.
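The proportion in question is just a Gaussian tail integral. Since the paper's threshold and standard deviation are not recoverable here, the following sketch uses placeholder values purely to illustrate the computation:

```python
import math

def gaussian_tail(threshold, sigma):
    """P(X > threshold) for X ~ N(0, sigma^2), via the complementary error function."""
    return 0.5 * math.erfc(threshold / (sigma * math.sqrt(2)))

# Placeholder numbers: probability of exceeding 3, 4, 5 standard deviations.
for k in (3, 4, 5):
    print(f"P(X > {k} sigma) = {gaussian_tail(k, 1.0):.3e}")
```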
The form of the gravitational interaction associated with (2 + 1)-dimensional world points is modified because the Fourier transform (7) is now 2-dimensional. The potential in a 2-dimensional space becomes logarithmic in r and, finally, the Milgrom attractive gravitational force, falling off as 1/r, is recovered.
As a whole, the gravitational interaction becomes a combination of the Newton and Milgrom terms, weighted by the proportion of world points of each dimensionality: the Newton term dominates at short distances and the Milgrom term at large distances, as assumed by Milgrom.
The disappearance of a dimension may be interpreted as the shrinking of the lengths associated with that dimension. This is reminiscent of the shrinking of non-observable dimensions in string theories (where 10- or 11-dimensional spaces are reduced to the classical 4-dimensional space), with the difference that here we have a mechanism that really puts the process to work.
3.7. Cosmological Constant
A world point i eventually loses another dimension if the fluctuations simultaneously perturb two components of its polarization; the probability for such a situation to occur is the square of the single-component probability. The internal space of the world point then becomes (1 + 1)-dimensional and the interaction potential becomes linear in r. The associated gravitational force is a repulsive constant force that acts as a negative pressure, exactly as the cosmological constant does.
The formula gathering the various contributions to the gravitational forces is Equation (13), in which Λ is the cosmological constant.
Putting in numbers, one finds a value of the cosmological constant close to the experimentally observed value. The agreement is striking, but it must not be taken too strictly because it depends on a poorly known parameter, the Milgrom range. The main interest of the derivation is that it seems to give the right order of magnitude for the cosmological constant.
The distance rM where the cosmological expansion takes the lead over the Milgrom dynamics is of the order of the size of the observable universe.
3.8. On Dark Matter
Up to now the possible effect of dark matter has not been taken into account. Dark matter has been introduced to account for the rotation curves of stars gravitating at the peripheries of galaxies. The MOND theory proposes another explanation, and dark matter seems to be no longer necessary. The study of galaxy clusters shows that this is not the case. The gravitational forces are, in the MOND theory, exactly known; they are central and enable an exact calculation of the motions of galaxies in a cluster of galaxies to be carried out. The observation of the galaxy cluster 1E0657-56 (the Bullet) does not support the calculations: dark matter is still necessary. Moreover, dark matter is also necessary to account for the formation of galaxies. There is, however, no direct experimental evidence for its material existence except for its gravitational lensing effects.
The model of discrete space-time that we put forward may provide another interpretation. Let us consider the metric matrix of the model [1]. This metric matrix is sensitive to variations of the cosmic noise b and, since the speed of light is determined by the metric, a variation of the cosmic noise b leads to a variation of the speed of light c. Space then behaves as a refractive medium, and a non-uniform distribution of cosmic noise is reflected by a bending of light rays, exactly as gravitational lenses would produce. Astrophysicists generally tend to interpret the deviations as the presence of matter, although no matter is necessarily involved in the process. Given the agreement between the experimental and the computed values of the cosmological constant, a theory that does not take dark matter into account is satisfactory, and one must conclude that the universe is flat and that its dimensionality is the same everywhere. This remark does not jeopardize the existence of dark matter: we can define our universe as the set of world points with the full dimensionality, and define dark matter as regions where the dimensionality lies in between the two limits.
3.9. Principle of Equivalence
The parameter that appears in the Klein-Gordon equation is called the mass of particle P. It is determined by the eigenvalue of the following equation
In the classical limit this mass is the mass parameter that appears in the Schrödinger equation and, finally, through the Ehrenfest equations, the mass of the Newton equation.
Therefore it is the inertial mass of P. The same parameter also appears in the Newton gravitational force (9); therefore it is also the gravitational mass of P. We conclude that the two masses are identical, a proof of the principle of equivalence.
4. Discussions and Conclusions
In contribution [1] we put forward a model of discrete space-time that we consider to be a convenient framework for the description of natural phenomena. To be accepted, this statement must be supported by a proof that the model can account for the main issues of theoretical physics. Some have been studied in [1] and [2]. Here we show that the fundamental interactions may be understood in the framework of this model. It allows a natural introduction of the gauge interactions. Moreover, it suggests the idea that gravitation could be an effect of fluctuations of world-point polarizations (quantum states). The fluctuations are caused, on the one hand, by the finite size n of world points and, on the other, by the cosmic noise b. The former effect gives a solution to the hierarchy problem, because n is so large that the gravitational forces are extremely weak compared to the gauge interactions. The latter, the cosmic noise b, leads to the idea that the dimensionality d is not given once and for all: large enough cosmic noise fluctuations may result in a decrease of d. For d = 3 + 1 the attractive gravitation law is that of Newton, for d = 2 + 1 one finds the attractive gravitation law of Milgrom and, finally, for d = 1 + 1 one finds an extremely weak, repulsive interaction reminiscent of the effects of the cosmological constant. Instead of increasing d, as in string theories, we think that the physical phenomena can be better understood by decreasing d.
Obviously, the introduction of a cosmic noise must have large consequences in cosmology. Although this issue is out of the scope of this article we would like to mention briefly a few effects of b.
Below a critical value of the cosmic noise b everything disappears, space, fields and particles alike, a situation reminiscent of a pre-Big-Bang state.
In another regime of b the speed of light becomes imaginary, and so does time. The concept of imaginary time has been proposed by Hawking to cope with the difficulties set by the initial state of the universe [8]. One also sees that the metric matrix then becomes Euclidean, which yields another solution to these difficulties [9].
Finally, the speed of light diverges at a particular value of b, which could give a physical solution to the inflation problem.
I would like to heartily thank Prof. Bart Van Tiggelen. He does not believe in my ideas, but he strongly encouraged me to continue.
References
[1] Peretto, P. (2014) Journal of Modern Physics, 5, 563-575.
[2] Peretto, P. (2014) Journal of Modern Physics, 5, 1370-1386.
[3] Yang, C.N. and Mills, R. (1954) Physical Review, 96, 191-195.
[4] Pati, J.C. and Salam, A. (1973) Physical Review, D8, 1240.
[5] Jacobson, T. (2004) Introduction to Quantum Fields in Curved Space-Time and the Hawking Effect. arXiv: gr-qc/0308048v3
[6] Tayler, R.J. (1991) Hidden Matter. Ellis Horwood, Chichester.
[7] Milgrom, M. (1983) Astrophysical Journal, 270, 365.
[8] Hartle, J.B. and Hawking, S. (1983) Physical Review, D28, 2960.
[9] Wick, G.C. (1956) Physical Review, 101, 1830. |
401c5a119c49488b |
Publications of the Astronomical Society of Australia
The Fine-Tuning of the Universe for Intelligent Life
L. A. Barnes
Institute for Astronomy, ETH Zurich, Switzerland, and Sydney Institute for Astronomy, School of Physics, University of Sydney, Australia. Email: L.Barnes@physics.usyd.edu.au
Publications of the Astronomical Society of Australia 29(4) 529-564 http://dx.doi.org/10.1071/AS12015
Submitted: 6 February 2012 Accepted: 24 April 2012 Published: 7 June 2012
The fine-tuning of the universe for intelligent life has received a great deal of attention in recent years, both in the philosophical and scientific literature. The claim is that in the space of possible physical laws, parameters and initial conditions, the set that permits the evolution of intelligent life is very small. I present here a review of the scientific literature, outlining cases of fine-tuning in the classic works of Carter, Carr and Rees, and Barrow and Tipler, as well as more recent work. To sharpen the discussion, the role of the antagonist will be played by Victor Stenger’s recent book The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us. Stenger claims that all known fine-tuning cases can be explained without the need for a multiverse. Many of Stenger’s claims will be found to be highly problematic. We will touch on such issues as the logical necessity of the laws of nature; objectivity, invariance and symmetry; theoretical physics and possible universes; entropy in cosmology; cosmic inflation and initial conditions; galaxy formation; the cosmological constant; stars and their formation; the properties of elementary particles and their effect on chemistry and the macroscopic world; the origin of mass; grand unified theories; and the dimensionality of space and time. I also provide an assessment of the multiverse, noting the significant challenges that it must face. I do not attempt to defend any conclusion based on the fine-tuning of the universe for intelligent life. This paper can be viewed as a critique of Stenger’s book, or read independently.
Keywords: cosmology: theory — history and philosophy of astronomy
1 Introduction
The fine-tuning of the universe for intelligent life has received much attention in recent times. Beginning with the classic papers of Carter (1974) and Carr & Rees (1979), and the extensive discussion of Barrow & Tipler (1986), a number of authors have noticed that very small changes in the laws, parameters and initial conditions of physics would result in a universe unable to evolve and support intelligent life.
We begin by defining our terms. We will refer to the laws of nature, initial conditions and physical constants of a particular universe as its physics for short. Conversely, we define a ‘universe’ to be a connected region of spacetime over which physics is effectively constant1. The claim that the universe is fine-tuned can be formulated as:
FT: In the set of possible physics, the subset that permits the evolution of life is very small.
FT can be understood as a counterfactual claim, that is, a claim about what would have been. Such claims are not uncommon in everyday life. For example, we can formulate the claim that Roger Federer would almost certainly defeat me in a game of tennis as: ‘in the set of possible games of tennis between myself and Roger Federer, the set in which I win is extremely small’. This claim is undoubtedly true, even though none of the infinitely-many possible games has been played.
Our formulation of FT, however, is in obvious need of refinement. What determines the set of possible physics? Where exactly do we draw the line between ‘universes’? How is ‘smallness’ being measured? Are we considering only cases where the evolution of life is physically impossible or just extremely improbable? What is life? We will press on with our formulation of FT as it stands, pausing to note its inadequacies when appropriate. As it stands, FT is precise enough to distinguish itself from a number of other claims for which it is often mistaken. FT is not the claim that this universe is optimal for life, that it contains the maximum amount of life per unit volume or per baryon, that carbon-based life is the only possible type of life, or that the only kinds of universes that support life are minor variations on this universe. These claims, true or false, are simply beside the point.
The reason why FT is an interesting claim is that it makes the existence of life in this universe appear to be something remarkable, something in need of explanation. The intuition here is that, if ours were the only universe, and if the causes that established the physics of our universe were indifferent to whether it would evolve life, then the chances of hitting upon a life-permitting universe are very small. As Leslie (1989, p. 121) notes, ‘[a] chief reason for thinking that something stands in special need of explanation is that we actually glimpse some tidy way in which it might be explained’. Consider the following tidy explanations:
1. This universe is one of a large number of variegated universes, produced by physical processes that randomly scan through (a subset of) the set of possible physics. Eventually (or somewhere), a life-permitting universe will be created. Only such universes can be observed, since only such universes contain observers.
2. There exists a transcendent, personal creator of the universe. This entity desires to create a universe in which other minds will be able to form. Thus, the entity chooses from the set of possibilities a universe which is foreseen to evolve intelligent life2.
These scenarios are neither mutually exclusive nor exhaustive, but if either or both were true then we would have a tidy explanation of why our universe, against the odds, supports the evolution of life.
Our discussion of the multiverse will touch on the so-called anthropic principle, which we will formulate as follows:
AP: If observers observe anything, they will observe conditions that permit the existence of observers.
Tautological? Yes! The anthropic principle is best thought of as a selection effect. Selection effects occur whenever we observe a non-random sample of an underlying population. Such effects are well known to astronomers. An example is Malmquist bias — in any survey of the distant universe, we will only observe objects that are bright enough to be detected by our telescope. This statement is tautological, but is nevertheless non-trivial. The penalty of ignoring Malmquist bias is a plague of spurious correlations. For example, it will seem that distant galaxies are on average intrinsically brighter than nearby ones.
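A toy Monte Carlo makes the selection effect concrete. This is an illustrative sketch with arbitrary units and a made-up flux limit, not a calculation from the paper:

```python
import random

random.seed(0)
F_LIM = 1.0  # survey flux limit, arbitrary units

def mean_detected_luminosity(distance, n=100_000):
    """Mean intrinsic luminosity of the detected sources at a fixed distance.
    Luminosities are drawn uniformly; flux falls as inverse-square of distance."""
    detected = [L for _ in range(n)
                if (L := random.uniform(0.0, 100.0)) / distance**2 >= F_LIM]
    return sum(detected) / len(detected)

for d in (1, 3, 6, 9):
    print(f"distance {d}: mean detected luminosity = {mean_detected_luminosity(d):.1f}")
# Distant samples are dominated by intrinsically bright objects: a spurious
# correlation between distance and luminosity, produced purely by the flux cut.
```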
A selection bias alone cannot explain anything. Consider quasars: when first discovered, they were thought to be a strange new kind of star in our galaxy. Schmidt (1963) measured their redshift, showing that they were more than a million times further away than previously thought. It follows that they must be incredibly bright. How are quasars so luminous? The (best) answer is: because quasars are powered by gravitational energy released by matter falling into a super-massive black hole (Zel'dovich 1964; Lynden-Bell 1969). The answer is not: because otherwise we wouldn’t see them. Noting that if we observe any object in the very distant universe then it must be very bright does not explain why we observe any distant objects at all. Similarly, AP cannot explain why life and its necessary conditions exist at all.
In anticipation of future sections, Table 1 defines some relevant physical quantities.
Table 1. Fundamental and derived physical and cosmological parameters
2 Cautionary Tales
There are a few fallacies to keep in mind as we consider cases of fine-tuning.
The Cheap-Binoculars Fallacy: ‘Don’t waste money buying expensive binoculars. Simply stand closer to the object you wish to view’3. We can make any point (or outcome) in possibility space seem more likely by zooming-in on its neighbourhood. Having identified the life-permitting region of parameter space, we can make it look big by deftly choosing the limits of the plot. We could also distort parameter space using, for example, logarithmic axes.
A good example of this fallacy is quantifying the fine-tuning of a parameter relative to its value in our universe, rather than the totality of possibility space. If a dart lands 3 mm from the centre of a dartboard, it is obviously fallacious to say that because the dart could have landed twice as far away and still scored a bullseye, the throw is only fine-tuned to a factor of two and there is ‘plenty of room’ inside the bullseye. The correct comparison is between the area of the bullseye and the area in which the dart could land. Similarly, comparing the life-permitting range to the value of the parameter in our universe necessarily produces a bias toward underestimating fine-tuning, since we know that our universe is in the life-permitting range.
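For concreteness, one can compare the areas directly; the dimensions below are the usual competition-board specifications (an assumption, as the text gives no dimensions):

```python
r_bullseye = 6.35   # inner bull radius, mm (standard board, assumed)
r_board    = 170.0  # radius to the outer double ring, mm (assumed)

# Ratio of areas = probability for a uniformly random dart to hit the bullseye.
p_bullseye = (r_bullseye / r_board) ** 2
print(f"Bullseye fraction of board area: {p_bullseye:.2e}")  # ~1.4e-3
# Measuring the 3 mm miss against the 6.35 mm bullseye radius ('a factor of two
# of room') hides the roughly 700-to-1 odds against landing there at all.
```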
The Flippant Funambulist Fallacy: ‘Tightrope-walking is easy!’, the man says, ‘just look at all the places you could stand and not fall to your death!’. This is nonsense, of course: a tightrope walker must overbalance in a very specific direction if her path is to be life-permitting. The freedom to wander is tightly constrained. When identifying the life-permitting region of parameter space, the shape of the region is irrelevant. An elongated life-friendly region is just as fine-tuned as a compact region of the same area. The fact that we can change the setting on one cosmic dial, so long as we very carefully change another at the same time, does not necessarily mean that FT is false.
The Sequential Juggler Fallacy: ‘Juggling is easy!’, the man says, ‘you can throw and catch a ball. So just juggle all five, one at a time’. Juggling five balls one-at-a-time isn’t really juggling. For a universe to be life-permitting, it must satisfy a number of constraints simultaneously. For example, a universe with the right physical laws for complex organic molecules, but which recollapses before it is cool enough to permit neutral atoms will not form life. One cannot refute FT by considering life-permitting criteria one-at-a-time and noting that each can be satisfied in a wide region of parameter space. In set-theoretic terms, we are interested in the intersection of the life-permitting regions, not the union.
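The point is easily illustrated numerically with two hypothetical constraints on a pair of parameters, each wide on its own but jointly restrictive; the constraints themselves are invented for illustration:

```python
import random

random.seed(1)
N = 1_000_000
a_ok = b_ok = both_ok = 0

for _ in range(N):
    x, y = random.random(), random.random()
    A = abs(x + y - 1.0) < 0.1   # constraint A: satisfiable for almost any x alone
    B = abs(x - y) < 0.1         # constraint B: likewise wide in each coordinate
    a_ok += A
    b_ok += B
    both_ok += A and B

print(f"P(A) = {a_ok/N:.3f}, P(B) = {b_ok/N:.3f}, P(A and B) = {both_ok/N:.4f}")
# Each constraint individually passes ~19% of parameter space, but the
# intersection is far smaller (~2%): checking criteria one at a time
# overstates how easy it is to satisfy them all simultaneously.
```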
The Cane Toad Solution: In 1935, the Bureau of Sugar Experiment Stations was worried by the effect of the native cane beetle on Australian sugar cane crops. They introduced 102 cane toads, imported from Hawaii, into parts of Northern Queensland in the hope that they would eat the beetles. And thus the problem was solved forever, except for the 200 million cane toads that now call eastern Australia home, eating smaller native animals, and secreting a poison that kills any larger animal that preys on them. A cane toad solution, then, is one that doesn’t consider whether the end result is worse than the problem itself. When presented with a proposed fine-tuning explainer, we must ask whether the solution is more fine-tuned than the problem.
3 Stenger’s Case
We will sharpen the presentation of cases of fine-tuning by responding to the claims of Victor Stenger. Stenger is a particle physicist whose latest book, ‘The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us’4, makes the following bold claim:
‘The most commonly cited examples of apparent fine-tuning can be readily explained by the application of a little well-established physics and cosmology. ...Some form of life would have occurred in most universes that could be described by the same physical models as ours, with parameters whose ranges varied over ranges consistent with those models. And I will show why we can expect to be able to describe any uncreated universe with the same models and laws with at most slight, accidental variations. Plausible natural explanations can be found for those parameters that are most crucial for life. ...My case against fine-tuning will not rely on speculations beyond well-established physics nor on the existence of multiple universes.’ (Foft 22, 24)
Let’s be clear on the task that Stenger has set for himself. There are a great many scientists, of varying religious persuasions, who accept that the universe is fine-tuned for life, e.g. Barrow, Carr, Carter, Davies, Dawkins, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, Wilczek5. They differ, of course, on what conclusion we should draw from this fact. Stenger, on the other hand, claims that the universe is not fine-tuned.
4 Cases of Fine-Tuning
What is the evidence that FT is true? We would like to have meticulously examined every possible universe and determined whether any form of life evolves. Sadly, this is currently beyond our abilities. Instead, we rely on simplified models and more general arguments to step out into possible-physics-space. If the set of life-permitting universes is small amongst the universes that we have been able to explore, then we can reasonably infer that it is unlikely that the trend will be miraculously reversed just beyond the horizon of our knowledge.
4.1 The Laws of Nature
Are the laws of nature themselves fine-tuned? Foft defends the ambitious claim that the laws of nature could not have been different because they can be derived from the requirement that they be Point-of-View Invariant (hereafter, PoVI). He says:
‘...[In previous sections] we have derived all of classical physics, including classical mechanics, Newton’s law of gravity, and Maxwell’s equations of electromagnetism, from just one simple principle: the models of physics cannot depend on the point of view of the observer. We have also seen that special and general relativity follow from the same principle, although Einstein’s specific model for general relativity depends on one or two additional assumptions. I have offered a glimpse at how quantum mechanics also arises from the same principle, although again a few other assumptions, such as the probability interpretation of the state vector, must be added. ...[The laws of nature] will be the same in any universe where no special point of view is present.’ (Foft 88, 91)
4.1.1 Invariance, Covariance and Symmetry
We can formulate Stenger’s argument for this conclusion as follows:
1. LN1. If our formulation of the laws of nature is to be objective, it must be PoVI.
2. LN2. Invariance implies conserved quantities (Noether’s theorem).
3. LN3. Thus, ‘when our models do not depend on a particular point or direction in space or a particular moment in time, then those models must necessarily [emphasis original] contain the quantities linear momentum, angular momentum, and energy, all of which are conserved. Physicists have no choice in the matter, or else their models will be subjective, that is, will give uselessly different results for every different point of view. And so the conservation principles are not laws built into the universe or handed down by deity to govern the behavior of matter. They are principles governing the behavior of physicists.’ (Foft 82)
This argument commits the fallacy of equivocation — the term ‘invariant’ has changed its meaning between LN1 and LN2. The difference is decisive but rather subtle, owing to the different contexts in which the term can be used. We will tease the two meanings apart by defining covariance and symmetry, considering a number of test cases.
Galileo’s Ship: We can see where Stenger’s argument has gone wrong with a simple example, before discussing technicalities in later sections. Consider this delightful passage from Galileo regarding the brand of relativity that bears his name:
‘Shut yourself up with some friend in the main cabin below decks on some large ship, and have with you there some flies, butterflies, and other small flying animals. Have a large bowl of water with some fish in it; hang up a bottle that empties drop by drop into a wide vessel beneath it. With the ship standing still, observe carefully how the little animals fly with equal speed to all sides of the cabin. The fish swim indifferently in all directions; the drops fall into the vessel beneath; and, in throwing something to your friend, you need throw it no more strongly in one direction than another, the distances being equal; jumping with your feet together, you pass equal spaces in every direction. When you have observed all these things carefully, ...have the ship proceed with any speed you like, so long as the motion is uniform and not fluctuating this way and that. You will discover not the least change in all the effects named, nor could you tell from any of them whether the ship was moving or standing still.’ (Quoted in Healey (2007, chapter 6).).
Note carefully what Galileo is not saying. He is not saying that the situation can be viewed from a variety of different viewpoints and it looks the same. He is not saying that we can describe flight-paths of the butterflies using a coordinate system with any origin, orientation or velocity relative to the ship.
Rather, Galileo’s observation is much more remarkable. He is stating that the two situations, the stationary ship and moving ship, which are externally distinct are nevertheless internally indistinguishable. The two situations cannot be distinguished by means of measurements confined to each situation (Healey 2007, Chapter 6). These are not different descriptions of the same situation, but rather different situations with the same internal properties.
The reason why Galilean relativity is so shocking and counterintuitive is that there is no a priori reason to expect distinct situations to be indistinguishable. If you and your friend attempt to describe the butterfly in the stationary ship and end up with ‘uselessly different results’, then at least one of you has messed up your sums. If your friend tells you his point-of-view, you should be able to perform a mathematical transformation on your model and reproduce his model. None of this will tell you how the butterflies will fly when the ship is speeding on the open ocean. An Aristotelian butterfly would presumably be plastered against the aft wall of the cabin. It would not be heard to cry: ‘Oh, the subjectivity of it all!’
Galilean invariance, and symmetries in general, have nothing whatsoever to do with point-of-view invariance. A universe in which Galilean relativity did not hold would not wallow in subjectivity. It would be an objective, observable fact that the butterflies would fly differently in a speeding ship. This is Stenger’s confusion: PoVI does not imply symmetry.
Lagrangian Dynamics: We can see this same point in a more formal context. Lagrangian dynamics is a framework for physical theories that, while originally developed as a powerful approach to Newtonian dynamics, underlies much of modern physics. The method revolves around a mathematical function called the Lagrangian, L(t, qi, q̇i), where t is time, the variables qi parameterise the degrees of freedom (the ‘coordinates’), and an overdot denotes a time derivative (q̇i ≡ dqi/dt). For a system described by L, the equations of motion can be derived from L via the Euler–Lagrange equation: d/dt(∂L/∂q̇i) − ∂L/∂qi = 0.
One of the features of the Lagrangian formalism is that it is covariant. Suppose that we want to use different coordinates for our system, say si, that are expressed as functions of the old coordinates qi and t. We can express the Lagrangian L in terms of t, si and ṡi by substituting the new coordinates for the old ones. Crucially, the form of the Euler–Lagrange equation does not change — just replace q with s. In other words, it does not matter what coordinates we use. The equations take the same form in any coordinate system, and are thus said to be covariant. Note that this is true of any Lagrangian, and any (sufficiently smooth) coordinate transformation si(t, qj). Objectivity (and PoVI) are guaranteed.
Now, consider a specific Lagrangian L that has the following special property — there exists a continuous family of coordinate transformations that leave L unchanged. Such a transformation is called a symmetry (or isometry) of the Lagrangian. The simplest case is where a particular coordinate does not appear in the expression for L. Noether’s theorem tells us that, for each continuous symmetry, there will be a conserved quantity. For example, if time does not appear explicitly in the Lagrangian, then energy will be conserved.
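To see both notions in a toy calculation, here is a minimal sketch using Python with the sympy library (the free particle in polar coordinates is our illustrative choice, not an example taken from Foft). The Euler–Lagrange machinery applies in these coordinates as in any others (covariance), while the absence of θ from L (a symmetry) hands us a conserved quantity:

    import sympy as sp

    t, m = sp.symbols('t m', positive=True)
    r = sp.Function('r')(t)
    theta = sp.Function('theta')(t)

    # A free particle in the plane, written in polar coordinates.
    L = sp.Rational(1, 2) * m * (r.diff(t)**2 + r**2 * theta.diff(t)**2)

    # Covariance: the Euler-Lagrange equation takes the same form here
    # as in Cartesian coordinates. Apply it to theta:
    p_theta = sp.diff(L, theta.diff(t))     # conjugate momentum, m r^2 theta-dot
    el_theta = sp.diff(p_theta, t) - sp.diff(L, theta)

    # Symmetry: theta is absent from L, so dL/dtheta = 0 and the equation
    # of motion says p_theta (angular momentum) is conserved.
    print(sp.diff(L, theta))                # 0
    print(sp.Eq(el_theta, 0))               # d/dt(m r^2 theta-dot) = 0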
Note carefully the difference between covariance and symmetry. Both could justifiably be called ‘coordinate invariance’ but they are not the same thing. Covariance is a property of the entire Lagrangian formalism. A symmetry is a property of a particular Lagrangian L. Covariance holds with respect to all (sufficiently smooth) coordinate transformations. A symmetry is linked to a particular coordinate transformation. Covariance gives us no information whatsoever about which Lagrangian best describes a given physical scenario. Symmetries provide strong constraints on which Lagrangians are consistent with empirical data. Covariance is a mathematical fact about our formalism. Symmetries can be confirmed or falsified by experiment.
Lorentz Invariance: Let’s look more closely at some specific cases. Stenger applies his general PoVI argument to Einstein’s special theory of relativity:
‘Special relativity similarly results from the principle that the models of physics must be the same for two observers moving at a constant velocity with respect to one another. ...Physicists are forced to make their models Lorentz invariant so they do not depend on the particular point of view of one reference frame moving with respect to another.’
This claim is false. Physicists are perfectly free to postulate theories which are not Lorentz invariant, and a great deal of experimental and theoretical effort has been expended to this end. The compilation of Kostelecký & Russell (2011) cites 127 papers that investigate Lorentz violation. Pospelov & Romalis (2004) give an excellent overview of this industry, giving an example of a Lorentz-violating Lagrangian:
ℒ_LV ⊃ −bμ ψ̄γ5γ^μψ − ½Hμν ψ̄σ^μνψ + kμ ε^μνλσ Aν Fλσ,   (1)

where the fields bμ, kμ and Hμν are external vector and antisymmetric tensor backgrounds that introduce a preferred frame and therefore break Lorentz invariance; all other symbols have their usual meanings (e.g. Nagashima 2010). A wide array of laboratory, astrophysical and cosmological tests place impressively tight bounds on these fields. At the moment, the violation of Lorentz invariance is just a theoretical possibility. But that’s the point.
Ironically, the best cure for a conflation of ‘frame-dependent’ with ‘subjective’ is special relativity. The length of a rigid rod depends on the reference frame of the observer: if it is 2 metres long in its own rest frame, it will be 1 metre long in the frame of an observer passing at 87% of the speed of light6. It does not follow that the length of the rod is ‘subjective’, in the sense that the length of the rod is just the personal opinion of a given observer, or in the sense that these two different answers are ‘uselessly different’. It is an objective fact that the length of the rod is frame-dependent. Physics is perfectly capable of studying frame-dependent quantities, like the length of a rod, and frame-dependent laws, such as the Lagrangian in Equation 1.
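The arithmetic behind that example is elementary; a two-line Python sketch (the numbers are just those of the rod example above):

    import math

    v = 0.87                                  # speed as a fraction of c
    gamma = 1 / math.sqrt(1 - v**2)           # Lorentz factor, ~2.03
    print(f"gamma = {gamma:.2f}; a 2 m rod measures {2 / gamma:.2f} m")
    # Frame-dependent, yes; subjective, no: any observer can compute
    # any other observer's answer.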
General Relativity: We turn now to Stenger’s discussion of gravity.
‘Ask yourself this: If the gravitational force can be transformed away by going to a different reference frame, how can it be ‘real’? It can’t. We see that the gravitational force is an artifact, a ‘fictitious’ force just like the centrifugal and Coriolis forces. ...[If there were no gravity] then there would be no universe. ...[P]hysicists have to put gravity into any model of the universe that contains separate masses. A universe with separated masses and no gravity would violate point-of-view invariance. ...In general relativity, the gravitational force is treated as a fictitious force like the centrifugal force, introduced into models to preserve invariance between reference frames accelerating with respect to one another.’
These claims are mistaken. The existence of gravity is not implied by the existence of the universe, separate masses or accelerating frames.
Stenger’s view may be rooted in the rather persistent myth that special relativity cannot handle accelerating objects or frames, and so general relativity (and thus gravity) is required. The best remedy for this view is to sit down with the excellent textbook of Hartle (2003) and not get up until you’ve finished Chapter 5’s ‘systematic way of extracting the predictions for observers who are not associated with global inertial frames ...in the context of special relativity’. Special relativity is perfectly able to preserve invariance between reference frames accelerating with respect to one another. Physicists clearly don’t have to put gravity into any model of the universe that contains separate masses.
We can see this another way. None of the invariant/covariant properties of general relativity depend on the value of Newton’s constant G. In particular, we can set G = 0. In such a universe, the geometry of spacetime would not be coupled to its matter-energy content, and Einstein’s equation would read Rμν = 0. With no source term, local Lorentz invariance holds globally, giving the Minkowski metric of special relativity. Neither logical necessity nor PoVI demands the coupling of spacetime geometry to mass-energy. This G = 0 universe is a counterexample to Stenger’s assertion that no gravity means no universe.
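As a worked step (assuming nothing beyond the standard form of Einstein’s field equation), the G = 0 case reads:

    \[
      R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}
      \;\;\xrightarrow{\;G\,=\,0\;}\;\;
      R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 0,
    \]

and contracting with g^μν gives R = 0 in four dimensions, hence Rμν = 0.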
What of Stenger’s claim that general relativity is merely a fictitious force, to be derived from PoVI and ‘one or two additional assumptions’? Interpreting PoVI as what Einstein called general covariance, PoVI tells us almost nothing. General relativity is not the only covariant theory of spacetime (Norton 1995). As Misner, Thorne & Wheeler (1973, p. 302) note: ‘Any physical theory originally written in a special coordinate system can be recast in geometric, coordinate-free language. Newtonian theory is a good example, with its equivalent geometric and standard formulations. Hence, as a sieve for separating viable theories from nonviable theories, the principle of general covariance is useless.’ Similarly, Carroll (2003) tells us that the principle ‘Laws of physics should be expressed (or at least be expressible) in generally covariant form’ is ‘vacuous’. We can now identify the ‘additional assumptions’ that Stenger needs to derive general relativity. Given general covariance (or PoVI), the additional assumptions constitute the entire empirical content of the theory.
Finally, general relativity provides a perfect counterexample to Stenger’s conflation of covariance with symmetry. Einstein’s GR field equation is covariant — it takes the same form in any coordinate system, and applying a coordinate transformation to a particular solution of the GR equation yields another solution, both representing the same physical scenario. Thus, any solution of the GR equation is covariant, or PoVI. But it does not follow that a particular solution will exhibit any symmetries. There may be no conserved quantities at all. As Hartle (2003, pp. 176, 342) explains:
‘Conserved quantities ...cannot be expected in a general spacetime that has no special symmetries ...The conserved energy and angular momentum of particle orbits in the Schwarzschild geometry7 followed directly from its time displacement and rotational symmetries. ...But general relativity does not assume a fixed spacetime geometry. It is a theory of spacetime geometry, and there are no symmetries that characterize all spacetimes.’
The Standard Model of Particle Physics and Gauge Invariance: We turn now to particle physics, and particularly the gauge principle. Interpreting gauge invariance as ‘just a fancy technical term for point-of-view invariance’, Stenger says:
‘If [the phase of the wavefunction] is allowed to vary from point to point in space-time, Schrödinger’s time-dependent equation ...is not gauge invariant. However, if you insert a four-vector field into the equation and ask what that field has to be to make everything nice and gauge invariant, that field is precisely the four-vector potential that leads to Maxwell’s equations of electromagnetism! That is, the electromagnetic force turns out to be a fictitious force, like gravity, introduced to preserve the point-of-view invariance of the system. ...Much of the standard model of elementary particles also follows from the principle of gauge invariance.’ (Foft 86–88)
Remember the point that Stenger is trying to make: the laws of nature are the same in any universe which is point-of-view invariant.
Stenger’s discussion glosses over the major conceptual leap from global to local gauge invariance. Most discussions of the gauge principle are rather cautious at this point. Yang, who along with Mills first used the gauge principle as a postulate in a physical theory, commented that ‘We did not know how to make the theory fit experiment. It was our judgement, however, that the beauty of the idea alone merited attention’. Kaku (1993, p. 11), who provides this quote, says of the argument for local gauge invariance:
‘If the predictions of gauge theory disagreed with the experimental data, then one would have to abandon them, no matter how elegant or aesthetically satisfying they were. Gauge theorists realized that the ultimate judge of any theory was experiment.’
Similarly, Griffiths (2008) ‘knows of no compelling physical argument for insisting that global invariance should hold locally’ [emphasis original]. Aitchison & Hey (2002) says that this line of thought is ‘not compelling motivation’ for the step from global to local gauge invariance, and along with Pokorski (2000), who describes the argument as aesthetic, ultimately appeals to the empirical success of the principle for justification. Needless to say, these are not the views of physicists demanding that all possible universes must obey a certain principle8. We cannot deduce gauge invariance from PoVI.
Even with gauge invariance, we are still a long way from the standard model of particle physics. A gauge theory needs a symmetry group. Electromagnetism is based on U(1), the weak force SU(2), the strong force SU(3), and there are grand unified theories based on SU(5), SO(10), E8 and more. These are just the theories with a chance of describing our universe. From a theoretical point of view, there are any number of possible symmetries, e.g. SU(N) and SO(N) for any integer N (Schellekens 2008). The gauge group of the standard model, SU(3) × SU(2) × U(1), is far from unique.
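To put numbers on how unconstrained the choice is, here is a small Python sketch (pure group-theory bookkeeping with the standard dimension formulas; the particular groups sampled are our choice):

    # Number of generators (i.e. gauge bosons) of candidate gauge groups.
    def dim_su(n):                 # dim SU(N) = N^2 - 1
        return n * n - 1

    def dim_so(n):                 # dim SO(N) = N(N-1)/2
        return n * (n - 1) // 2

    sm = dim_su(3) + dim_su(2) + 1         # SU(3) x SU(2) x U(1); dim U(1) = 1
    print(f"Standard model: {sm} gauge bosons")   # 8 gluons + W+, W-, Z + photon

    for n in (5, 10, 17):          # SU(5), SO(10) are GUT candidates; 17 is arbitrary
        print(f"SU({n}): {dim_su(n)}   SO({n}): {dim_so(n)}")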
Conclusion: We can now see the flaw in Stenger’s argument. Premise LN1 should read: If our formulation of the laws of nature is to be objective, then it must be covariant. Premise LN2 should read: symmetries imply conserved quantities. Since ‘covariant’ and ‘symmetric’ are not synonymous, it follows that the conclusion of the argument is unproven, and we would argue that it is false. The conservation principles of this universe are not merely principles governing our formulation of the laws of nature. Noether’s theorem does not allow us to pull physically significant conclusions out of a mathematical hat. If you want to know whether a certain symmetry holds in nature, you need a laboratory or a telescope, not a blackboard. Symmetries tell us something about the physical universe.
4.1.2 Is Symmetry Enough?
Suppose that Stenger were correct regarding symmetries, that any objective description of the universe must incorporate them. One of the features of the universe as we currently understand it is that it is not perfectly symmetric. Indeed, intelligent life requires a measure of asymmetry. For example, the perfect homogeneity and isotropy of the Robertson–Walker spacetime precludes the possibility of any form of complexity, including life. Sakharov (1967) showed that for the universe to contain sufficient amounts of ordinary baryonic matter, interactions in the early universe must violate baryon number conservation, charge-symmetry and charge-parity-symmetry, and must spend some time out of thermal equilibrium. Supersymmetry, too, must be a broken symmetry in any life-permitting universe, since the bosonic partner of the electron (the selectron) would make chemistry impossible (see the discussion in Susskind 2005, p. 250). As Pierre Curie said, it is asymmetry that creates the phenomenon.
One of the most important concepts in modern physics is spontaneous symmetry breaking (SSB). The power of SSB is that it allows us
‘...to understand how the conclusions of the Noether theorem can be evaded and how a symmetry of the dynamics cannot be realized as a mapping of the physical configurations of the system.’ (Strocchi 2007, p. 3)
SSB allows the laws of nature to retain their symmetry and yet have asymmetric solutions. Even if the symmetries of the laws of nature were logically necessary, it would still be an open question as to precisely which symmetries were broken in our universe and which were unbroken.
4.1.3 Changing the Laws of Nature
What if the laws of nature were different? Stenger says:
‘...what about a universe with a different set of ‘laws’? There is not much we can say about such a universe, nor do we need to. Not knowing what any of their parameters are, no one can claim that they are fine-tuned.’ (Foft 69)
In reply, fine-tuning isn’t about what the parameters and laws are in a particular universe. Given some other set of laws, we ask: if a universe were chosen at random from the set of universes with those laws, what is the probability that it would support intelligent life? If that probability is robustly small, then we conclude that that region of possible-physics-space contributes negligibly to the total life-permitting subset. It is easy to find examples of such claims.
1. A universe governed by Maxwell’s Laws ‘all the way down’ (i.e. with no quantum regime at small scales) would not have stable atoms — electrons radiate their kinetic energy and spiral rapidly into the nucleus — and hence no chemistry (Barrow & Tipler 1986, p. 303). We don’t need to know what the parameters are to know that life in such a universe is plausibly impossible.
2. If electrons were bosons, rather than fermions, then they would not obey the Pauli exclusion principle. There would be no chemistry.
3. If gravity were repulsive rather than attractive, then matter wouldn’t clump into complex structures. Remember: your density, thank gravity, is 10^30 times greater than the average density of the universe.
4. If the strong force were a long rather than short-range force, then there would be no atoms. Any structures that formed would be uniform, spherical, undifferentiated lumps, of arbitrary size and incapable of complexity.
5. If, in electromagnetism, like charges attracted and opposites repelled, then there would be no atoms. As above, we would just have undifferentiated lumps of matter.
6. The electromagnetic force allows matter to cool into galaxies, stars, and planets. Without such interactions, all matter would be like dark matter, which can only form into large, diffuse, roughly spherical haloes of matter whose only internal structure consists of smaller, diffuse, roughly spherical subhaloes.
We should be cautious, however. Whatever the problems of defining the possible range of a given parameter, we are in a significantly more nebulous realm when we consider the set of all possible physical laws. It is not clear how such a fine-tuning case could be formalised, whatever its intuitive appeal.
4.2 The Wedge
Moving from the laws of nature to the parameters of those laws, Stenger makes the following general argument against supposed examples of fine-tuning:
‘[T]he examples of fine-tuning given in the theist literature ...vary one parameter while holding all the rest constant. This is both dubious and scientifically shoddy. As we shall see in several specific cases, changing one or more other parameters can often compensate for the one that is changed.’ (Foft 70)
To illustrate this point, Stenger introduces ‘the wedge’. I have produced my own version in Figure 1. Here, x and y are two physical parameters that can vary from zero to xmax and ymax, where we can allow these values to approach infinity if so desired. The point (x0, y0) represents the values of x and y in our universe. The life-permitting range is the shaded wedge. Stenger’s point is that varying only one parameter at a time only explores that part of parameter space which is vertically or horizontally adjacent to (x0, y0), thus missing most of parameter space. The probability of a life-permitting universe, assuming that the probability distribution is uniform in (x, y) — which, as Stenger notes, is ‘the best we can do’ (Foft 72) — is the ratio of the area inside the wedge to the area inside the dashed box.
Figure 1 The ‘wedge’: x and y are two physical parameters that can vary up to some xmax and ymax, where we can allow these values to approach infinity if so desired. The point (x0, y0) represents the values of x and y in our universe. The life-permitting range is the shaded wedge. Varying only one parameter at a time only explores that part of parameter space which is vertically or horizontally adjacent to (x0, y0), thus missing most of parameter space.
4.2.1 The Wedge is a Straw Man
In response, fine-tuning relies on a number of independent life-permitting criteria. Fail any of these criteria, and life becomes dramatically less likely, if not impossible. When parameter space is explored in the scientific literature, it rarely (if ever) looks like the wedge. We instead see many intersecting wedges. Here are two examples.
Barr & Khan (2007) explored the parameter space of a model in which up-type and down-type fermions acquire mass from different Higgs doublets. As a first step, they vary the masses of the up and down quarks. The natural scale for these masses ranges over 60 orders of magnitude and is illustrated in Figure 2 (top left). The upper limit is provided by the Planck scale; the lower limit from dynamical breaking of chiral symmetry by QCD; see Barr & Khan (2007) for a justification of these values. Figure 2 (top right) zooms in on a region of parameter space, showing boundaries of 9 independent life-permitting criteria:
Figure 2 Top row: the left panel shows the parameter space of the masses of the up and down quark. Note that the axes are log_e not log_10; the axes span ~60 orders of magnitude. The right panel shows a zoom-in of the small box. The lines show the limits of different life-permitting criteria, as calculated by Barr & Khan (2007) and explained in the text. The small green region marked ‘potentially viable’ shows where all these constraints are satisfied. Bottom row: Anthropic limits on some cosmological variables: the cosmological constant Λ (expressed as an energy density ρΛ in Planck units), the amplitude of primordial fluctuations Q, and the matter to photon ratio ξ. The white region shows where life can form. The coloured regions show where various life-permitting criteria are not fulfilled, as explained in the text. Figure from Tegmark et al. (2006). Figures reprinted with permission; Copyright (2006, 2007) by the American Physical Society.
1. Above the blue line, there is only one stable element, which consists of a single particle Δ++. This element has the chemistry of helium — an inert, monatomic gas (above 4 K) with no known stable chemical compounds.
2. Above this red line, the deuteron is strongly unstable, decaying via the strong force. The first step in stellar nucleosynthesis in hydrogen burning stars would fail.
3. Above the green curve, neutrons in nuclei decay, so that hydrogen is the only stable element.
4. Below this red curve, the diproton is stable9. Two protons can fuse to helium-2 via a very fast electromagnetic reaction, rather than the much slower, weak nuclear pp-chain.
5. Above this red line, the production of deuterium in stars absorbs energy rather than releasing it. Also, the deuterium is unstable to weak decay.
6. Below this red line, a proton in a nucleus can capture an orbiting electron and become a neutron. Thus, atoms are unstable.
7. Below the orange curve, isolated protons are unstable, leaving no hydrogen left over from the early universe to power long-lived stars and play a crucial role in organic chemistry.
8. Below this green curve, protons in nuclei decay, so that any atoms that formed would disintegrate into a cloud of neutrons.
9. Below this blue line, the only stable element consists of a single particle Δ–, which can combine with a positron to produce an element with the chemistry of hydrogen. A handful of chemical reactions are possible, with their most complex product being (an analogue of) H2.
A second example comes from cosmology. Figure 2 (bottom row) comes from Tegmark et al. (2006). It shows the life-permitting range for two slices through cosmological parameter space. The parameters shown are: the cosmological constant Λ (expressed as an energy density ρΛ in Planck units), the amplitude of primordial fluctuations Q, and the matter to photon ratio ξ. A star indicates the location of our universe, and the white region shows where life can form. The left panel shows ρΛ vs. Q^3ξ^4. The red region shows universes that are plausibly life-prohibiting — too far to the right and no cosmic structure forms; stray too low and cosmic structures are not dense enough to form stars and planets; too high and cosmic structures are too dense to allow long-lived stable planetary systems. Note well the logarithmic scale — the lack of a left boundary to the life-permitting region is because we have scaled the axis so that ρΛ = 0 is at x = –∞. The universe re-collapses before life can form for ρΛ ≲ –10^–121 (Peacock 2007). The right panel shows similar constraints in the Q vs. ξ space. We see similar constraints relating to the ability of galaxies to successfully form stars by fragmentation due to gas cooling and for the universe to form anything other than black holes. Note that we are changing ξ while holding ξbaryon constant, so the left limit of the plot is provided by the condition ξ ≥ ξbaryon. See Table 4 of Tegmark et al. (2006) for a summary of 8 anthropic constraints on the 7 dimensional parameter space (α, β, mp, ρΛ, Q, ξ, ξbaryon).
Examples could be multiplied, and the restriction to a 2D slice through parameter space is due to the inconvenient unavailability of higher dimensional paper. These two examples show that the wedge, by only considering a single life-permitting criterion, seriously distorts typical cases of fine-tuning by committing the sequential juggler fallacy (Section 2). Stenger further distorts the case for fine-tuning by saying:
‘In the fine-tuning view, there is no wedge and the point has infinitesimal area, so the probability of finding life is zero.’ (Foft 70)
No reference is given, and this statement is not true of the scientific literature. The wedge is a straw man.
4.2.2 The Straw Man is Winning
The wedge, distortion that it is, would still be able to support a fine-tuning claim. The probability calculated by varying only one parameter is actually an overestimate of the probability calculated using the full wedge. Suppose the full life-permitting criterion that defines the wedge is

(1 − ε) y0/x0 ≤ y/x ≤ (1 + ε) y0/x0,

where ε is a small number quantifying the allowed deviation from the value of y/x in our universe. Now suppose that we hold x constant at its value in our universe. We conservatively estimate the possible range of y by y0. Then, the probability of a life-permitting universe is Py = 2ε. Now, if we calculate the probability over the whole wedge, we find that Pw ≤ ε/(1 + ε) ≈ ε, where we have an upper limit because we have ignored the area with y inside Δy, as marked in Figure 1. Thus10 Py ≥ Pw.
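This inequality is easy to check numerically. Below is a minimal Monte Carlo sketch in Python; the specific values (x0 = y0 = 1, a 10 × 10 box, ε = 0.01, and a y-range of width y0 centred on y0) are illustrative assumptions of ours, not values taken from Foft or Figure 1:

    import random

    random.seed(1)
    eps = 0.01                  # allowed fractional deviation of y/x (assumed)
    x0 = y0 = 1.0               # our universe's values (illustrative)
    xmax = ymax = 10.0          # the dashed box of Figure 1 (assumed size)
    N = 1_000_000

    def life_permitting(x, y):
        # The wedge criterion: y/x within a fraction eps of y0/x0.
        return x > 0 and abs(y / x - y0 / x0) <= eps * (y0 / x0)

    # P_w: vary x and y together, uniformly over the whole box.
    p_w = sum(life_permitting(random.uniform(0, xmax), random.uniform(0, ymax))
              for _ in range(N)) / N

    # P_y: hold x = x0, vary y over a range of width y0 around y0.
    p_y = sum(life_permitting(x0, random.uniform(y0 / 2, 3 * y0 / 2))
              for _ in range(N)) / N

    print(f"P_w ~ {p_w:.4f}")   # ~0.010, i.e. about eps
    print(f"P_y ~ {p_y:.4f}")   # ~0.020, i.e. about 2*eps >= P_w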
It is thus not necessarily ‘scientifically shoddy’ to vary only one variable. Indeed, as scientists we must make these kinds of assumptions all the time — the question is how accurate they are. Under fairly reasonable assumptions (uniform probability etc.), varying only one variable provides a useful estimate of the relevant probability. The wedge thus commits the flippant funambulist fallacy (Section 2). If ε is small enough, then the wedge is a tightrope. We have opened up more parameter space in which life can form, but we have also opened up more parameter space in which life cannot form. As Dawkins (1986) has rightly said: ‘however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive’.
This conclusion might be avoided with a non-uniform prior probability. One can show that a power-law prior has no significant effect on the wedge. Any other prior raises a problem, as explained by Aguirre (2007):
‘...it is assumed that [the prior] is either flat or a simple power law, without any complicated structure. This can be done just for simplicity, but it is often argued to be natural. ...If [the prior] is to have an interesting structure over the relatively small range in which observers are abundant, there must be a parameter of order the observed [one] in the expression for [the prior]. But it is precisely the absence of this parameter that motivated the anthropic approach.’
In short, to significantly change the probability of a life-permitting universe, we would need a prior that centres close to the observed value, and has a narrow peak. But this simply exchanges one fine-tuning for two — the centre and peak of the distribution.
There is, however, one important lesson to be drawn from the wedge. If we vary x only and calculate Px, and then vary y only and calculate Py, we must not simply multiply these to get Pw = Px Py. This will certainly underestimate the probability inside the wedge, assuming that there is only a single wedge.
4.3 Entropy
We turn now to cosmology. The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low entropy state. Stenger argues in response that if the universe starts out at the Planck time as a sphere of radius equal to the Planck length, then its entropy is as great as it could possibly be, equal to that of a Planck-sized black hole (Bekenstein 1973; Hawking 1975). As the universe expands, an entropy ‘gap’ between the actual and maximum entropy opens up in regions smaller than the observable universe, allowing order to form.
Note that Stenger’s proposed solution requires only two ingredients — the initial, high-entropy state, and the expansion of the universe to create an entropy gap. In particular, Stenger is not appealing to inflation to solve the entropy problem. We will do the same in this section, coming to a discussion of inflation later.
There are a number of problems with Stenger’s argument, the most severe of which arises even if we assume that his calculation is correct. We have been asked to consider the universe at the Planck time, and in particular a region of the universe that is the size of the Planck length. Let’s see what happens to this comoving volume as the universe expands. 13.7 billion years of (concordance model) expansion will blow up this Planck volume until it is roughly the size of a grain of sand. A single Planck volume in a maximum entropy state at the Planck time is a good start but hardly sufficient. To make our universe, we would need around 10^90 such Planck volumes, all arranged to transition to a classical expanding phase within a temporal window 100 000 times shorter than the Planck time11. This brings us to the most serious problem with Stenger’s reply.
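The numbers in this paragraph are easy to check. A back-of-envelope Python sketch, assuming only adiabatic expansion (scale factor a ∝ 1/T) and round textbook values, chosen by us, for the Planck length, Planck temperature, CMB temperature and the comoving radius of the observable universe:

    import math

    l_planck = 1.6e-35     # m (assumed round value)
    T_planck = 1.4e32      # K (assumed round value)
    T_cmb = 2.7            # K, today
    R_obs = 4.4e26         # m, comoving radius of the observable universe

    a_ratio = T_planck / T_cmb          # total expansion factor, a ~ 1/T
    print(f"A Planck length today: {l_planck * a_ratio:.0e} m")  # ~8e-4 m, a sand grain

    R_then = R_obs / a_ratio            # today's observable universe at the Planck time
    n_vol = (R_then / l_planck) ** 3
    print(f"Planck volumes required: 10^{math.log10(n_vol):.0f}")  # ~10^89-10^90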
Let’s remind ourselves of what the entropy problem is, as expounded by Penrose (1979). Consider our universe at t1 = one second after the big bang. Spacetime is remarkably smooth, represented by the Robertson-Walker metric to better than one part in 10^5. Now run the clock forward. The tiny inhomogeneities grow under gravity, forming deeper and deeper potential wells. Some will collapse into black holes, creating singularities in our once pristine spacetime. Now suppose that the universe begins to recollapse. Unless the collapse of the universe were to reverse the arrow of time12, entropy would continue to increase, creating more and larger inhomogeneities and black holes as structures collapse and collide. If we freeze the universe at t2 = one second before the big crunch, we see a spacetime that is highly inhomogeneous, littered with lumps and bumps, and pockmarked with singularities.
Penrose’s reasoning is very simple. If we started at t1 with an extremely homogeneous spacetime, and then allowed a few billion years of entropy increasing processes to take their toll, and ended at t2 with an extremely inhomogeneous spacetime, full of black holes, then we must conclude that the t2 spacetime represents a significantly higher entropy state than the t1 spacetime. We conclude that we know what a high-entropy big bang spacetime looks like, and it looks nothing like the state of our universe in its earliest stages. Why didn’t our universe begin in a high entropy, highly inhomogeneous state? Why did our universe start off in such a special, improbable, low-entropy state?
Let’s return to Stenger’s proposed solution. After introducing the relevant concepts, he says:
‘...this does not mean that the local entropy is maximal. The entropy density of the universe can be calculated. Since the universe is homogeneous, it will be the same on all scales.’ (Foft 112)
Stenger simply assumes that the universe is homogeneous and isotropic. We can see this also in his use of the Friedmann equation, which assumes that spacetime is homogeneous and isotropic. Not surprisingly, once homogeneity and isotropy have been assumed, the entropy problem doesn’t seem so hard.
We conclude that Stenger has failed to solve the entropy problem. He has presented the problem itself as its solution. Homogeneous, isotropic expansion cannot solve the entropy problem — it is the entropy problem. Stenger’s assertion that ‘the universe starts out with maximum entropy or complete disorder’ is false. A homogeneous, isotropic spacetime is an incredibly low entropy state. Penrose (1989) warned of precisely this brand of failed solution two decades ago:
‘Virtually all detailed investigations [of entropy and cosmology] so far have taken the FRW models as their starting point, which, as we have seen, totally begs the question of the enormous number of degrees of freedom available in the gravitational field ...The second law of thermodynamics arises because there was an enormous constraint (of a very particular kind) placed on the universe at the beginning of time, giving us the very low entropy that we need in order to start things off.’
Cosmologists repented of such mistakes in the 1970s and 80s.
Stenger’s ‘biverse’ (Foft 142) doesn’t solve the entropy problem either. Once again, homogeneity and isotropy are simply assumed, with the added twist that instead of a low entropy initial state, we have a low entropy middle state. This makes no difference — the reason that a low entropy state requires explanation is that it is improbable. Moving the improbable state into the middle does not make it any more probable. As Carroll (2008) notes, ‘an unnatural low-entropy condition [that occurs] in the middle of the universe’s history (at the bounce) ...passes the buck on the question of why the entropy near what we call the big bang was small’.13
4.4 Inflation
4.4.1 Did Inflation Happen?
We turn now to cosmic inflation, which proposes that the universe underwent a period of accelerated expansion in its earliest stages. The achievements of inflation are truly impressive — in one fell swoop, the universe is sent on its expanding way, the flatness, horizon, and monopole problems are solved and we have concrete, testable and seemingly correct predictions for the origin of cosmic structure. It is a brilliant idea, and one that continues to defy all attempts at falsification. Since life requires an almost-flat universe (Barrow & Tipler 1986, p. 408ff.), inflation is potentially a solution to a particularly impressive fine-tuning problem — sans inflation, the density of a life-permitting universe at the Planck time must be tuned to 60 decimal places.
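The ‘60 decimal places’ figure follows from the standard FRW scalings: |Ω − 1| grows roughly as t in the radiation era and as t^(2/3) in the matter era. A rough Python sketch, with round epoch times assumed by us:

    import math

    yr = 3.156e7                 # seconds per year
    t_planck = 5.4e-44           # s
    t_eq = 5.0e4 * yr            # matter-radiation equality, ~50,000 yr (assumed)
    t_now = 13.8e9 * yr          # present age of the universe

    growth = (t_eq / t_planck) * (t_now / t_eq) ** (2 / 3)
    print(f"|Omega - 1| grows by ~10^{math.log10(growth):.0f}")
    # ~10^59: for |Omega - 1| to be of order unity or less today, the
    # density at the Planck time must match the critical density to
    # roughly 60 decimal places.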
Inflation solves this fine-tuning problem by invoking a dynamical mechanism that drives the universe towards flatness. The first question we must ask is: did inflation actually happen? The evidence is quite strong, though not indubitable (Turok 2002; Brandenberger 2011). There are a few things to keep in mind. Firstly, inflation isn’t a specific model as such; it is a family of models which share the desirable trait of having an early epoch of accelerating expansion. Inflation is an effect, rather than a cause. There is no physical theory that predicts the form of the inflaton potential. Different potentials, and different initial conditions for the same potential, will produce different predictions.
While there are predictions shared by a wide variety of inflationary potentials, these predictions are not unique to inflation. Inflation predicts a Gaussian random field of density fluctuations, but thanks to the central limit theorem this isn’t particularly unique (Peacock 1999, p. 342, 503). Inflation predicts a nearly scale-invariant spectrum of fluctuations, but such a spectrum was proposed for independent reasons by Harrison (1970) and Zel'dovich (1972), a decade before inflation. Inflation is a clever solution of the flatness and horizon problems, but could be rendered unnecessary by a quantum-gravity theory of initial conditions. The evidence for inflation is impressive but circumstantial.
4.4.2 Can Inflation Explain Fine-Tuning?
Note the difference between this section and the last. Is inflation itself fine-tuned? This is no mere technicality — if the solution is just as fine-tuned as the problem, then no progress has been made. Inflation, to set up a life-permitting universe, must do the following14:
1. I1. There must be an inflaton field. To make the expansion of the universe accelerate, there must exist a form of energy (a field) capable of satisfying the so-called Slow Roll Approximation (SRA), which is equivalent to requiring that the potential energy of the field is much greater than its kinetic energy, giving the field negative pressure.
2. I2. Inflation must start. There must come a time in the history of the universe when the energy density of the inflaton field dominates the total energy density of the universe, dictating its dynamics.
3. I3. Inflation must last. While the inflaton field controls the dynamics of the expansion of the universe, we need it to obey the slow roll conditions for a sufficiently long period of time. The ‘amount of inflation’ is usually quantified by Ne, the number of e-folds of the size of the universe. To solve the horizon and flatness problems, this number must be greater than ~60.
4. I4. Inflation must end. The dynamics of the expansion of the universe will (if it expands forever) eventually be dominated by the energy component with the most negative equation of state w = pressure/energy density. Matter has w = 0, radiation w = 1/3, and typically during inflation, the inflaton field has w ≈ –1. Thus, once inflation takes over, there must be some special reason for it to stop; otherwise, the universe would maintain its exponential expansion and no complex structure would form.
5. I5. Inflation must end in the right way. Inflation will have exponentially diluted the mass-energy density of the universe — it is this feature that allows inflation to solve the monopole problem. Once we are done inflating the universe, we must reheat the universe, i.e. refill it with ordinary matter. We must also ensure that the post-inflation field doesn’t possess a large, negative potential energy, which would cause the universe to quickly recollapse.
6. I6. Inflation must set up the right density perturbations. Inflation must result in a universe that is very homogeneous, but not perfectly homogeneous. Inhomogeneities will grow via gravitational instability to form cosmic structures. The level of inhomogeneity (Q) is subject to anthropic constraints, which we will discuss in Section 4.5.
The question now is: which of these achievements come naturally to inflation, and which need some careful tuning of the inflationary dials? I1 is a bare hypothesis — we know of no deeper reason why there should be an inflaton field at all. It was hoped that the inflaton field could be the Higgs field (Guth 1981). Alas, it wasn’t to be, and it appears that the inflaton’s sole raison d’être is to cause the universe’s expansion to briefly accelerate. There is no direct evidence for the existence of the inflaton field.
We can understand many of the remaining conditions through the work of Tegmark (2005), who considered a wide range of inflaton potentials using Gaussian random fields. The potential is of the form V(φ) = m_v^4 f(φ/m_h), where m_v and m_h are the characteristic vertical and horizontal mass scales, and f is a dimensionless function with values and derivatives of order unity. For initial conditions, Tegmark ‘sprays starting points randomly across the potential surface’. Figure 3 shows a typical inflaton potential.
Figure 3 An example of a randomly-generated inflaton potential. Thick lines show where the Slow Roll Approximation (SRA) holds; thin lines show where it fails. The stars show four characteristic initial conditions. Three-pointed: the inflaton starts outside the SRA regions and does not re-enter, so there is no inflation. Four-pointed: successful inflation. Inflation will have a beginning, an end, and the post-inflationary vacuum energy is sufficiently small to allow the growth of structure. Five-pointed: inflation occurs, but the post-inflation field has a large, negative potential energy, which would cause the universe to quickly recollapse. Six-pointed: inflation never ends, and the universe contains no ordinary matter and no structure. Figure from Tegmark (2005), reproduced with permission of IOP Publishing Ltd.
Requirement I2 will be discussed in more detail below. For now we note that the inflaton must either begin or be driven into a region in which the SRA holds in order for the universe to inflate, as shown by the thick lines in Figure 3.
Requirement I3 comes rather naturally to inflation: Peacock (1999, p. 337) shows that the requirement that inflation produce a large number of e-folds is essentially the same as the requirement that inflation happen in the first place (i.e. SRA), namely φ_start ≫ m_Pl. This assumes that the potential is relatively smooth, and that inflation terminates at a value of the field (φ) rather smaller than its value at the start. There is another problem lurking, however. If inflation lasts for 70 e-folds (for GUT scale inflation), then all scales inside the Hubble radius today started out with physical wavelength smaller than the Planck scale at the beginning of inflation (Brandenberger 2011). The predictions of inflation (especially the spectrum of perturbations), which use general relativity and a semi-classical description of matter, must omit relevant quantum gravitational physics. This is a major unknown — trans-Planckian effects may even prevent the onset of inflation.
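Peacock’s point is easy to reproduce for a toy model. A Python sketch assuming a quadratic potential V = ½m²φ² (our choice, not a potential endorsed by Foft) and the standard slow-roll expressions ε_V = (m_Pl²/16π)(V′/V)² and N_e = (8π/m_Pl²)∫(V/V′)dφ:

    import math

    # Units: m_Pl = 1. For V = (1/2) m^2 phi^2, V'/V = 2/phi, so
    # eps_V = 1/(4 pi phi^2); inflation ends when eps_V reaches 1.
    phi_end = 1 / (2 * math.sqrt(math.pi))

    def n_efolds(phi_start):
        # N_e = 8*pi * integral of phi/2 from phi_end to phi_start
        return 2 * math.pi * (phi_start**2 - phi_end**2)

    for phi in (1.0, 2.0, 3.2):
        print(f"phi_start = {phi:.1f} m_Pl  ->  N_e = {n_efolds(phi):5.1f}")
    # N_e > 60 requires phi_start ~ 3 m_Pl: the field must indeed start
    # well above the Planck scale.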
I4 is non-trivial. The inflaton potential (or, more specifically, the region of the inflaton potential which actually determines the evolution of the field) must have a region in which the slow-roll approximation does not hold. If the inflaton rolls into a local minimum (at φ0) while the SRA still holds (which requires V(φ0) ≫ (m_Pl^2/8π) d^2V/dφ^2|φ0; Peacock 1999, p. 332), then inflation never ends.
Tegmark (2005) asks what fraction of initial conditions for the inflaton field are successful, where success means that the universe inflates, inflation ends and the universe doesn’t thereafter meet a swift demise via a big crunch. The result is shown in Figure 4.
Figure 4 The thick black line shows the ‘success rate’ of inflation, for a model with m_h/m_Pl as shown on the x-axis and m_v = 0.001 m_Pl. (This value has been chosen to maximise the probability of Q = Q_observed ≈ 2 × 10^–5). The success rate is at most ~0.1%. The other coloured curves show predictions for other cosmological parameters. The lower coloured regions are for m_v = 0.001 m_Pl; the upper coloured regions are for m_v = m_h. Figure adapted from Tegmark (2005), reproduced with permission of IOP Publishing Ltd.
The success rate peaks at ~0.1 percent, and drops rapidly as m_h increases or decreases away from m_Pl. Even with a scalar field, inflation is far from guaranteed.
If inflation ends, we need its energy to be converted into ordinary matter (Condition I5). Inflation must not result in a universe filled with pure radiation or dark matter, which cannot form complex structures. Typically, the inflaton will dump its energy into radiation. The temperature must be high enough to take advantage of baryon-number-violating physics for baryogenesis, and for γ + γ → particle + antiparticle reactions to create baryonic matter, but low enough not to create magnetic monopoles. With no physical model of the inflaton, the necessary coupling between the inflaton and ordinary matter/radiation is another postulate, but not an implausible one.
Requirement I6 brought about the downfall of ‘old’ inflation. When this version of inflation ended, it did so in expanding bubbles. Each bubble is too small to account for the homogeneity of the observed universe, and reheating only occurs when bubbles collide. As the space between the bubbles is still inflating, homogeneity cannot be achieved. New models of inflation have been developed which avoid this problem. More generally, the value of Q that results from inflation depends on the potential and initial conditions. We will discuss Q further in Section 4.5.
Perhaps the most pressing issue with inflation is hidden in requirement I2. Inflation is supposed to provide a dynamical explanation for the seemingly very fine-tuned initial conditions of the standard model of cosmology. But does inflation need special initial conditions? Can inflation act on generic initial conditions and produce the apparently fine-tuned universe we observe today? Hollands & Wald (2002b)15 contend not, for the following reason. Consider a collapsing universe. It would require an astonishing sequence of correlations and coincidences for the universe, in its final stages, to suddenly and coherently convert all its matter into a scalar field with just enough kinetic energy to roll to the top of its potential and remain perfectly balanced there for long enough to cause a substantial era of ‘deflation’. The region of final-condition-space that results from deflation is thus much smaller than the region that does not result from deflation. Since the relevant physics is time-reversible16, we can simply run the tape backwards and conclude that the initial-condition-space is dominated by universes that fail to inflate.
Readers will note the similarity of this argument to Penrose’s argument from Section 4.3. This intuitive argument can be formalised using the work of Gibbons, Hawking & Stewart (1987), who developed the canonical measure on the set of solutions of Einstein’s equation of General Relativity. A number of authors have used the Gibbons–Hawking–Stewart canonical measure to calculate the probability of inflation; see Hawking & Page (1988), Gibbons & Turok (2008) and references therein. We will summarise the work of Carroll & Tam (2010), who ask what fraction of universes that evolve like our universe since matter-radiation equality could have begun with inflation. Crucially, they consider the role played by perturbations:
Perturbations must be sub-dominant if inflation is to begin in the first place (Vachaspati & Trodden 1999), and by the end of inflation only small quantum fluctuations in the energy density remain. It is therefore a necessary (although not sufficient) condition for inflation to occur that perturbations be small at early times. ...the fraction of realistic cosmologies that are eligible for inflation is therefore P(inflation) ≈ 10^(–6.6×10^7).
Carroll & Tam casually note: ‘This is a small number’, and it is in fact an overestimate. A negligibly small fraction of universes that resemble ours at late times experience an early period of inflation. Carroll & Tam (2010) conclude that while inflation is not without its attractions (e.g. it may give a theory of initial conditions a slightly easier target to hit at the Planck scale), ‘inflation by itself cannot solve the horizon problem, in the sense of making the smooth early universe a natural outcome of a wide variety of initial conditions’. Note that this argument also shows that inflation, in and of itself, cannot solve the entropy problem17.
Let’s summarise. Inflation is a wonderful idea; in many ways it seems irresistible (Liddle 1995). However, we do not have a physical model, and even if we had such a model, ‘although inflationary models may alleviate the ‘fine tuning’ in the choice of initial conditions, the models themselves create new ‘fine tuning’ issues with regard to the properties of the scalar field’ (Hollands & Wald 2002b). To pretend that the mere mention of inflation makes a life-permitting universe ‘100 percent’ inevitable (Foft 245) is naïve in the extreme, a cane toad solution. For a popular-level discussion of many of the points raised in our discussion of inflation, see Steinhardt (2011).
4.4.3 Inflation as a Case Study
Suppose that inflation did solve the fine-tuning of the density of the universe. Is it reasonable to hope that all fine-tuning cases could be solved in a similar way? We contend not, because inflation has a target. Let’s consider the range of densities that the universe could have had at some point in its early history. One of these densities is physically singled out as special — the critical density18. Now let’s note the range of densities that permit the existence of cosmic structure in a long-lived universe. We find that this range is very narrow. Very conveniently, this range neatly straddles the critical density.
We can now see why inflation has a chance. There is in fact a three-fold coincidence — A: the density needed for life, B: the critical density, and C: the actual density of our universe are all aligned. B and C are physical parameters, and so it is possible that some physical process can bring the two into agreement. The coincidence between A and B then creates the required anthropic coincidence (A and C). If, for example, life required a universe with a density (say, just after reheating) 10 times less than critical, then inflation would do a wonderful job of making all universes uninhabitable.
Inflation thus represents a very special case. Waiting inside the life-permitting range (L) is another physical parameter (p). Aim for p and you will get L thrown in for free. This is not true of the vast majority of fine-tuning cases. There is no known physical scale waiting in the life-permitting range of the quark masses, fundamental force strengths or the dimensionality of spacetime. There can be no inflation-like dynamical solution to these fine-tuning problems because dynamical processes are blind to the requirements of intelligent life.
What if, unbeknownst to us, there was such a fundamental parameter? It would need to fall into the life-permitting range. As such, we would be solving a fine-tuning problem by creating at least one more. And we would also need to posit a physical process able to dynamically drive the value of the quantity in our universe toward p.
4.5 The Amplitude of Primordial Fluctuations Q
Q, the amplitude of primordial fluctuations, is one of Martin Rees’ Just Six Numbers. In our universe, its value is Q ≈ 2 × 10^–5, meaning that in the early universe the density at any point was typically within 1 part in 100 000 of the mean density. What if Q were different?
‘If Q were smaller than 10^–6, gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial ‘mix’ of atoms, dark energy and radiation were the same as our own. On the other hand, a universe where Q were substantially larger than 10^–5 — where the initial ‘ripples’ were replaced by large-amplitude waves — would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn’t fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe ...Stars would be packed too close together and buffeted too frequently to retain stable planetary systems.’ (Rees 1999, p. 115)
Stenger has two replies:
‘[T]he inflationary model predicted that the deviation from smoothness should be one part in 100 000. This prediction was spectacularly verified by the Cosmic Background Explorer (COBE) in 1992.’ (Foft 106)
‘While heroic attempts by the best minds in cosmology have not yet succeeded in calculating the magnitude of Q, inflation theory successfully predicted the angular correlation across the sky that has been observed.’ (Foft 206)
Note that the first part of the quote contradicts the second part. We are first told that inflation predicts Q = 10^–5, and then we are told that inflation cannot predict Q at all. Both claims are false. A given inflationary model will predict Q, and it will only predict a life-permitting value for Q if the parameters of the inflaton potential are suitably fine-tuned. As Turok (2002) notes, ‘to obtain density perturbations of the level required by observations ...we need to adjust the coupling μ [for a power law potential μφ^n] to be very small, ~10^–13 in Planck units. This is the famous fine-tuning problem of inflation’; see also Barrow & Tipler (1986, p. 437) and Brandenberger (2011). Rees’ life-permitting range for Q implies a fine-tuning of the inflaton potential of ~10^–11 with respect to the Planck scale. Tegmark (2005, particularly figure 11) argues that on very general grounds we can conclude that life-permitting inflation potentials are highly unnatural.
Stenger’s second reply is to ask,
‘...is an order of magnitude fine-tuning? Furthermore, Rees, as he admits, is assuming all other parameters are unchanged. In the first case where Q is too small to cause gravitational clumping, increasing the strength of gravity would increase the clumping. Now, as we have seen, the dimensionless strength of gravity αG is arbitrarily defined. However, gravity is stronger when the masses involved are greater. So the parameter that would vary along with Q would be the nucleon mass. As for larger Q, it seems unlikely that inflation would ever result in large fluctuations, given the extensive smoothing that goes on during exponential expansion.’ (Foft 207)
There are a few problems here. We have a clear case of the flippant funambulist fallacy — the possibility of altering other constants to compensate the change in Q is not evidence against fine-tuning. Choose Q and, say, αG at random and you are unlikely to have picked a life-permitting pair, even if our universe is not the only life-permitting one. We also have a nice example of the cheap-binoculars fallacy. The allowed change in Q relative to its value in our universe (‘an order of magnitude’) is necessarily an underestimate of the degree of fine-tuning. The question is whether this range is small compared to the possible range of Q. Stenger seems to see this problem, and so argues that large values of Q are unlikely to result from inflation. This claim is false19. The upper blue region of Figure 4 shows the distribution of Q for the model of Tegmark (2005), using the ‘physically natural expectation’ m_v = m_h. The mean value of Q ranges from 10 to almost 10 000.
Note that Rees only varies Q in ‘Just Six Numbers’ because it is a popular level book. He and many others have extensively investigated the effect on structure formation of altering a number of cosmological parameters, including Q.
Tegmark & Rees (1998) were the first to calculate the range of Q which permits life, deriving upper and lower limits for the case where ρΛ = 0. Their limits are expressed in terms of the quantities defined in Table 1, together with the cosmic baryon density parameter Ωb, with geometric factors of order unity omitted. The resulting inequality demonstrates the variety of physical phenomena, atomic, gravitational and cosmological, that must combine in the right way in order to produce a life-permitting universe. Tegmark & Rees also note that there is some freedom to change Q and ρΛ together.
Tegmark et al. (2006) expanded on this work, looking more closely at the role of the cosmological constant. We have already seen some of the results from this paper in Section 4.2.1. The paper considers 8 anthropic constraints on the 7 dimensional parameter space (α, β, mp, ρΛ, Q, ξ, ξbaryon). Figure 2 (bottom row) shows that the life-permitting region is boxed-in on all sides. In particular, the freedom to increase Q and ρΛ together is limited by the life-permitting range of galaxy densities.
Bousso et al. (2009) consider the 4-dimensional parameter space (β, Q, Teq, ρΛ), where Teq is the temperature of the CMB at matter-radiation equality. They reach similar conclusions to Rees et al.; see also Garriga et al. (1999); Bousso & Leichenauer (2009, 2010).
Garriga & Vilenkin (2006) discuss what they call the ‘Q catastrophe’: the probability distribution for Q across a multiverse typically increases or decreases sharply through the anthropic window. Thus, we expect that the observed value of Q is very likely to be close to one of the boundaries of the life-permitting range. The fact that we appear to be in the middle of the range leads Garriga & Vilenkin to speculate that the life-permitting range may be narrower than Tegmark & Rees (1998) calculated. For example, there may be a tighter upper bound due to the perturbation of comets by nearby stars and/or the problem of nearby supernovae explosions.
The interested reader is referred to the 90 scientific papers which cite Tegmark & Rees (1998), catalogued on the NASA Astrophysics Data System20.
The fine-tuning of Q stands up well under examination.
4.6 Cosmological Constant Λ
The cosmological constant problem is described in the textbook of Burgess & Moore (2006) as ‘arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it’. A well-understood and well-tested theory of fundamental physics (Quantum Field Theory — QFT) predicts contributions to the vacuum energy of the universe that are ~10^120 times greater than the observed total value. Stenger’s reply is guided by the following principle:
‘Any calculation that disagrees with the data by 50 or 120 orders of magnitude is simply wrong and should not be taken seriously. We just have to await the correct calculation.’ (Foft 219)
This seems indistinguishable from reasoning that the calculation must be wrong since otherwise the cosmological constant would have to be fine-tuned. One could not hope for a more perfect example of begging the question. More importantly, there is a misunderstanding in Stenger’s account of the cosmological constant problem. The problem is not that physicists have made an incorrect prediction. We can use the term dark energy for any form of energy that causes the expansion of the universe to accelerate, including a ‘bare’ cosmological constant (see Barnes et al. 2005, for an introduction to dark energy). Cosmological observations constrain the total dark energy. QFT allows us to calculate a number of contributions to the total dark energy from matter fields in the universe. Each of these contributions turns out to be 10^120 times larger than the total. There is no direct theory-vs.-observation contradiction as one is calculating and measuring different things. The fine-tuning problem is that these different independent contributions, including perhaps some that we don’t know about, manage to cancel each other to such an alarming, life-permitting degree. This is not a straightforward case of Popperian falsification.
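The famous 120 orders of magnitude are easy to reproduce. The following rough sketch assumes the usual identifications: an observed dark energy density ~(2.3 meV)^4 and a Planck-scale cutoff on the vacuum-energy estimate; the loop factor is a convention we assume, and any reasonable choice lands within a few orders of 10^120:

    import math

    # Rough scales of the cosmological constant problem (natural units, GeV).
    m_pl    = 1.22e19            # Planck mass [GeV]
    rho_obs = (2.3e-12) ** 4     # observed dark energy density ~ (2.3 meV)^4 [GeV^4]

    # Naive vacuum-energy estimate: Planck-scale cutoff with a 1/(16 pi^2) loop factor.
    rho_vac = m_pl ** 4 / (16.0 * math.pi ** 2)

    print(f"rho_vac / rho_obs ~ 10^{math.log10(rho_vac / rho_obs):.0f}")   # ~10^121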
Stenger outlines a number of attempts to explain the fine-tuning of the cosmological constant.
Supersymmetry: Supersymmetry, if it holds in our universe, would cancel out some of the contributions to the vacuum energy, reducing the required fine-tuning to one part in ~10^50. Stenger admits the obvious — this isn’t an entirely satisfying solution — but there is a deeper reason to be sceptical of the idea that advances in particle physics could solve the cosmological constant problem. As Bousso (2008) explains:
...nongravitational physics depends only on energy differences, so the standard model cannot respond to the actual value of the cosmological constant it sources. This implies that ρΛ = 0 [i.e. zero cosmological constant] is not a special value from the particle physics point of view.
A particle physics solution to the cosmological constant problem would be just as significant a coincidence as the cosmological constant problem itself. Further, this is not a problem that appears only at the Planck scale. It is thus unlikely that quantum gravity will solve the problem. For example, Donoghue (2007) says
‘It is unlikely that there is a technically natural resolution to the cosmological constant’s fine-tuning problem — this would require new physics at 10–3 eV. [Such attempts are] highly contrived to have new dynamics at this extremely low scale which modifies only gravity and not the other interactions.’
Zero Cosmological Constant: Stenger tries to show that the cosmological constant of general relativity should be defined to be zero. He says:
‘Only in general relativity, where gravity depends on mass/energy, does an absolute value of mass/energy have any consequence. So general relativity (or a quantum theory of gravity) is the only place where we can set an absolute zero of mass/energy. It makes sense to define zero energy as the situation in which the source of gravity, the energy momentum tensor, and the cosmological constant are each zero.’
The second sentence contradicts the first. If gravity depends on the absolute value of mass/energy, then we cannot set the zero-level to our convenience. It is in particle physics, where gravity is ignorable, where we are free to define ‘zero’ energy as we like. In general relativity there is no freedom to redefine Λ. The cosmological constant has observable consequences that no amount of redefinition can disguise.
Stenger’s argument fails because of this premise: if (Tμν = 0 ⇒ Gμν = 0) then Λ = 0. This is true as a conditional, but Stenger has given no reason to believe the antecedent. Even if we associate the cosmological constant with the ‘source’ side of the equations, the antecedent is nothing more than an assertion that the vacuum (Tμν = 0) doesn’t gravitate.
Even if Stenger’s argument were successful, it still wouldn’t solve the problem. The cosmological constant problem is actually a misnomer. This section has discussed the ‘bare’ cosmological constant. It comes purely from general relativity, and is not associated with any particular form of energy. The 120 orders-of-magnitude problem refers to vacuum energy associated with the matter fields of the universe. These are contributions to Tμν. The source of the confusion is the fact that vacuum energy has the same dynamical effect as the cosmological constant, so that observations measure an ‘effective’ cosmological constant: Λeff = Λbare +Λvacuum. The cosmological constant problem is really the vacuum energy problem. Even if Stenger could show that Λbare = 0, this would do nothing to address why Λeff is observed to be so much smaller than the predicted contributions to Λvacuum.
Quintessence: Stenger recognises that, even if he could explain why the cosmological constant and vacuum energy are zero, he still needs to explain why the expansion of the universe is accelerating. One could appeal to an as-yet-unknown form of energy called quintessence, which has an equation of state w = p/ρ that causes the expansion of the universe to accelerate21 (w < –1/3). Stenger concludes that:
...a cosmological constant is not needed for early universe inflation nor for the current cosmic acceleration. Note this is not vacuum energy, which is assumed to be identically zero, so we have no cosmological constant problem and no need for fine-tuning.
In reply, it is logically possible that the cause of the universe’s acceleration is not vacuum energy but some other form of energy. However, to borrow the memorable phrasing of Bousso (2008), if it looks, walks, swims, flies and quacks like a duck, then the most reasonable conclusion is not that it is a unicorn in a duck outfit. Whatever is causing the accelerated expansion of the universe quacks like vacuum energy. Quintessence is a unicorn in a duck outfit. We are discounting a form of energy with a plausible, independent theoretical underpinning in favour of one that is pure speculation.
The present energy density of quintessence must fall in the same life-permitting range that was required of the cosmological constant. We know the possible range of ρΛ because we have a physical theory of vacuum energy. What is the possible range of ρQ? We don’t know, because we have no well-tested, well-understood theory of quintessence. This is hypothetical physics. In the absence of a physical theory of quintessence, and with the hint (as discussed above) that gravitational physics must be involved, the natural guess for the dark energy scale is the Planck scale. In that case, ρQ is once again 120 orders of magnitude larger than the life-permitting scale, and we have simply exchanged the fine-tuning of the cosmological constant for the fine-tuning of dark energy.
Stenger’s assertion that there is no fine-tuning problem for quintessence is false, as a number of authors have pointed out. For example, Peacock (2007) notes that most models of quintessence in the literature specify its properties via a potential V(φ), and comments that ‘Quintessence ...models do not solve the [cosmological constant] problem: the potentials asymptote to zero, even though there is no known symmetry that requires this’. Quintessence models must be fine-tuned in exactly the same way as the cosmological constant (see also Durrer & Maartens 2007).
Underestimating Λ: Stenger’s presentation of the cosmological constant problem fails to mention some of the reasons why this problem is so stubborn22. The first is that we know that the electron vacuum energy does gravitate in some situations. The vacuum polarisation contribution to the Lamb shift is known to give a nonzero contribution to the energy of the atom, and thus by the equivalence principle must couple to gravity. Similar effects are observed for nuclei. The puzzle is not just to understand why the zero point energy does not gravitate, but why it gravitates in some environments but not in vacuum. Arguing that the calculation of vacuum energy is wrong and can be ignored is naïve. There are certain contexts where we know that the calculation is correct.
Secondly, a dynamical selection mechanism for the cosmological constant is made difficult by the fact that only gravity can measure ρΛ, and ρΛ only becomes dynamically important quite recently in the history of the universe. Polchinski (2006) notes that many of the mechanisms aimed at selecting a small value for ρΛ — the Hartle-Hawking wavefunction, the de Sitter entropy and the Coleman-de Luccia amplitude for tunneling — can only explain why the cosmological constant vanishes in an empty universe.
Inflation creates another problem for would-be cosmological constant problem solvers. If the universe underwent a period of inflation in its earliest stages, then the laws of nature are more than capable of producing life-prohibiting accelerated expansion. The solution must therefore be rather selective, allowing acceleration in the early universe but severely limiting it later on. Further, the inflaton field is yet another contributor to the vacuum energy of the universe, and one with universe-accelerating pedigree. We can write a typical local minimum of the inflaton potential as: V(φ) = μ(φ – φ0)^2 + V0. Post inflation, our universe settles into the minimum at φ = φ0, and the V0 term contributes to the effective cosmological constant. We have seen this point previously: the five- and six-pointed stars in Figure 4 show universes in which the value of V0 is respectively too negative and too positive for the post-inflationary universe to support life. If the calculation is wrong, then inflation is not a well-characterised theory. If the field does not cause the expansion of the universe to accelerate, then it cannot power inflation. There is no known symmetry that would set V0 = 0, because we do not know what the inflaton is. Most proposed inflation mechanisms operate near the Planck scale, so this defines the possible range of V0. The 120 order-of-magnitude fine-tuning remains.
The Principle of Mediocrity: Stenger discusses the multiverse solution to the cosmological constant problem, which relies on the principle of mediocrity. We will give a more detailed appraisal of this approach in Section 5. Here we note what Stenger doesn’t: an appeal to the multiverse is motivated by and dependent on the fine-tuning of the cosmological constant. Those who defend the multiverse solution to the cosmological constant problem are quite clear that they do so because they have judged other solutions to have failed. Examples abound:
1. ‘There is not a single natural solution to the cosmological constant problem. ...[With the discovery that Λ > 0] The cosmological constant problem became suddenly harder, as one could no longer hope for a deep symmetry setting it to zero.’ (Arkani-Hamed, Dimopoulos & Kachru 2005)
2. ‘Throughout the years many people ...have tried to explain why the cosmological constant is small or zero. The overwhelming consensus is that these attempts have not been successful.’ (Susskind 2005, p. 357)
3. ‘No concrete, viable theory predicting ρΛ = 0 was known by 1998 [when the acceleration of the universe was discovered] and none has been found since.’ (Bousso 2008)
4. ‘There is no known symmetry that explains why the cosmological constant is either zero or of order the observed dark energy.’ (Hall & Nomura 2008)
5. ‘As of now, the only viable resolution of [the cosmological constant problem] is provided by the anthropic approach.’ (Vilenkin 2010)
See also Peacock (2007) and Linde & Vanchurin (2010), quoted above, and Susskind (2003).
Conclusion: There are a number of excellent reviews of the cosmological constant in the scientific literature (Weinberg 1989; Carroll 2001; Vilenkin 2003; Polchinski 2006; Durrer & Maartens 2007; Padmanabhan 2007; Bousso 2008). The calculations are known to be correct in other contexts and so are taken very seriously. Supersymmetry won’t help. The problem cannot be defined away. The most plausible small-vacuum-selecting mechanisms don’t work in a universe that contains matter. Particle physics is blind to the absolute value of the vacuum energy. The cosmological constant problem is not a problem only at the Planck scale and thus quantum gravity is unlikely to provide a solution. Quintessence and the inflaton field are just more fields whose vacuum state must be sternly commanded not to gravitate, or else mutually balanced to an alarming degree.
There is, of course, a solution to the cosmological constant problem. There is some reason — some physical reason — why the large contributions to the vacuum energy of the universe don’t make it life-prohibiting. We don’t currently know what that reason is, but scientific papers continue to be published that propose new solutions to the cosmological constant problem (e.g. Shaw & Barrow 2011). The point is this: however many ways there are of producing a life-permitting universe, there are vastly many more ways of making a life-prohibiting one. By the time we discover how our universe solves the cosmological constant problem, we will have compiled a rather long list of ways to blow a universe to smithereens, or quickly crush it into oblivion. Amidst the possible universes, life-permitting ones are exceedingly rare. This is fine-tuning par excellence.
4.7 Stars
Stars have two essential roles to play in the origin and evolution of intelligent life. They synthesise the elements needed by life — big bang nucleosynthesis provides only hydrogen, helium and lithium, which together can form just two chemical compounds (H2 and LiH). By comparison, Gingerich (2008) notes that carbon and hydrogen alone can be combined into around 2300 different chemical compounds. Stars also provide a long-lived, low-entropy source of energy for planetary life, as well as the gravity that holds planets in stable orbits. The low entropy of the energy supplied by stars is crucial if life is to ‘evade the decay to equilibrium’ (Schrödinger 1992).
4.7.1 Stellar Stability
Stars are defined by the forces that hold them in balance. The crushing force of gravity is held at bay by thermal and radiation pressure. The pressure is sourced by nuclear reactions at the centre of the star, which balance the energy lost to radiation. Stars thus require a balance between two very different forces — gravity and the strong force — with the electromagnetic force (in the form of electron scattering opacity) providing the link between the two.
There is a window of opportunity for stars — too small and they won’t be able to ignite and sustain nuclear fusion at their cores, being supported against gravity by degeneracy rather than thermal pressure; too large and radiation pressure will dominate over thermal pressure, allowing unstable pulsations. Barrow & Tipler (1986, p. 332) showed that this window is open when,
where the first expression uses the more exact calculation of the right-hand-side by Adams (2008), and the second expression uses Barrow & Tipler’s approximation for the minimum nuclear ignition temperature Tnuc ~ ηα^2mp, where η ≈ 0.025 for hydrogen burning. Outside this range, stars are not stable: anything big enough to burn is big enough to blow itself apart. Adams (2008) showed there is another criterion that must be fulfilled for stars to have a stable burning configuration,
where 𝒞 is a composite parameter related to nuclear reaction rates, and we have specialised equation 44 of Adams to the case where stellar opacity is due to Thomson scattering.
Adams combines these constraints in (G, α, 𝒞) parameter space, holding all other parameters constant, as shown in Figure 5. Below the solid line, stable stars are possible. The dashed (dotted) line shows the corresponding constraint for universes in which 𝒞 is increased (decreased) by a factor of 100. Adams remarks that ‘within the parameter space shown, which spans 10 orders of magnitude in both α and G, about one-fourth of the space supports the existence of stars’.
Figure 5 The parameter space (G, α), shown relative to their values in our universe (G0, α0). The triangle shows our universe. Below the solid line, stable stars are possible. The dashed (dotted) line shows the corresponding constraint for universes in which 𝒞 is increased (decreased) by a factor of 100. Note that the axes are logarithmic and span 10 orders of magnitude. Figure from Adams (2008), reproduced with permission of IOP Publishing Ltd.
Stenger (Foft 243) cites Adams’ result, but crucially omits the modifier shown. Adams makes no attempt to justify the limits of parameter space as he has shown them. Further, there is no justification of the use of logarithmic axes, which significantly affects the estimate of the probability23. The figure of ‘one-fourth’ is almost meaningless — given any life-permitting region, one can make it equal one-fourth of parameter space by chopping and changing said space. This is a perfect example of the cheap-binoculars fallacy. If one allows G to increase until gravity is as strong as the strong force (αG ≈ αs ≈ 1), and uses linear rather than logarithmic axes, the stable-star-permitting region occupies ~ 10–38 of parameter space. Even with logarithmic axes, fine-tuning cannot be avoided — zero is a possible value of G, and thus is part of parameter space. However, such a universe is not life-permitting, and so there is a minimum life-permitting value of G. A logarithmic axis, by placing G = 0 at negative infinity, puts an infinitely large region of parameter space outside of the life-permitting region. Stable stars would then require infinite fine-tuning. Note further that the fact that our universe (the triangle in Figure 5) isn’t particularly close to the life-permitting boundary is irrelevant to fine-tuning as we have defined it. We conclude that the existence of stable stars is indeed a fine-tuned property of our universe.
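The sensitivity to the choice of measure can be made explicit. In the following toy sketch, the life-permitting window, the cutoffs and the reduction to a single parameter are illustrative assumptions of ours, not Adams' numbers; the point is only that the same window looks either hopelessly fine-tuned or unremarkable depending on the measure:

    import math

    # Toy illustration of measure-dependence (illustrative ranges, not Adams' numbers):
    # life-permitting window G0/100 <= G <= 100*G0; G may range up to alpha_G ~ 1.
    G_lo, G_hi = 1e-2, 1e2        # window, in units of our universe's G
    G_max = 1e38                  # G at which gravity rivals the strong force
    G_min = 1e-38                 # a log axis needs a lower cutoff put in by hand

    f_linear = (G_hi - G_lo) / G_max
    f_log = (math.log10(G_hi) - math.log10(G_lo)) / (math.log10(G_max) - math.log10(G_min))

    print(f"linear measure: {f_linear:.0e}")   # ~1e-36: fine-tuned
    print(f"log measure:    {f_log:.2f}")      # ~0.05: a 'fair fraction' of the space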
4.7.2 The Hoyle Resonance
One of the most famous examples of fine-tuning is the Hoyle resonance in carbon. Hoyle reasoned that if such a resonance level did not exist at just the right place, then stars would be unable to produce the carbon required by life24.
Is the Hoyle resonance (called the 0+ level) fine-tuned? Stenger quotes the work of Livio et al. (1989), who considered the effect on the carbon and oxygen production of stars when the 0+ level is shifted. They found one could increase the energy of the level by 60 keV without affecting the level of carbon production. Is this a large change or a small one? Livio et al. (1989) ask just this question, noting the following. The permitted shift represents a 0.7% change in the energy of the level itself. It is 3% of the energy difference between the 0+ level and the next level up in the carbon nucleus (3–). It is 16% of the difference between the energy of the 0+ state and the energy of three alpha particles, which come together to form carbon.
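These three percentages can be checked against the standard tabulated energies of the relevant 12C levels (the values below are the usual tabulated ones, quoted here as assumptions rather than taken from Livio et al. directly):

    # The 60 keV shift expressed three ways, using standard 12C level energies (MeV).
    E_hoyle  = 7.644   # the 0+ (Hoyle) level above the 12C ground state
    E_3minus = 9.641   # the next level up, the 3- state
    E_3alpha = 7.275   # three free alpha particles, relative to the 12C ground state
    shift    = 0.060   # permitted upward shift found by Livio et al. (1989)

    print(f"vs the level itself:         {shift / E_hoyle:.1%}")               # ~0.8%
    print(f"vs the gap to the 3- level:  {shift / (E_3minus - E_hoyle):.1%}")  # ~3%
    print(f"vs the gap to three alphas:  {shift / (E_hoyle - E_3alpha):.1%}")  # ~16%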
Stenger argues that this final estimate is the most appropriate one, quoting from Weinberg (2007):
‘We know that even-even nuclei have states that are well described as composites of α particles. One such state is the ground state of Be8, which is unstable against fission into two α particles. The same αα potential that produces that sort of unstable state in Be8 could naturally be expected to produce an unstable state in C12 that is essentially a composite of three α particles, and that therefore appears as a low-energy resonance in α-Be8 reactions. So the existence of this state does not seem to me to provide any evidence of fine tuning.’
As Cohen (2008) notes, the 0+ state is known as a breathing mode; all nuclei have such a state.
However, we are not quite done with assessing this fine-tuning case. The existence of the 0+ level is not enough. It must have the right energy, and so we need to ask how the properties of the resonance level, and thus stellar nucleosynthesis, change as we alter the fundamental constants. Oberhummer, Csótó & Schlattl (2000a)25 have performed such calculations, combining the predictions of a microscopic 12-body, three-alpha cluster model of 12C (as alluded to by Weinberg) with a stellar nucleosynthesis code. They conclude that:
Even with a change of 0.4% in the strength of [nucleon-nucleon] force, carbon-based life appears to be impossible, since all the stars then would produce either almost solely carbon or oxygen, but could not produce both elements.
Schlattl et al. (2004), by the same group, noted an important caveat on their previous result. Modelling the later, post-hydrogen-burning stages of stellar evolution is difficult even for modern codes, and the inclusion of He-shell flashes seems to lessen the degree of fine-tuning of the Hoyle resonance.
Ekström et al. (2010) considered changes to the Hoyle resonance in the context of Population III stars. These first-generation stars play an important role in the production of the elements needed by life. Ekström et al. (2010) place similar limits to Oberhummer et al. (2000a) on the nucleon-nucleon force, and go further by translating these limits into limits on the fine-structure constant, α. A fractional change in α of one part in 105 would change the energy of the Hoyle resonance enough that stars would contain carbon or oxygen at the end of helium burning but not both.
There is again reason to be cautious, as stellar evolution has not been followed to the very end of the life of the star. Nevertheless, these calculations are highly suggestive — the main process by which carbon and oxygen are synthesised in our universe is drastically curtailed by a tiny change in the fundamental constants. Life would need to hope that sufficient carbon and oxygen are synthesized in other ways, such as supernovae. We conclude that Stenger has failed to turn back the force of this fine-tuning case. The ability of stars in our universe to produce both carbon and oxygen seems to be a rare talent.
4.8 Forces and Masses
In Chapters 7–10, Stenger turns his attention to the strength of the fundamental forces and the masses of the elementary particles. These quantities are among the most discussed in the fine-tuning literature, beginning with Carter (1974), Carr & Rees (1979) and Barrow & Tipler (1986). Figure 6 shows in white the life-permitting region of (α, β) (left) and (α, αs) (right) parameter space26. The axes are scaled like arctan (log10[x]), so that the interval [0, ∞] maps onto a finite range. The blue cross shows our universe. This figure is similar to those of Tegmark (1998). The various regions illustrated are as follows:
Figure 6 The life-permitting region (shown in white) in the (α, β) (left) and (α, αs) (right) parameter space, with other constants held at their values in our universe. Our universe is shown as a blue cross. These figures are similar to those of Tegmark (1998). The numbered regions and solid lines are explained in Section 4.8. The blue dot-dashed line is discussed in Section 4.8.2.
1. For hydrogen to exist — to power stars and form water and organic compounds — we must have me < mn – mp. Otherwise, the electron will be captured by the proton to form a neutron (Hogan 2006; Damour & Donoghue 2008).
2. For stable atoms, we need the radius of the electron orbit to be significantly larger than the nuclear radius, which requires αβ/αs ≪ 1 (Barrow & Tipler 1986, p. 320). The region shown is αβ/αs < 1/1000, which Stenger adopts (Foft 244).
3. We require that the typical energy of chemical reactions is much smaller than the typical energy of nuclear reactions. This ensures that the atomic constituents of chemical species maintain their identity in chemical reactions. This requires α^2β/αs^2 ≪ 1 (Barrow & Tipler 1986, p. 320). The region shown is α^2β/αs^2 < 1/1000.
4. Unless β^1/4 ≪ 1, ordered molecular structures (like chromosomes) are not stable. The atoms will too easily stray from their place in the lattice and the substance will spontaneously melt (Barrow & Tipler 1986, p. 305). The region shown is β^1/4 < 1/3.
5. The stability of the proton requires α ≲ (md – mu)/141 MeV, so that the extra electromagnetic mass-energy of a proton relative to a neutron is more than counter-balanced by the bare quark masses (Hogan 2000; Hall & Nomura 2008).
6. Unless α ≪ 1, the electrons in atoms and molecules are unstable to pair creation (Barrow & Tipler 1986, p. 297). The limit shown is α < 0.2. A similar constraint is calculated by Lieb & Yau (1988).
7. As in Equation 4, stars will not be stable unless β ≳ α^2/100.
8. Unless αs/αs,0 ≲ 1.003 + 0.031α/α0 (Davies 1972), the diproton has a bound state, which affects stellar burning and big bang nucleosynthesis. (Note, however, the caveats mentioned in Footnote 9.)
9. Unless αs ≳ 0.3α^1/2, carbon and all larger elements are unstable (Barrow & Tipler 1986, p. 326).
10. Unless αs/αs,0 ≳ 0.91 (Davies 1972), the deuteron is unstable and the main nuclear reaction in stars (pp) does not proceed. A similar effect would be achieved27 unless md – mu + me < 3.4 MeV, which makes the pp reaction energetically unfavourable (Hogan 2000). This region is numerically very similar to Region 1 in the left plot; the different scaling with the quark masses is illustrated in Figure 7.
Figure 7 Constraints from the stability of hydrogen and deuterium, in terms of the electron mass (me) and the down-up quark mass difference (md – mu). The condition labelled no nuclei was discussed in Section 4.8, point 10. The line labelled no atoms is the same condition as point 1, expressed in terms of the quark masses. The thin solid vertical line shows ‘a constraint from a particular SO(10) grand unified scenario’. Figure from Hogan (2007), reproduced with permission of Cambridge University Press.
1. The grey stripe on the left of each plot shows where α < αG, rendering electric forces weaker than gravitational ones.
2. To the left of our universe (the blue cross) is shown the limit of Adams (2008) on stellar stability, Equation 5. The limit shown is α > 7.3 × 10–5, as read off figure 5 of Adams (2008). The dependence on β and αs has not been calculated, and so only the limit for the case when these parameters take the value they have in our universe is shown28.
3. The upper limit shown in the right plot of Figure 6 is the result of MacDonald & Mullan (2009) that the amount of hydrogen left over from big bang nucleosynthesis is significantly diminished when αs > 0.27. Note that this is weaker than the condition that the diproton be bound. The dependence on α has not been calculated, so only a 1D limit is shown.
4. The dashed line in the left plot shows a striking coincidence discussed by Carter (1974), namely α^12β^4 ~ αG; a numerical check is given in the sketch following this list. Near this line, the universe will contain both radiative and convective stars. Carter conjectured that life may require both types for reasons pertaining to planet formation and supernovae. This reason is somewhat dubious, but a better case can be made. The same coincidence can be shown to ensure that the surface temperature of stars is close to ‘biological temperature’ (Barrow & Tipler 1986, p. 338). In other words, it ensures that the photons emitted by stars have the right energy to break chemical bonds. This permits photosynthesis, allowing electromagnetic energy to be converted into and stored as chemical energy in plants. However, it is not clear how close to the line a universe must be to be life-permitting, and the calculation considers only radiation dominated stars.
5. The left solid line shows the lower limit α > 1/180 for a grand-unified theory to unify no higher than the Planck scale. The right solid line shows the boundary of the condition that protons be stable on stellar timescales (β^2 > α(αG exp α^–1)^–1, Barrow & Tipler 1986, p. 358). These limits are based on Grand Unified Theories (GUTs) and are thus somewhat more speculative. We will say more about GUTs below.
6. The triple-alpha constraint is not shown. The constraint on carbon production from Ekström et al. (2010) is –3.5 × 10–5 ≲ Δα/α ≲ +1.8 × 10–5, as discussed in Section 4.7.2. Note also the caveats discussed there. This only considers the change in α i.e. horizontally, and the life-permitting region is likely to be a 2D strip in both the (α, β) and (α, αs) plane. As this strip passes our universe, its width in the x-direction is one-thousandth of the width of one of the vertical black lines.
7. The limits placed on α and β from chemistry are weaker than the constraints listed above. If we consider the nucleus as fixed in space, then the time-independent, non-relativistic Schrödinger equation scales with α^2me i.e. the relative energy and properties of the energy levels of electrons (which determine chemical bonding) are unchanged (Barrow & Tipler 1986, p. 533). The change in chemistry with fundamental parameters depends on the accuracy of the approximations of an infinite mass nucleus and non-relativistic electrons. This has been investigated by King et al. (2010) who considered the bond angle and length in water, and the reaction energy of a number of organic reactions. While ‘drastic changes in the properties of water’ occur for α ≳ 0.08 and β ≳ 0.054, it is difficult to predict what impact these changes would have on the origin and evolution of life.
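As promised in item 4 above, Carter's coincidence can be checked numerically with the measured values of the constants:

    # Numerical check of Carter's coincidence alpha^12 * beta^4 ~ alpha_G (item 4 above).
    alpha   = 1 / 137.036     # fine-structure constant
    beta    = 1 / 1836.15     # electron-to-proton mass ratio
    alpha_G = 5.9e-39         # gravitational coupling, G * m_p^2 / (hbar * c)

    lhs = alpha**12 * beta**4
    print(f"alpha^12 * beta^4 = {lhs:.1e}")            # ~2.0e-39
    print(f"alpha_G           = {alpha_G:.1e}")
    print(f"ratio             = {lhs / alpha_G:.2f}")  # order unity, as Carter noted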
Note that there are four more constraints on α, me and mp from the cosmological considerations of Tegmark et al. (2006), as discussed in Section 4.2. There are more cases of fine-tuning to be considered when we expand our view to consider all the parameters of the standard model of particle physics.
Agrawal et al. (1998a, b) considered the life-permitting range of the Higgs mass parameter μ^2, and the corresponding limits on the vacuum expectation value, v = (–μ^2/λ)^1/2, which takes the value 246 GeV = 2 × 10–17 mPl in our universe. After exploring the range [–mPl, mPl], they find that ‘only for values in a narrow window is life likely to be possible’. In Planck units, the relevant limits are: for v > 4 × 10–17, the deuteron is strongly unstable (see point 10 above); for v > 10–16, the neutron is heavier than the proton by more than the nucleon’s binding energy, so that even bound neutrons decay into protons and no nuclei larger than hydrogen are stable; for v > 2 × 10–14, only the Δ++ particle is stable and the only stable nucleus has the chemistry of helium; for v ≲ 2 × 10–19, stars will form very slowly (~10^17 yr) and burn out very quickly (~1 yr), and the large number of stable nucleon species may make nuclear reactions so easy that the universe contains no light nuclei. Damour & Donoghue (2008) refined the limits of Agrawal et al. by considering nuclear binding, concluding that unless 0.78 × 10–17 < v < 3.3 × 10–17, hydrogen is unstable to the reaction p + e → n + ν (if v is too small) or else there is no nuclear binding at all (if v is too large).
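The nesting of these windows can be summarised in a small sketch. The branch boundaries are the limits of Agrawal et al. and Damour & Donoghue quoted above; the one-line descriptions are our paraphrases, and the ordering of the intermediate pathologies is a simplification:

    # Toy classifier for the Higgs vev v (Planck units), encoding the limits above.
    def vev_verdict(v):
        if v < 2e-19:
            return "stars form in ~1e17 yr and burn out in ~1 yr"
        if v < 0.78e-17:
            return "hydrogen unstable: p + e -> n + nu"
        if v <= 3.3e-17:
            return "life-permitting window (our universe: v ~ 2e-17)"
        if v <= 1e-16:
            return "deuteron strongly unstable; nuclear binding failing"
        if v <= 2e-14:
            return "bound neutrons decay; hydrogen is the only stable nucleus"
        return "only the Delta++ is stable; chemistry reduced to a helium analogue"

    for v in (1e-19, 5e-18, 2e-17, 5e-17, 1e-15, 1e-10):
        print(f"v = {v:.2e} m_Pl: {vev_verdict(v)}")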
Jeltema & Sher (1999) combined the conclusions of Agrawal et al. and Oberhummer et al. (2000a) to place a constraint on the Higgs vev from the fine-tuning of the Hoyle resonance (Section 4.7.2). They conclude that a 1% change in v from its value in our universe would significantly affect the ability of stars to synthesise both oxygen and carbon. Hogan (2006) reached a similar conclusion: ‘In the absence of an identified compensating factor, increases in [v/ΛQCD] of more than a few percent lead to major changes in the overall cosmic carbon creation and distribution’. Remember, however, the caveats of Section 4.7.2: it is difficult to predict exactly when a major change becomes a life-prohibiting change.
There has been considerable attention given to the fine-tuning of the masses of fundamental particles, in particular mu, md and me. We have already seen the calculation of Barr & Khan (2007) in Figure 2, which shows the life-permitting region of the mu–md plane. Hogan (2000) was one of the first to consider the fine-tuning of the quark masses (see also Hogan 2006). Such results have been confirmed and extended by Damour & Donoghue (2008), Hall & Nomura (2008) and Bousso et al. (2009).
Jaffe et al. (2009) examined a different slice through parameter space, varying the masses of the quarks while ‘holding as much as possible of the rest of the Standard Model phenomenology constant’ [emphasis original]. In particular, they fix the electron mass, and vary ΛQCD so that the average mass of the lightest baryon(s) is 940 MeV, as in our universe. These restrictions are chosen to make the characterisation of these other universes more certain. Only nuclear stability is considered, so that a universe is deemed congenial if both carbon and hydrogen are stable. The resulting congenial range is shown in Figure 8. The height of each triangle is proportional to the total mass of the three lightest quarks: mT = mu + md + ms; the centre triangle has mT as in our universe. The perpendicular distance from each side represents the mass of the u, d and s quarks. The lower green region shows universes like ours with two light quarks (mu, md ≪ ms), and is bounded above by the stability of some isotope of hydrogen (in this case, tritium) and below by the corresponding limit for carbon (in this case 10C): –21.80 MeV < mp – mn < 7.97 MeV. The smaller green strip shows a novel congenial region, where there is one light quark (md ≪ ms ≈ mu). This congeniality band has half the width of the band in which our universe is located. The red regions are uncongenial, while white regions show where it is uncertain where the red-green boundary should lie. Note two things about the larger triangle on the right. Firstly, the smaller congenial band detaches from the edge of the triangle for mT ≳ 1.22mT,0, as the lightest baryon is the Δ++, which would be incapable of forming nuclei. Secondly, and most importantly for our purposes, the absolute width of the green regions remains the same, and thus the congenial fraction of the space decreases approximately as 1/mT. Moving from the centre (mT = mT,0) to the right (mT = 2mT,0) triangle of Figure 8, the congenial fraction drops from 14% to 7%. Finally, ‘congenial’ is almost certainly a weaker constraint than ‘life-permitting’, since only nuclear stability is investigated. For example, a universe with only tritium will have an element which is chemically very similar to hydrogen, but stars will not have 1H as fuel and will therefore burn out significantly faster.
Figure 8 The results of Jaffe et al. (2009), showing in green the region of (mu, md, ms) parameter space that is ‘congenial’, meaning that at least one isotope of hydrogen and carbon is stable. The height of each triangle is proportional to mT = mu + md + ms, with the centre triangle having mT as in our universe. The perpendicular distance from each side represents the mass of the u, d and s quarks. See the text for details of the instabilities in the red ‘uncongenial’ regions. Reprinted figure with permission from Jaffe et al. (2009). Copyright (2009) by the American Physical Society.
Tegmark, Vilenkin & Pogosian (2005) studied anthropic constraints on the total mass of the three neutrino species. If ∑mν ≳ 1 eV then galaxy formation is significantly suppressed by free streaming. If ∑mν is large enough that neutrinos are effectively another type of cold dark matter, then the baryon fraction in haloes would be very low, affecting baryonic disk and star formation. If all neutrinos are heavy, then neutrons would be stable and big bang nucleosynthesis would leave no hydrogen for stars and organic compounds. This study only varies one parameter, but its conclusions are found to be ‘rather robust’ when ρΛ is also allowed to vary (Pogosian & Vilenkin 2007).
There are a number of tentative anthropic limits relating to baryogenesis. Baryogenesis is clearly crucial to life — a universe which contained equal numbers of protons and antiprotons at annihilation would only contain radiation, which cannot form complex structures. However, we do not currently have a well-understood and well-tested theory of baryogenesis, so caution is advised. Gould (2010) has argued that three or more generations of quarks and leptons are required for CP violation, which is one of the necessary conditions for baryogenesis (Sakharov 1967; Cahn 1996; Schellekens 2008). Hall & Nomura (2008) state that v/ΛQCD ~ 1 is required ‘so that the baryon asymmetry of the early universe is not washed out by sphaleron effects’ (see also Arkani-Hamed et al. 2005).
Harnik, Kribs & Perez (2006) attempted to find a region of parameter space which is life-permitting in the absence of the weak force. With some ingenuity, they plausibly discovered one, subject to the following conditions. To prevent big bang nucleosynthesis burning all hydrogen to helium in the early universe, they must use a ‘judicious parameter adjustment’ and set the baryon-to-photon ratio ηb = 4 × 10–12. The result is a substantially increased abundance of deuterium, ~10% by mass. ΛQCD and the masses of the light quarks and leptons are held constant, which means that the nucleon masses and thus nuclear physics is relatively unaffected (except, of course, for beta decay) so long as we ‘insist that the weakless universe is devoid of heavy quarks’ to avoid problems relating to the existence of stable baryons29 Λc+, Λb0 and Λt+. Since v ~ mPl in the weakless universe, holding the light fermion masses constant requires that the Yukawa parameters (Γe, Γu, Γd, Γs) all be set by hand to be less than 10–20 (Feldstein et al. 2006). The weakless universe requires Ωbaryon/Ωdark matter ~ 10–3, 100 times less than in our universe. This is very close to the limit of Tegmark et al. (2006), who calculated that unless Ωbaryon/Ωdark matter ≳ 5 × 10–3, gas will not cool into galaxies to form stars. Galaxy formation in the weakless universe will thus be considerably less efficient, relying on rare statistical fluctuations and cooling via molecular viscosity. The proton-proton reaction which powers stars in our universe relies on the weak interaction, so stars in the weakless universe burn via proton-deuterium reactions, using deuterium left over from the big bang. Stars will burn at a lower temperature, and probably with shorter lifetimes. Stars will still be able to undergo accretion supernovae (Type Ia), but the absence of core-collapse supernovae will seriously affect the oxygen available for planet formation and life (Clavelli & White 2006). Only ~1% of the oxygen in our universe comes from accretion supernovae. It is then somewhat optimistic to claim that (Gedalia, Jenkins & Perez 2011),
where {αus} ({αweakless}) represents the set of parameters of our (the weakless) universe. Note that, even if Equation 6 holds, the weakless universe at best opens up a life-permitting region of parameter space of similar size to the region in which our universe resides. The need for a life-permitting universe to be fine-tuned is not significantly affected.
4.8.1 The Origin of Mass
Let’s consider Stenger’s responses to these cases of fine-tuning.
Higgs and Hierarchy:
‘Electrons, muons, and tauons all pick up mass by the Higgs mechanism. Quarks must pick up some of their masses this way, but they obtain most of their masses by way of the strong interaction ...All these masses are orders of magnitude less than the Planck mass, and no fine-tuning was necessary to make gravity much weaker than electromagnetism. This happened naturally and would have occurred for a wide range of mass values, which, after all, are just small corrections to their intrinsically zero masses. ...In any case, these small mass corrections do not call for any fine-tuning or indicate that our universe is in any way special. ...[mpme/mPl^2] is so small because the masses of the electron and the protons are so small compared to the Planck mass, which is the only ‘natural’ mass you can form from the simplest combination of fundamental constants.’ (Foft 154,156,175)
Stenger takes no cognizance of the hierarchy and flavour problems, widely believed to be amongst the most important problems of particle physics:
Lisa Randall: ‘The universe seems to have two entirely different mass scales, and we don’t understand why they are so different. There’s what’s called the Planck scale, which is associated with gravitational interactions. It’s a huge mass scale ...10^19 GeV. Then there’s the electroweak scale, which sets the masses for the W and Z bosons. [~100 GeV] ...So the hierarchy problem, in its simplest manifestation, is how can you have these particles be so light when the other scale is so big.’ (Taubes 2002)
Frank Wilczek: ‘We have no ...compelling idea about the origin of the enormous number [mPl/me] = 2.4 × 10^22. If you would like to humble someone who talks glibly about the Theory of Everything, just ask about it, and watch ‘em squirm.’ (Wilczek 2005)
Leonard Susskind: ‘The up- and down-quarks are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson ...needs an explanation. The Standard Model has not provided one. Thus, we can ask what the world would be like if the up- and down-quarks were much heavier than they are. Once again — disaster!’ (Susskind 2005, p. 176)
The problem is as follows. The mass of a fundamental particle in the standard model is set by two factors: mi = Γi v/√2, where i labels the particle species, Γi is called the Yukawa parameter (e.g. electron: Γe ≈ 2.9 × 10–6, up quark: Γu ≈ 1.4 × 10–5, down quark: Γd ≈ 2.8 × 10–5), and v is the Higgs vacuum expectation value, which is the same for all particles (see Burgess & Moore 2006, for an introduction). Note that, contra Stenger, the bare masses of the quarks are not related to the strong force30.
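As a check of this formula, we can recover the familiar light masses from the quoted Yukawa parameters (the √2 convention and v = 246 GeV are assumed):

    import math

    # Check m_i = Gamma_i * v / sqrt(2) against the Yukawa parameters quoted above.
    v = 246.0   # Higgs vacuum expectation value [GeV]
    yukawas = {"electron": 2.9e-6, "up quark": 1.4e-5, "down quark": 2.8e-5}

    for name, gamma in yukawas.items():
        m_MeV = gamma * v / math.sqrt(2) * 1e3
        print(f"{name}: m ~ {m_MeV:.2f} MeV")
    # ~0.50, 2.44 and 4.87 MeV: the light masses arise only because
    # Gamma_i ~ 1e-6 to 1e-5, far below the 'natural' O(1) expectation.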
There are, then, two independent ways in which the masses of the basic constituents of matter are surprisingly small: v = 2 × 10–17 mPl, which ‘is so notorious that it’s acquired a special name — the Hierarchy Problem — and spawned a vast, inconclusive literature’ (Wilczek 2006a), and Γi ~ 10–6, which implies that, for example, the electron mass is unnaturally smaller than its (unnaturally small) natural scale set by the Higgs condensate (Wilczek 2007, p. 53). This is known as the flavour problem.
Let’s take a closer look at the hierarchy problem. The problem (as ably explained by Martin 1998) is that the Higgs mass (squared) mH^2 receives quantum corrections from the virtual effects of every particle that couples, directly or indirectly, to the Higgs field. These corrections are enormous — their natural scale is the Planck scale, so that these contributions must be fine-tuned to mutually cancel to one part in mPl^2/mH^2 ≈ 10^32. Stenger’s reply is to say that:
‘...the masses of elementary particles are small compared to the Planck mass. No fine-tuning is required. Small masses are a natural consequence of the origin of mass. The masses of elementary particles are essentially small corrections to their intrinsically zero masses.’ (Foft 187)
Here we see the problem itself presented as its solution. It is precisely the smallness of the quantum corrections wherein the fine-tuning lies. If the Planck mass is the ‘natural’ (Foft 175) mass scale in physics, then it sets the scale for all mass terms, corrections or otherwise. Just calling them ‘small’ doesn’t explain anything.
Attempts to solve the hierarchy problem have driven the search for theories beyond the standard model: technicolor, the supersymmetric standard model, large extra dimensions, warped compactifications, little Higgs theories and more — even anthropic solutions (Arkani-Hamed & Dimopoulos 2005; Arkani-Hamed et al. 2005; Feldstein et al. 2006; Hall & Nomura 2008, 2010; Donoghue et al. 2010). Perhaps the most popular option is supersymmetry, whereby the Higgs mass scale doesn’t receive corrections from mass scales above the supersymmetry-breaking scale ΛSM due to equal and opposite contributions from supersymmetric partners. This ties v to ΛSM. The question now is: why is ΛSM ≪ mPl? This is known in the literature as ‘the μ-problem’, in reference to the parameter in the supersymmetric potential that sets the relevant mass scale. The value of μ in our universe is probably ~10^2–10^3 GeV. The natural scale for μ is mPl, and thus we still do not have an explanation for why the quark and lepton masses are so small. Low-energy supersymmetry does not by itself explain the magnitude of the weak scale, though it protects it from radiative correction (Barr & Khan 2007). Solutions to the μ-problem can be found in the literature (see Martin 1998, for a discussion and references).
We can draw some conclusions. First, Stenger’s discussion of the surprising lightness of fundamental masses is woefully inadequate. To present it as a solved problem of particle physics is a gross misrepresentation of the literature. Secondly, smallness is not sufficient for life. Recall that Damour & Donoghue (2008) showed that unless 0.78 × 10–17 < v/mPl < 3.3 × 10–17, the elements are unstable. The masses must be sufficiently small but not too small. Finally, suppose that the LHC discovers that supersymmetry is a (broken) symmetry of our universe. This would not be the discovery that the universe could not have been different. It would not be the discovery that the masses of the fundamental particles must be small. It would at most show that our universe has chosen a particularly elegant and beautiful way to be life-permitting.
QCD and Mass-Without-Mass: The bare quark masses, discussed above, only account for a small fraction of the mass of the proton and neutron. Most of the remaining ~95% comes from the strong force binding energy of the valence quarks. This contribution can be written as aΛQCD, where a ≈ 4 is a dimensionless constant determined by quantum chromodynamics (QCD). In Planck units, ΛQCD ≈ 10–20 mPl. The question ‘why is gravity so feeble?’ (i.e. αG ≪ 1) is at least partly answered if we can explain why ΛQCD ≪ mPl. Unlike the bare masses of the quarks and leptons, we can answer this question from within the standard model.
The strength of the strong force αs is a function of the energy of the interaction. ΛQCD is the mass-energy scale at which αs diverges. Given that the strength of the strong force runs very slowly (logarithmically) with energy, there is an exponential relationship between ΛQCD and the scale of grand unification mU:

ΛQCD ~ mU exp(–b/αs(mU)),
where b is a constant of order unity. Thus, if the QCD coupling is even moderately small at the unification scale, the QCD scale will be a long way away. To make this work in our universe, we need αs(mU) ≈ 1/25 and mU ≈ 10^16 GeV (De Boer & Sander 2004). The calculation also depends on the spectrum of quark flavours; see Hogan (2000), Wilczek (2002) and Schellekens (2008, Appendix C).
As an explanation for the value of the proton and neutron mass in our universe, we aren’t done yet. We don’t know how to calculate αs(mU), and there is still the puzzle of why the unification scale is three orders of magnitude below the Planck scale. From a fine-tuning perspective, however, this seems to be good progress, replacing the major miracle ΛQCD/mPl ~ 10–20 with a more minor one, αs(mU) ~ 10–1. Such explanations have been discussed in the fine-tuning literature for many years (Carr & Rees 1979; Hogan 2000).
Note that this does not completely explain the smallness of the proton mass, since mp is the sum of a number of contributions: QCD (ΛQCD), electromagnetism, the masses of the valence quarks (mu and md), and the mass of the virtual quarks, including the strange quark, which makes a surprisingly large contribution to the mass of ordinary matter. We need all of the contributions to be small in order for mp to be small.
Potential problems arise when we need the proton mass to fall within a specific range, rather than just be small, since the proton mass depends very sensitively (exponentially) on αU. For example, consider Region 4 in Figure 6, β^1/4 ≪ 1. The constraint shown, β^1/4 < 1/3, would require a 20-fold decrease in the proton mass to be violated, which (using Equation 7) translates to decreasing αU by ~0.003. Similarly, Region 7 will be entered if αU is increased31 by ~0.008. We will have more to say about grand unification and fine-tuning below. For the moment, we note that the fine-tuning of the mass of the proton can be translated into anthropic limits on GUT parameters.
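The exponential sensitivity can be verified directly from the relation ΛQCD ~ mU exp(–b/αU) quoted above. In the sketch below, b is fixed, as an assumption, by requiring our universe's ΛQCD:

    import math

    # Exponential sensitivity of Lambda_QCD ~ m_U * exp(-b / alpha_U).
    m_U, alpha_U, L_qcd = 1e16, 1 / 25, 0.2    # GUT scale [GeV], coupling, Lambda_QCD [GeV]

    b = alpha_U * math.log(m_U / L_qcd)
    print(f"b ~ {b:.2f}")                      # ~1.5: of order unity, as stated

    # Change in alpha_U for a 20-fold drop in Lambda_QCD (and hence ~ the proton mass):
    # d(ln Lambda_QCD) = (b / alpha_U^2) * d(alpha_U)
    d_alpha = alpha_U**2 * math.log(20) / b
    print(f"delta alpha_U ~ {d_alpha:.4f}")    # ~0.003, as quoted above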
Protons, Neutrons, Electrons: We turn now to the relative masses of the three most important particles in our universe: the proton, neutron and electron, from which atoms are made. Consider first the ratio of the electron to the proton mass, β, of which Stenger says:
‘...we can argue that the electron mass is going to be much smaller than the proton mass in any universe even remotely like ours. ...The electron gets its mass by interacting electroweakly with the Higgs boson. The proton, a composite particle, gets most of its mass from the kinetic energies of gluons swirling around inside. They interact with one another by way of the strong interaction, leading to relatively high kinetic energies. Unsurprisingly, the proton’s mass is much higher than the electron’s and is likely to be so over a large region of parameter space. ...The electron mass is much smaller than the proton mass because it gets its mass solely from the electroweak Higgs mechanism, so being less than 1.29 MeV is not surprising and also shows no sign of fine-tuning.’ (Foft 164,178)
Remember that fine-tuning compares the life-permitting range of a parameter with the possible range. Foft has compared the electron mass in our universe with the electron mass in universes ‘like ours’, thus missing the point entirely.
In terms of the parameters of the standard model, β ≡ me/mp ≈ Γev/(aΛQCD). The smallness of β is thus quite surprising, since the ratio of the natural mass scale of the electron and the proton is v/ΛQCD ≈ 10^3. The smallness of β stems from the fact that the dimensionless constant for the proton is of order unity (a ≈ 4), while the Yukawa constant for the electron is unnaturally small Γe ≈ 10–6. Stenger’s assertion that the Higgs mechanism (with mass scale 246 GeV) accounts for the smallness of the electron mass (0.000511 GeV) is false.
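A rough consistency check of this expression (again assuming the √2 Yukawa convention, which the order-of-magnitude formula above suppresses):

    import math

    # Order-of-magnitude check of beta ~ Gamma_e * v / (sqrt(2) * a * Lambda_QCD).
    Gamma_e, v, a, L_qcd = 2.9e-6, 246.0, 4.0, 0.2   # [-, GeV, -, GeV]

    beta = Gamma_e * v / (math.sqrt(2) * a * L_qcd)
    print(f"beta ~ {beta:.1e} (measured: {1/1836.15:.1e})")   # ~6e-4 vs 5.4e-4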
The other surprising aspect of the smallness of β is the remarkable proximity of the QCD and electroweak scales (Arkani-Hamed & Dimopoulos 2005); in Planck units, v ≈ 2 × 10–17mPl and ΛQCD ≈ 2 × 10–20mPl. Given that β is constrained from both above and below anthropically (Figure 6), this coincidence is required for life.
Let’s look at the proton-neutron mass difference.
‘...this apparently fortuitous arrangement of masses has a plausible explanation within the framework of the standard model. ...the proton and neutron get most of their masses from the strong interaction, which makes no distinction between protons and neutrons. If that were all there was to it, their masses would be equal. However, the masses and charges of the two are not equal, which implies that the mass difference is electroweak in origin. ...Again, if quark masses were solely a consequence of the strong interaction, these would be equal. Indeed, the lattice QCD calculations discussed in chapter 7 give the u and d quarks masses of 3.3 ± 0.4 MeV. On the other hand, the masses of the two quarks are estimated to be in the range 1.5 to 3 MeV for the u quark and 2.5 to 5.5 MeV for the d quark. This gives a mass difference range md – mu from 1 to 4 MeV. The neutron-proton mass difference is 1.29 MeV, well within that range. We conclude that the mass difference between the neutron and proton results from the mass difference between the d and u quarks, which, in turn, must result from their electroweak interaction with the Higgs field. No fine-tuning is once again evident.’ (Foft 178)
Let’s first deal with the Lattice QCD (LQCD) calculations. LQCD is a method of reformulating the equations of QCD in a way that allows them to be solved on a supercomputer. LQCD does not calculate the quark masses from the fundamental parameters of the standard model — they are fundamental parameters of the standard model. Rather, ‘[t]he experimental values of the π, ρ and K or φ masses are employed to fix the physical scale and the light quark masses’ (Iwasaki 2000). Every LQCD calculation takes great care to explain that they are inferring the quark masses from the masses of observed hadrons (see, for example, Davies et al. 2004; Dürr et al. 2008; Laiho 2011).
This is important because fine-tuning involves a comparison between the life-permitting range of the fundamental parameters with their possible range. LQCD doesn’t address either. It demonstrates that (with no small amount of cleverness) one can measure the quark masses in our universe. It does not show that the quark masses could not have been otherwise. When Stenger compares two different values for the quark masses (3.3 MeV and 1.5–3 MeV), he is not comparing a theoretical calculation with an experimental measurement. He is comparing two measurements. Stenger has demonstrated that the u and d quark masses in our universe are equal (within experimental error) to the u and d quark masses in our universe.
Stenger states that mn – mp results from md – mu. This is false, as there is also a contribution from the electromagnetic force (Gasser & Leutwyler 1982; Hall & Nomura 2008). This would tend to make the (charged) proton heavier than the (neutral) neutron, and hence we need the mass difference of the light quarks to be large enough to overcome this contribution. As discussed in Section 4.8 (item 5), this requires α ≲ (md – mu)/141 MeV. The lightness of the up-quark is especially surprising, since the up-quark’s older brothers (charm and top) are significantly heavier than their partners (strange and bottom).
Finally, and most importantly, note carefully Stenger’s conclusion. He states that no fine-tuning is needed for the neutron-proton mass difference in our universe to be approximately equal to the up quark-down quark mass difference in our universe. Stenger has compared our universe with our universe and found no evidence of fine-tuning. There is no discussion of the life-permitting range, no discussion of the possible range of mn – mp (or its relation to the possible range of md – mu), and thus no relevance to fine-tuning whatsoever.
4.8.2 The Strength of the Fundamental Forces
Until now, we have treated the strength of the fundamental forces, quantified by the coupling constants α1, α2 and α3 (collectively αi), as constants. In fact, these parameters are a function of energy due to screening (or antiscreening) by virtual particles. For example, the ‘running’ of α1 with mass-energy (M) is governed (to first order) by the following equation (De Boer 1994; Hogan 2000):

dα1/d ln M = (2/3π) α1^2 ∑ Qi^2,
where the sum is over the charges Qi of all fermions of mass less than M. If we include all (and only) the particles of the standard model, then the solution is

1/α1(M) = 1/α1(M0) – (2/3π) ∑ Qi^2 ln(M/M0).
The integration constant, α1(M0), is set at a given energy scale M0. A similar set of equations holds for the other constants. Stenger asks,
‘What is the significance of this result for the fine-tuning question? All the claims of the fine-tuning of the forces of nature have referred to the values of the force strengths in our current universe. They are assumed to be constants, but, according to established theory (even without supersymmetry), they vary with energy.’ (Foft 189)
The second sentence is false by definition — a fine-tuning claim necessarily considers different values of the physical parameters of our universe. Note that Stenger doesn’t explicitly answer the question he has posed. If the implication is that those who have performed theoretical calculations to determine whether universes with different physics would support life have failed to take into account the running of the coupling constants, then he should provide references. I know of no scientific paper on fine-tuning that has used the wrong value of αi for this reason. For example, for almost all constraints involving the fine-structure constant, the relevant value is the low-energy limit, i.e. the fine-structure constant α ≈ 1/137. The fact that α is different at higher energies is not relevant.
Alternatively, if the implication is that the running of the constants means that one cannot meaningfully consider changes in the αi, then this too is false. As can be seen from Equation 9, the running of the coupling does not fix the integration constants. If we choose to fix them at low energies, then changing the fine-structure constant is effected by our choice of α1(M0) and α2(M0). The running of the coupling constants does not change the status of the αi as free parameters of the theory.
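A minimal numerical sketch makes the point (assumptions: the one-loop running of Equation 9, a constant ∑Q_i^2 = 8 counting the standard-model fermions with no mass thresholds, and purely illustrative scales):

```python
import numpy as np

def alpha1(M, alpha_M0, M0, sum_Q2=8.0):
    """One-loop running (Equation 9):
    1/alpha_1(M) = 1/alpha_1(M0) - (2/3pi) * sum(Q_i^2) * ln(M/M0).
    sum_Q2 = 8 counts all standard-model charged fermions (8/3 per
    generation); a real calculation would update it at each threshold."""
    return 1.0 / (1.0 / alpha_M0 - (2.0 / (3.0 * np.pi)) * sum_Q2 * np.log(M / M0))

M0 = 0.000511  # GeV; an arbitrary low-energy reference scale
for a0 in (1 / 137.0, 1 / 120.0):  # two choices of the integration constant
    print([round(1.0 / alpha1(M, a0, M0), 1) for M in (1.0, 100.0, 1e16)])
# Both runs obey the same equation; nothing in the RGE itself prefers one
# integration constant (and hence one low-energy coupling) over the other.
```

Both choices of α1(M0) generate perfectly good solutions of the same running equation; the equation tells us how the coupling changes with energy, not what its value is.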
The running of the coupling constants is only relevant if unification at high energy fixes the integration constants, changing their status from fundamental to derived. We thus turn to Grand Unification Theories (GUTs), of which Stenger remarks:
‘[We can] view the universe as starting out in a highly symmetric state with a single, unified force [with] strength αU = 1/25. At 10^−37 second, when the temperature of the universe dropped below 3 × 10^16 GeV, symmetry breaking separated the unified force into electroweak and strong components ...The electroweak force became weaker than the unified force, while the strong force became stronger. ...In short, the parameters will differ from one another at low energies, but not by orders of magnitude. ...the relation between the force strengths is natural and predicted by the highly successful standard model, supplemented by the yet unproved but highly promising extension that includes supersymmetry. If this turns out to be correct, and we should know in a few years, then it will have been demonstrated that the strengths of the strong, electromagnetic, and weak interactions are fixed by a single parameter, αU, plus whatever parameters are remaining in the new model that will take the place of the standard model.’ (Foft 190)
At the risk of repetition: to show (or conjecture) that a parameter is derived rather than fundamental does not mean that it is not fine-tuned. As Stenger has presented it, grand unification is a cane toad solution, as no attempt is made to assess whether the GUT parameters are fine-tuned. All that we should conclude from Stenger’s discussion is that the parameters (α1, α2, α3) can be calculated given αU and MU. The calculation also requires that the masses, charges and quantum numbers of all fundamental particles be given to allow terms like ∑Q_i^2 to be computed.
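As a sketch of that calculability (assumptions: one-loop running, the supersymmetric particle content Stenger invokes with MSSM coefficients b = (33/5, 1, −3), αU = 1/25 from the quote above, and an assumed unification scale MU = 2 × 10^16 GeV):

```python
import numpy as np

# One-loop beta coefficients of the MSSM (GUT-normalised hypercharge),
# the supersymmetric running Stenger appeals to: b = (33/5, 1, -3).
b = {"alpha_1": 33.0 / 5.0, "alpha_2": 1.0, "alpha_3": -3.0}

alpha_U = 1.0 / 25.0   # unified coupling (Foft 190)
M_U = 2.0e16           # GeV; assumed unification scale

def alpha_at(M, b_i):
    """1/alpha_i(M) = 1/alpha_U - (b_i / 2pi) * ln(M / M_U)."""
    return 1.0 / (1.0 / alpha_U - (b_i / (2.0 * np.pi)) * np.log(M / M_U))

for name, b_i in b.items():
    print(f"{name}(M_Z) ~ 1/{1.0 / alpha_at(91.19, b_i):.0f}")
# Prints roughly 1/60, 1/30 and 1/9: the pattern alpha_3 >> alpha_2 > alpha_1
# follows from the single input alpha_U because SU(3), with its many gluons,
# antiscreens (b_3 < 0) most strongly.
```

The point is not that these outputs are impressive (they land near the measured couplings, which is the usual argument for supersymmetric unification) but that the three αi follow from the two GUT inputs plus the assumed particle content.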
What is the life-permitting range of αU and MU? Given that the evidence for GUTs is still circumstantial, not much work has been done towards answering this question. The pattern α3 ≫ α2 > α1 seems to be generic, since ‘the antiscreening or asymptotic freedom effect is more pronounced for larger gauge groups, which have more types of virtual gluons’ (Wilczek 1997). As can be seen from Figure 6, this is a good start but hardly guarantees a life-permitting universe. The strength of the strong force at low energy increases with MU, so the smallness of MU/mPl may be ‘explained’ by the anthropic limits on αs. If we suppose that α and αs are related linearly to αU, then the GUT would constrain the point (α, αs) to lie on the blue dot-dashed line in Figure 6. This replaces the fine-tuning of the white area with the fine-tuning of the line-segment, plus the constraints placed on the other GUT parameters to ensure that the dot-dashed line passes through the white region at all.
This last point has been emphasised by Hogan (2007). Figure 7 shows a slice through parameter space, spanned by the electron mass (me) and the down-up quark mass difference (md – mu). The condition labelled ‘no nuclei’ was discussed in Section 4.8, point 10. The line labelled ‘no atoms’ is the same condition as point 1, expressed in terms of the quark masses. The thin solid vertical line shows ‘a constraint from a particular SO(10) grand unified scenario’ which fixes md/me. Hogan notes:
[I]f the SO(10) model is the right one, it seems lucky that its trajectory passes through the region that allows for molecules. The answer could be that even the gauge symmetries and particle content also have an anthropic explanation.
The effect of grand unification on fine-tuning is discussed in Barrow & Tipler (1986, p. 354). They found that GUTs provided the tightest anthropic bounds on the fine structure constant, associated with the decay of the proton into a positron and the requirement of grand unification below the Planck scale. These limits are shown in Figure 6 as solid black lines.
Regarding the spectrum of fundamental particles, Cahn (1996) notes that if the couplings are fixed at high energy, then their value at low energy depends on the masses of particles only ever seen in particle accelerators. For example, changing the mass of the top quark affects the fine-structure constant and the mass of the proton (via ΛQCD). While the dependence on mt is not particularly dramatic, it would be interesting to quantify such anthropic limits within GUTs.
Note also that, just as there is more than one way to unify the forces of the standard model — SU(5), SO(10), E8 and more — there is also more than one way to break the GUT symmetry. I will defer to the expertise of Schellekens (2008).
‘[T]here is a more serious problem with the concept of uniqueness here. The groups SU(5) and SO(10) also have other subgroups beside SU(3) × SU(2) × U(1). In other words, after climbing out of our own valley and reaching the hilltop of SU(5), we discover another road leading down into a different valley (which may or may not be inhabitable).’
In other words, we not only need the right GUT symmetry, we need to make sure it breaks in the right way.
A deeper perspective on GUTs comes from string theory — I will follow the discussion in Schellekens (2008, p. 62ff.). Since string theory unifies the four fundamental forces at the Planck scale, it doesn’t really need grand unification. That is, there is no particular reason why three of the forces should unify first, three orders of magnitude below the Planck scale. It seems at least as easy to get the standard model directly, without bothering with grand unification. This could suggest that there are anthropic reasons for why we (possibly) live in a GUT universe. Grand unification provides a mechanism for baryon number violation and thus baryogenesis, though such theories are currently out of favour.
We conclude that anthropic reasoning seems to provide interesting limits on GUTs, though much work remains to be done in this area.
4.8.3 Conclusion
Suppose Bob sees Alice throw a dart and hit the bullseye. ‘Pretty impressive, don’t you think?’, says Alice. ‘Not at all’, says Bob, ‘the point-of-impact of the dart can be explained by the velocity with which the dart left your hand. No fine-tuning is needed.’ On the contrary, the fine-tuning of the point of impact (i.e. the smallness of the bullseye relative to the whole wall) is evidence for the fine-tuning of the initial velocity.
This fallacy alone makes much of Chapters 7 to 10 of Foft irrelevant. The question of the fine-tuning of these more fundamental parameters is not even asked, making the whole discussion a cane toad solution. Stenger has given us no reason to think that the life-permitting region is larger, or possibility space smaller, than has been calculated in the fine-tuning literature. The parameters of the standard model remain some of the best understood and most impressive cases of fine-tuning.
4.9 Dimensionality of Spacetime
A number of authors have emphasised the life-permitting properties of the particular combination of one time- and three space-dimensions, going back to Ehrenfest (1917) and Whitrow (1955), summarised in Barrow & Tipler (1986) and Tegmark (1997)32. Figure 9 shows the summary of the constraints on the number of space and time dimensions. The number of space dimensions is one of Rees’ ‘Just Six Numbers’. Foft addresses the issue:
Figure 9 Anthropic constraints on the dimensionality of spacetime (from Tegmark 1997). UNPREDICTABLE: the behaviour of your surroundings cannot be predicted using only local, finite accuracy data, making storing and processing information impossible. UNSTABLE: no stable atoms or planetary orbits. TOO SIMPLE: no gravitational force in empty space and severe topological problems for life. TACHYONS ONLY: energy is a vector, and rest mass is no barrier to particle decay. For example, an electron could decay into a neutron, an antiproton and a neutrino. Life is perhaps possible in very cold environments. Reproduced with permission of IOP Publishing Ltd.
‘Martin Rees proposes that the dimensionality of the universe is one of six parameters that appear particularly adjusted to enable life ...Clearly Rees regards the dimensionality of space as a property of objective reality. But is it? I think not. Since the space-time model is a human invention, so must be the dimensionality of space-time. We choose it to be three because it fits the data. In the string model, we choose it to be ten. We use whatever works, but that does not mean that reality is exactly that way.’ (Foft 51)
In response, we do not need to think of dimensionality as a property of objective reality. We just rephrase the claim: instead of ‘if space were not three dimensional, then life would not exist’, we claim ‘if whatever exists were not such that it is accurately described on macroscopic scales by a model with three space dimensions, then life would not exist’. This (admittedly inelegant) sentence makes no claims about the universe being really three-dimensional. If ‘whatever works’ was four dimensional, then life would not exist, whether the number of dimensions is simply a human invention or an objective fact about the universe. We can still use the dimensionality of space in counterfactual statements about how the universe could have been.
String theory is actually an excellent counterexample to Stenger’s claims. String theorists are not content to posit ten dimensions and leave it at that. They must compactify all but 3+1 of the extra dimensions for the theory to have a chance of describing our universe. This fine-tuning case refers to the number of macroscopic or ‘large’ space dimensions, which both string theory and classical physics agree to be three. The possible existence of small, compact dimensions is irrelevant.
Finally, Stenger tells us (Foft 48) that ‘when a model has passed many risky tests ...we can begin to have confidence that it is telling us something about the real world with certainty approaching 100 percent’. One wonders how the idea that space has three (large) dimensions fails to meet this criterion. Stenger’s worry seems to be that the three-dimensionality of space may not be a fundamental property of our universe, but rather an emergent one. Our model of space as a subset of ℝ^3 may crumble into spacetime foam below the Planck length.33 But emergent does not imply subjective. Whatever the fundamental properties of spacetime are, it is an objective fact about physical reality — by Stenger’s own criterion — that in the appropriate limit space is accurately modelled by ℝ^3.
The confusion of Stenger’s response is manifest in the sentence: ‘We choose three [dimensions] because it fits the data’ (Foft 51). This isn’t much of a choice. One is reminded of the man who, when asked why he chose to join the line for ‘non-hen-pecked husbands’, answered, ‘because my wife told me to’. The universe will let you choose, for example, your unit of length. But you cannot decide that the macroscopic world has four space dimensions. It is a mathematical fact that in a universe with four spatial dimensions you could, with a judicious choice of axis, make a left-footed shoe into a right-footed one by rotating it. Our inability to perform such a transformation is not the result of physicists arbitrarily deciding that, in this spacetime model we’re inventing, space will have three dimensions.
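For the record, the mathematical fact being invoked is elementary. Here is a coordinate sketch, with w standing for the hypothetical fourth spatial axis: a rotation by π in the x–w plane of ℝ^4 acts as

(x, y, z, w) → (x cos π − w sin π, y, z, x sin π + w cos π) = (−x, y, z, −w).

Restricted to the w = 0 hyperplane in which the shoe sits, this is the reflection (x, y, z) → (−x, y, z); a left shoe is carried rigidly and continuously into its mirror image.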
5 The Multiverse
On Boxing Day, 2002, Powerball announced that Andrew J. Whittaker Jr. of West Virginia had won $314.9 million in their lottery. The odds of this event are 1 in 120 526 770. How could such an unlikely event occur? Should we accuse Mr Whittaker of cheating? Probably not, because a more likely explanation is that a great many different tickets were sold, increasing the chances that someone would win.
The multiverse is just such an explanation. Perhaps there are more universes out there (in some sense), sufficiently numerous and varied that it is not too improbable that at least one of them would be in the life-permitting subset of possible-physics-space. And, just as Powerball wouldn’t announce that ‘Joe Smith of Chicago didn’t win the lottery today’, so there is no one in the life-prohibiting universes to wonder what went wrong.
Stenger says (Foft 24) that he will not need to appeal to a multiverse in order to explain fine-tuning. He does, however, keep the multiverse close in case of emergencies.
‘Cosmologists have proposed a very simple solution to the fine-tuning problem. Their current models strongly suggest that ours is not the only universe but part of a multiverse containing an unlimited number of individual universes extending an unlimited distance in all directions and for an unlimited time in the past and future. ...Modern cosmological theories do indicate that ours is just one of an unlimited number of universes, and theists can give no reason for ruling them out.’ (Foft 22,42)
Firstly, the difficulty in ruling out multiverses speaks to their unfalsifiability, rather than their steadfastness in the face of cosmological data. There is very little evidence, one way or the other. Moreover, there are plenty of reasons given in the scientific literature to be skeptical of the existence of a multiverse. Even their most enthusiastic advocate isn’t as certain about the existence of a multiverse as Stenger suggests.
A multiverse is not part of nor a prediction of the concordance model of cosmology. It is the existence of small, adiabatic, nearly-scale invariant, Gaussian fluctuations in a very-nearly-flat FLRW model (containing dark energy, dark matter, baryons and radiation) that is strongly suggested by the data. Inflation is one idea of how to explain this data. Some theories of inflation, such as chaotic inflation, predict that some of the properties of universes vary from place to place. Carr & Ellis (2008) write:
[Ellis:] A multiverse is implied by some forms of inflation but not others. Inflation is not yet a well defined theory and chaotic inflation is just one variant of it. ...the key physics involved in chaotic inflation (Coleman-de Luccia tunnelling) is extrapolated from known and tested physics to quite different regimes; that extrapolation is unverified and indeed unverifiable. The physics is hypothetical rather than tested. We are being told that what we have is ‘known physics → multiverse’. But the real situation is ‘known physics → hypothetical physics → multiverse’ and the first step involves a major extrapolation which may or may not be correct.
Stenger fails to distinguish between the concordance model of cosmology, which has excellent empirical support but in no way predicts a multiverse, and speculative models of the early universe, only some of which predict a multiverse, all of which rely on hypothetical physics, and none of which have unambiguous empirical support, if any at all.
5.1 How to Make A Multiverse
What does it take to specify a multiverse? Following Ellis, Kirchner & Stoeger (2004), we need to:
1. Determine the set of possible universes ℳ.
2. Characterise each universe in ℳ by a set of distinguishing parameters p, being careful to create equivalence classes of physically identical universes with different p. The parameters p will need to specify the laws of nature, the parameters of those laws and the particular solution to those laws that describes the given member m of ℳ, which usually involves initial or boundary conditions.
3. Propose a distribution function f(m) on ℳ, specifying how many times each possible universe m is realised. Note that simply saying that all possibilities exist only tells us that f(m) > 0 for all m in ℳ. It does not specify f(m).
4. Define a distribution function over continuous parameters, relative to a measure π, which assigns a probability space volume to each parameter increment.
We would also like to know the set of universes which allow the existence of conscious observers — the anthropic subset.
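To make steps 1–4 concrete, here is a toy one-parameter multiverse; the prior f, the measure and the life-permitting window are all invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-2: possible universes, each characterised by one parameter x >= 0.
# Step 3: a distribution function f, specifying how often each is realised.
def f(x):
    return np.exp(-x)  # invented prior; nothing in physics selects it

# Step 4: a measure on the continuous parameter (here, uniform dx on [0, 20]).
x = rng.uniform(0.0, 20.0, size=1_000_000)
w = f(x)  # weight of each sampled universe

# The anthropic subset: an invented life-permitting window.
life = (x > 2.0) & (x < 2.1)

p_life = w[life].sum() / w.sum()
print(f"fraction of universes (by this measure) permitting life: {p_life:.2e}")
# Change f or the measure and this number changes arbitrarily: a proposal
# that cannot justify f and the measure makes no definite prediction.
```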
As Ellis et al. (2004) point out, any such proposal will have to deal with the problems of what determines ℳ, actualized infinities (in ℳ, f(m) and the spatial extent of universes) and non-renormalisability, the parameter dependence and non-uniqueness of π, and how one could possibly observationally confirm any of these quantities. If some meta-law is proposed to physically generate a multiverse, then we need to postulate not just a.) that the meta-law holds in this universe, but b.) that it holds in some pre-existing metaspace beyond our universe. There is no unambiguous evidence in favour of a.) for any multiverse, and b.) will surely forever hold the title of the most extreme extrapolation in all of science, if indeed it can be counted as part of science. We turn to this topic now.
5.2 Is it Science?
Could a multiverse proposal ever be regarded as scientific? Foft 228 notes the similarity between undetectable universes and undetectable quarks, but the analogy is not a good one. The properties of quarks — mass, charge, spin, etc. — can be inferred from measurements. Quarks have a causal effect on particle accelerator measurements; if the quark model were wrong, we would know about it. In contrast, we cannot observe any of the properties of a multiverse, as they have no causal effect on our universe. We could be completely wrong about everything we believe about these other universes and no observation could correct us. The information is not here. The history of science has repeatedly taught us that experimental testing is not an optional extra. The hypothesis that a multiverse actually exists will always be untestable.
The most optimistic scenario is where a physical theory, which has been well-tested in our universe, predicts a universe-generating mechanism. Even then, there would still be questions beyond the reach of observation, such as whether the necessary initial conditions for the generator hold in the metaspace, and whether there are modifications to the physical theory that arise at energy scales or on length scales relevant to the multiverse but beyond testing in our universe. Moreover, the process by which a new universe is spawned almost certainly cannot be observed.
5.3 The Principle of Mediocrity
One way of testing a particular multiverse proposal is the so-called principle of mediocrity. This is a self-consistency test — it cannot pick out a unique multiverse as the ‘real’ multiverse — but can be quite powerful. We will present the principle using an illustration. Boltzmann (1895), having discussed the discovery that the second law of thermodynamics is statistical in nature, asks why the universe is currently so far from thermal equilibrium. Perhaps, Boltzmann says, the universe as a whole is in thermal equilibrium. From time to time, however, a random statistical fluctuation will produce a region which is far from equilibrium. Since life requires low entropy, it could only form in such regions. Thus, a randomly chosen region of the universe would almost certainly be in thermal equilibrium. But if one were to take a survey of all the intelligent life in such a universe, one would find them all scratching their heads at the surprisingly low entropy of their surroundings.
It is a brilliant idea, and yet something is wrong34. At most, life needs a low entropy fluctuation a few tens of Mpc in size — cosmological structure simulations show that the rest of the universe has had virtually no effect on galaxy/star/planet/life formation where we are. And yet, we find ourselves in a low entropy region that is tens of thousands of Mpc in size, as far as our telescopes can see.
Why is this a problem? Because the probability of a thermal fluctuation decreases exponentially with its volume. This means that a random observer is overwhelmingly likely to observe that they are in the smallest fluctuation able to support an observer. If one were to take a survey of all the life in the multiverse, an incredibly small fraction would observe that they are inside a fluctuation whose volume is at least a billion times larger than their existence requires. In fact, our survey would find vastly more observers who were simply isolated brains that fluctuated into existence preloaded with false thoughts about being in a large fluctuation. It is more likely that we are wrong about the size of the universe, that the distant galaxies are just a mirage on the face of the thermal equilibrium around us. The Boltzmann multiverse is thus definitively ruled out.
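The scaling behind this argument can be made explicit (a rough sketch; the absolute entropy deficit is illustrative, and only the volume dependence matters):

P(fluctuation) ∝ e^(−ΔS),  with  ΔS ∝ V,

so that

P(10^9 V_min) / P(V_min) ∼ e^(−(10^9 − 1) ΔS_min).

For any macroscopic entropy deficit ΔS_min, a fluctuation a billion times larger than the minimum is doubly-exponentially less probable than the smallest life-permitting one.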
5.4 Coolness and the Measure Problem
Do more modern multiverse proposals escape the mediocrity test? Tegmark (2005) discusses what is known as the coolness problem, also known as the youngness paradox. Suppose that inflation is eternal, in the sense (Guth 2007) that the universe is always a mix of inflating and non-inflating regions. In our universe, inflation ended 13.7 billion years ago and a period of matter-dominated, decelerating expansion began. Meanwhile, other regions continued to inflate. Let’s freeze the whole multiverse now, and take our survey clipboard around to all parts of the multiverse. In the regions that are still inflating, there is almost no matter and so no life. So we need to look for life in the parts that have stopped inflating. Whenever we find an intelligent life form, we’ll ask how long ago their part of the universe stopped inflating. Since the temperature of a post-inflation region is at its highest just as inflation ends and drops as the universe expands, we could equivalently ask: what is the temperature of the CMB in your universe?
The results of this survey would be rather surprising: an extremely small fraction of life-permitting universes are as old and cold as ours. Why? Because other parts of the universe continued to inflate after ours had stopped. These regions become exponentially larger, and thus nucleate exponentially more matter-dominated regions, all of which are slightly younger and warmer than ours. There are two effects here: there are many more younger universes, but they will have had less time to make intelligent life. Which effect wins? Are there more intelligent observers who formed early in younger universes or later in older universes? It turns out that the exponential expansion of inflation wins rather comfortably. For every observer in a universe as old as ours, there are 10^(10^38) observers who live in a universe that is one second younger. The probability of observing a universe with a CMB temperature of 2.75 K or less is approximately 1 in 10^(10^56).
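The double exponential can be reproduced with a rough estimate, assuming for illustration that the nucleation of post-inflation regions simply tracks the inflating volume:

V(t) ∝ e^(3Ht)  ⇒  (number stopping at t + Δt) / (number stopping at t) ≈ e^(3HΔt).

For GUT-scale inflation, H is very roughly within an order of magnitude of 10^38 s^−1, so Δt = 1 s gives e^(3HΔt) ∼ 10^(10^38), which is the kind of figure quoted above.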
Alas! Is this the end of the inflationary multiverse as we know it? Not necessarily. The catch comes in the seemingly innocent word now. We are considering the multiverse at a particular time. But general relativity will not allow it — there is no unique way to specify ‘now’. We can’t just compare our universe with all the other universes in existence ‘now’. But we must be able to compare the properties of our universe with some subset of the multiverse — otherwise the multiverse proposal cannot make predictions. This is the ‘measure problem’ of cosmology, on which there is an extensive literature — Page (2011a) lists 70 scientific papers. As Linde & Noorbala (2010) explain, one of the main problems is that ‘in an eternally inflating universe the total volume occupied by all, even absolutely rare types of the ‘universes’, is indefinitely large’. We are thus faced with comparing infinities. In fact, even if inflation is not eternal and the universe is finite, the measure problem can still paralyse our analysis.
The moral of the coolness problem is not that the inflationary multiverse has been falsified. Rather, it is this: no measure, no nothing. For a multiverse proposal to make predictions, it must be able to calculate and justify a measure over the set of universes it creates. The predictions of the inflationary multiverse are very sensitive to the measure, and thus in the absence of a measure, we cannot conclude that it survives the test of the principle of mediocrity.
5.5 Our Island in the Multiverse
A closer look at our island in parameter space reveals a refinement of the mediocrity test, as discussed by Aguirre (2007); see also Bousso, Hall & Nomura (2009). It is called the ‘principle of living dangerously’: if the prior probability for a parameter is a rapidly increasing (or decreasing) function, then we expect the observed value of the parameter to lie near the edge of the anthropically allowed range. One particular parameter for which this could be a problem is Q, as discussed in Section 4.5. Fixing other cosmological parameters, the anthropically allowed range is 10^−6 ≲ Q ≲ 10^−4. The observed value (~10^−5) isn’t close to either edge of the anthropic range. This creates problems for inflationary multiverses, whose priors for Q are either steep functions of Q in the anthropic range, or must be fine-tuned to peak near the observed value (Graesser et al. 2004; Feldstein, Hall & Watari 2005).
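A toy calculation shows the logic (the prior and its steepness are invented for illustration): if the prior rises steeply across the anthropic window, the typical observer sits near its edge.

```python
import numpy as np

rng = np.random.default_rng(1)

lo, hi = 1e-6, 1e-4   # toy anthropic window for Q
n = 10                # steepness of an invented prior, p(Q) ~ Q**n

# Sample Q from the prior restricted to the window (inverse-CDF method
# for p(Q) ~ Q**n on [lo, hi]).
u = rng.uniform(size=100_000)
Q = (lo**(n + 1) + u * (hi**(n + 1) - lo**(n + 1)))**(1.0 / (n + 1))

print(f"median observed Q = {np.median(Q):.2e}; window edge = {hi:.0e}")
# For a steeply rising prior, the median observer finds Q within about
# ten per cent of the window's upper edge, not in the middle near ~1e-5.
```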
The discovery of another life-permitting island in parameter space potentially creates a problem for the multiverse. If the other island is significantly larger than ours (for a given multiverse measure), then observers should expect to be on the other island. An example is the cold big bang, as described by Aguirre (2001). Aguirre’s aim in the paper is to provide a counterexample to what he calls the anthropic program: ‘the computation of P [the probability that a randomly chosen observer measures a given set of cosmological parameters]; if this probability distribution has a single peak at a set [of parameters] and if these are near the measured values, then it could be claimed that the anthropic program has ‘explained’ the values of the parameters of our cosmology’. Aguirre’s concern is a lack of uniqueness.
The cold big bang (CBB) is a model of the universe in which the (primordial) ratio of photons to baryons is η_γ ~ 1. To be a serious contender as a model of our universe (in which η_γ ~ 10^9) there would need to be an early population of luminous objects e.g. PopIII stars. Nucleosynthesis generally proceeds further than in our universe, creating an approximately solar metallicity intergalactic medium along with a 25% helium mass fraction35. Structure formation is not suppressed by CMB radiation pressure, and thus stars and galaxies require a smaller value of Q.
How much of a problem is the cold big bang to a multiverse explanation of cosmological parameters? Particles and antiparticles pair off and mutually annihilate to photons as the universe cools, so the excess of particles over antiparticles determines the value of ηγ. We are thus again faced with the absence of a successful theory of baryogenesis and leptogenesis. It could be that small values of ηγ, which correspond to larger baryon and lepton asymmetry, are very rare in the multiverse. Nevertheless, the conclusion of Aguirre (2001) seems sound: ‘[the CBB] should be discouraging for proponents of the anthropic program: it implies that it is quite important to know the [prior] probabilities P, which depend on poorly constrained models of the early universe’.
Does the cold big bang imply that cosmology need not be fine-tuned to be life-permitting? Aguirre (2001) claims that ξ(η_γ ~ 1, 10^−11 < Q < 10^−5) ~ ξ(η_γ ~ 10^9, 10^−6 < Q < 10^−4), where ξ is the number of solar mass stars per baryon. At best, this would show that there is a continuous life-permitting region, stretching along the η_γ axis. Various compensating factors are needed along the way — we need a smaller value of Q, which renders atomic cooling inefficient, so we must rely on molecular cooling, which requires higher densities and metallicities, but not too high or planetary orbits will be disrupted by collisions (whose frequency increases as η_γ^−4 Q^(7/2)). Aguirre (2001) only considers the case η_γ ~ 1 in detail, so it is not clear whether the CBB island connects to the HBB island (10^6 ≲ η_γ ≲ 10^11) investigated by Tegmark & Rees (1998). Either way, life does not have free run of parameter space.
5.6 Boltzmann’s Revenge
The spectre of the demise of Boltzmann’s multiverse haunts more modern cosmologies in two different ways. The first is the possibility of Boltzmann brains. We should be wary of any multiverse which allows single brains, imprinted with memories, to fluctuate into existence. The worry is that, for every observer who really is a carbon-based life form who evolved on a planet orbiting a star in a galaxy, there are vastly more for whom this is all a passing dream, the few, fleeting fancies of a phantom fluctuation. This could be a problem in our universe — if the current, accelerating phase of the universe persists arbitrarily into the future, then our universe will become vacuum dominated. Observers like us will die out, and eventually Boltzmann brains, dreaming that they are us, will outnumber us. The most serious problem is that, unlike biologically evolved life like ourselves, Boltzmann brains do not require a fine-tuned universe. If we condition on observers, rather than biologically evolved life, then the multiverse may fail to predict a universe like ours. The multiverse would not explain why our universe is fine-tuned for biological life (R. Collins, forthcoming).
Another argument against the multiverse is given by Penrose (2004, p. 763ff). As with the Boltzmann multiverse, the problem is that this universe seems uncomfortably roomy.
‘...do we really need the whole observable universe, in order that sentient life can come about? This seems unlikely. It is hard to imagine that even anything outside our galaxy would be needed ...Let us be very generous and ask that a region of radius one tenth of the ...observable universe must resemble the universe that we know, but we do not care about what happens outside that radius ...Assuming that inflation acts in the same way on the small region [that inflated into the one-tenth smaller universe] as it would on the somewhat larger one [that inflated into ours], but producing a smaller inflated universe, in proportion, we can estimate how much more frequently the Creator comes across the smaller than the larger regions. The figure is no better than 10^(10^123). You see what an incredible extravagance it was (in terms of probability) for the Creator to bother to produce this extra distant part of the universe, that we don’t actually need ...for our existence.’
In other words, if we live in a multiverse generated by a process like chaotic inflation, then for every observer who observes a universe of our size, there are 10^(10^123) who observe a universe that is just 10 times smaller. This particular multiverse dies the same death as the Boltzmann multiverse. Penrose’s argument is based on the place of our universe in phase space, and is thus generic enough to apply to any multiverse proposal that creates more small universe domains than large ones. Most multiverse mechanisms seem to fall into this category.
5.7 Conclusion
A multiverse generated by a simple underlying mechanism is a remarkably seductive idea. The mechanism would be an extrapolation of known physics, that is, physics with an impressive record of explaining observations from our universe. The extrapolation would be natural, almost inevitable. The universe as we know it would be a very small part of a much larger whole. Cosmology would explore the possibilities of particle physics; what we know as particle physics would be mere by-laws in an unimaginably vast and variegated cosmos. The multiverse would predict what we expect to observe by predicting what conditions hold in universes able to support observers.
Sadly, most of this scenario is still hypothetical. The goal of this section has been to demonstrate the mountain that the multiverse is yet to climb, the challenges that it must face openly and honestly. The multiverse may yet solve the fine-tuning of the universe for intelligent life, but it will not be an easy solution. ‘Multiverse’ is not a magic word that will make all the fine-tuning go away. For a popular discussion of these issues, see Ellis (2011).
6 Conclusions and Future
We conclude that the universe is fine-tuned for the existence of life. Of all the ways that the laws of nature, constants of physics and initial conditions of the universe could have been, only a very small subset permits the existence of intelligent life.
Will future progress in fundamental physics solve the problem of the fine-tuning of the universe for intelligent life, without the need for a multiverse? There are a few ways that this could happen. We could discover that the set of life-permitting universes is much larger than previously thought. This is unlikely, since the physics relevant to life is low-energy physics, and thus well-understood. Physics at the Planck scale will not rewrite the standard model of particle physics. It is sometimes objected that we do not have an adequate definition of ‘an observer’, and we do not know all possible forms of life. This is reason for caution, but not a fatal flaw of fine-tuning. If the strong force were weaker, the periodic table would consist of only hydrogen. We do not need a rigorous definition of life to reasonably conclude that a universe with one chemical reaction (2H → H2) would not be able to create and sustain the complexity necessary for life.
Alternatively, we could discover that the set of possible universes is much smaller than we thought. This scenario is much more interesting. What if, when we really understand the laws of nature, we will realise that they could not have been different? We must be clear about the claim being made. If the claim is that the laws of nature are fixed by logical and mathematical necessity, then this is demonstrably wrong — theoretical physicists find it rather easy to describe alternative universes that are free from logical contradiction (Davies, in Davies 2003). The category of ‘physically possible’ isn’t much help either, as the laws of nature tell us what is physically possible, but not which laws are possible.
It is not true that fine-tuning must eventually yield to the relentless march of science. Fine-tuning is not a typical scientific problem, that is, a phenomenon in our universe that cannot be explained by our current understanding of physical laws. It is not a gap. Rather, we are concerned with the physical laws themselves. In particular, the anthropic coincidences are not like, say, the coincidence between inertial mass and gravitational mass in Newtonian gravity, which is a coincidence between two seemingly independent physical quantities. Anthropic coincidences, on the other hand, involve a happy consonance between a physical quantity and the requirements of complex, embodied intelligent life. The anthropic coincidences are so arresting because we are accustomed to thinking of physical laws and initial conditions as being unconcerned with how things turn out. Physical laws are material and efficient causes, not final causes. There is, then, no reason to think that future progress in physics will render a life-permitting universe inevitable. When physics is finished, when the equation is written on the blackboard and fundamental physics has gone as deep as it can go, fine-tuning may remain, basic and irreducible.
Perhaps the most optimistic scenario is that we will eventually discover a simple, beautiful physical principle from which we can derive a unique physical theory, whose unique solution describes the universe as we know it, including the standard model, quantum gravity, and (dare we hope) the initial conditions of cosmology. While this has been the dream of physicists for centuries, there is not the slightest bit of evidence that this idea is true. It is almost certainly not true of our best hope for a theory of quantum gravity, string theory, which has ‘anthropic principle written all over it’ (Schellekens 2008). The beauty of its principles has not saved us from the complexity and contingency of the solutions to its equations. Beauty and simplicity are not necessity.
Finally, it would be the ultimate anthropic coincidence if beauty and complexity in the mathematical principles of the fundamental theory of physics produced all the necessary low-energy conditions for intelligent life. This point has been made by a number of authors, e.g. Carr & Rees (1979) and Aguirre (2005). Here is Wilczek (2006b):
‘It is logically possible that parameters determined uniquely by abstract theoretical principles just happen to exhibit all the apparent fine-tunings required to produce, by a lucky coincidence, a universe containing complex structures. But that, I think, really strains credulity.’
Adams, F. C., 2008, JCAP, 2008, 010
Agrawal, V., Barr, S. M., Donoghue, J. F. and Seckel, D., 1998a, PhRvL, 80, 1822
Agrawal, V., Barr, S. M., Donoghue, J. F. and Seckel, D., 1998b, PhRvD, 57, 5480
Aguirre, A., 1999, ApJ, 521, 17
Aguirre, A., 2001, PhRvD, 64, 083508
Aguirre, A., 2005, ArXiv:astro-ph/0506519
Aguirre, A., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 367
Aitchison, I. & Hey, A., 2002, Gauge Theories in Particle Physics: Volume 1 — From Relativistic Quantum Mechanics to QED (3rd edition; New York: Taylor & Francis)
Arkani-Hamed, N. and Dimopoulos, S., 2005, JHEP, 2005, 073
Arkani-Hamed, N., Dimopoulos, S. & Kachru, S., 2005, ArXiv:hep-th/0501082
Barnes, L. A., Francis, M. J., Lewis, G. F. and Linder, E. V., 2005, PASA, 22, 315
Barr, S. M. and Khan, A., 2007, PhRvD, 76, 045002
Barrow, J. D. & Tipler, F. J., 1986, The Anthropic Cosmological Principle (Oxford: Clarendon Press)
Bekenstein, J. D., 1973, PhRvD, 7, 2333
Boltzmann, L., 1895, Natur, 51, 413
Bousso, R., 2008, GReGr, 40, 607
Bousso, R. and Leichenauer, S., 2009, PhRvD, 79, 063506
Bousso, R. and Leichenauer, S., 2010, PhRvD, 81, 063524
Bousso, R., Hall, L. and Nomura, Y., 2009, PhRvD, 80, 063510
Bradford, R. A. W., 2009, JApA, 30, 119
Brandenberger, R. H., 2011, ArXiv:astro-ph/1103.2271
Burgess, C. & Moore, G., 2006, The Standard Model: A Primer (Cambridge: Cambridge University Press)
Cahn, R., 1996, RvMP, 68, 951
Carr, B. J. and Ellis, G. F. R., 2008, A&G, 49, 2.29
Carr, B. J. and Rees, M. J., 1979, Natur, 278, 605
Carroll, S. M., 2001, LRR, 4, 1
Carroll, S. M., 2003, Spacetime and Geometry: An Introduction to General Relativity (San Francisco: Benjamin Cummings)
Carroll, S. M., 2008, SciAm, 298, 48
Carroll, S. M. & Tam, H., 2010, ArXiv:astro-ph/1007.1417
Carter, B., 1974, in IAU Symposium, Vol. 63, Confrontation of Cosmological Theories with Observational Data, ed. M. S. Longair (Boston: D. Reidel Pub. Co.), 291
Clavelli, L. & White, R. E., 2006, ArXiv:hep-ph/0609050
Cohen, B. L., 2008, PhTea, 46, 285
Collins, R., 2003, in The Teleological Argument and Modern Science, ed. N. Manson (London: Routledge), 178
Csótó, A., Oberhummer, H. and Schlattl, H., 2001, NuPhA, 688, 560
Damour, T. and Donoghue, J. F., 2008, PhRvD, 78, 014014
Davies, P. C. W., 1972, JPhA, 5, 1296
Davies, P., 2003, in God and Design: The Teleological Argument and Modern Science, ed. N. A. Manson (London: Routledge), 147
Davies, P. C. W., 2006, The Goldilocks Enigma: Why is the Universe Just Right for Life? (London: Allen Lane)
Davies, C. et al., 2004, PhRvL, 92,
Dawkins, R., 1986, The Blind Watchmaker (New York: W. W. Norton & Company)
Dawkins, R., 2006, The God Delusion (New York: Houghton Mifflin Harcourt)
De Boer, W., 1994, PrPNP, 33, 201
De Boer, W. and Sander, C., 2004, PhLB, 585, 276
Donoghue, J. F., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 231
Donoghue, J. F., Dutta, K., Ross, A. and Tegmark, M., 2010, PhRvD, 81,
Dorling, J., 1970, AmJPh, 38, 539
Dürr, S. et al., 2008, Sci, 322, 1224
Durrer, R. and Maartens, R., 2007, GReGr, 40, 301
Dyson, F. J., 1971, SciAm, 225, 51
Earman, J., 2003, in Symmetries in Physics: Philosophical Reflections, ed. K. Brading & E. Castellani (Cambridge: Cambridge University Press), 140
Ehrenfest, P., 1917, Proc. Amsterdam Academy, 20, 200
Ekström, S., Coc, A., Descouvemont, P., Meynet, G., Olive, K. A., Uzan, J.-P. and Vangioni, E., 2010, A&A, 514, A62
Ellis, G. F. R., 1993, in The Anthropic Principle, ed. F. Bertola & U. Curi (Oxford: Oxford University Press), 27
Ellis, G. F. R., 2011, SciAm, 305, 38
Ellis, G. F. R., Kirchner, U. and Stoeger, W. R., 2004, MNRAS, 347, 921
Feldstein, B., Hall, L. and Watari, T., 2005, PhRvD, 72, 123506
Feldstein, B., Hall, L. and Watari, T., 2006, PhRvD, 74, 095011
Freeman, I. M., 1969, AmJPh, 37, 1222
Garriga, J. and Vilenkin, A., 2006, PThPS, 163, 245
Garriga, J., Livio, M. and Vilenkin, A., 1999, PhRvD, 61, 023503
Gasser, J. and Leutwyler, H., 1982, PhR, 87, 77
Gedalia, O., Jenkins, A. and Perez, G., 2011, PhRvD, 83,
Gibbons, G. W. and Turok, N., 2008, PhRvD, 77, 063516
Gibbons, G. W., Hawking, S. W. and Stewart, J. M., 1987, NuPhB, 281, 736
Gingerich, O., 2008, in Fitness of the Cosmos for Life: Biochemistry and Fine-Tuning, ed. J. D. Barrow, S. C. Morris, S. J. Freeland & C. L. Harper (Cambridge: Cambridge University Press), 20
Gould, A., 2010, ArXiv:hep-ph/1011.2761
Graesser, M. L., Hsu, S. D. H., Jenkins, A. and Wise, M. B., 2004, PhLB, 600, 15
Greene, B., 2011, The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos (New York: Knopf)
Griffiths, D. J., 2008, Introduction to Elementary Particles (Weinheim: Wiley-VCH)
Gurevich, L., 1971, PhLA, 35, 201
Guth, A. H., 1981, PhRvD, 23, 347
Guth, A. H., 2007, JPhA, 40, 6811
Hall, L. and Nomura, Y., 2008, PhRvD, 78, 035001
Hall, L. and Nomura, Y., 2010, JHEP, 2010, 76
Harnik, R., Kribs, G. and Perez, G., 2006, PhRvD, 74, 035006
Harrison, E. R., 1970, PhRvD, 1, 2726
Harrison, E. R., 2003, Masks of the Universe (2nd edition; Cambridge: Cambridge University Press)
Hartle, J. B., 2003, Gravity: An Introduction to Einstein's General Relativity (San Francisco: Addison Wesley)
Hawking, S. W., 1975, CMaPh, 43, 199
Hawking, S. W., 1988, A Brief History of Time (Toronto: Bantam)
Hawking, S. W. & Mlodinow L., 2010, The Grand Design (Toronto: Bantam)
Hawking, S. W. and Page, D. N., 1988, NuPhB, 298, 789
Healey, R., 2007, Gauging What's Real: The Conceptual Foundations of Gauge Theories (New York: Oxford University Press)
Hogan, C. J., 2000, RvMP, 72, 1149
Hogan, C. J., 2006, PhRvD, 74, 123514
Hogan, C. J., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 221
Hollands, S. & Wald, R. M., 2002a, ArXiv:hep-th/0210001
Hollands, S. and Wald, R. M., 2002b, GReGr, 34, 2043
Iwasaki, Y., 2000, PThPS, 138, 1
Jaffe, R., Jenkins, A. and Kimchi, I., 2009, PhRvD, 79, 065014
Jeltema, T. and Sher, M., 1999, PhRvD, 61, 017301
Kaku, M., 1993, Quantum Field Theory: A Modern Introduction (New York: Oxford University Press)
King, R. A., Siddiqi, A., Allen, W. D. and Schaefer, H. F. I., 2010, PhRvA, 81, 042523
Kofman, L., Linde, A. and Mukhanov, V., 2002, JHEP, 2002, 057
Kostelecký, V. and Russell, N., 2011, RvMP, 83, 11
Laiho, J., 2011, ArXiv:hep-ph/1106.0457
Leslie, J., 1989, Universes (London: Routledge)
Liddle, A., 1995, PhRvD, 51, R5347
Lieb, E. and Yau, H.-T., 1988, PhRvL, 61, 1695
Linde, A., 2008, in Lecture Notes in Physics, Vol. 738, Inflationary Cosmology, ed. M. Lemoine, J. Martin & P. Peter (Berlin, Heidelberg: Springer), 1
Linde, A. and Noorbala, M., 2010, JCAP, 2010, 8
Linde, A. & Vanchurin, V., 2010, ArXiv:hep-th/1011.0119
Livio, M., Hollowell, D., Weiss, A. and Truran, J. W., 1989, Natur, 340, 281
Lynden-Bell, D., 1969, Natur, 223, 690
MacDonald, J. and Mullan, D. J., 2009, PhRvD, 80, 043507
Martin, S. P., 1998, in Perspectives on Supersymmetry, ed. G. L. Kane (Singapore: World Scientific Publishing), 1
Martin, C. A., 2003, in Symmetries in Physics: Philosophical Reflections, ed. K. Brading & E. Castellani (Cambridge: Cambridge University Press), 29
Misner, C. W., Thorne, K. S. & Wheeler, J. A., 1973, Gravitation (San Francisco: W. H. Freeman and Co)
Mo, H., van den Bosch, F. C. & White, S. D. M., 2010, Galaxy Formation and Evolution (Cambridge: Cambridge University Press)
Nagashima, Y., 2010, Elementary Particle Physics: Volume 1: Quantum Field Theory and Particles (Wiley-VCH)
Nakamura, K., 2010, JPhG, 37, 075021
Norton, J. D., 1995, Erkenntnis, 42, 223
Oberhummer, H., 2001, NuPhA, 689, 269
Oberhummer, H., Pichler, R. & Csótó, A., 1998, ArXiv:nuclth/9810057
Oberhummer, H., Csótó, A. & Schlattl, H., 2000a, in The Future of the Universe and the Future of Our Civilization, ed. V. Burdyuzha & G. Khozin (Singapore: World Scientific Publishing), 197
Oberhummer, H., Csótó, A. and Schlattl, H., 2000b, Sci, 289, 88
Padmanabhan, T., 2007, GReGr, 40, 529
Page, D. N., 2011a, JCAP, 2011, 031
Page, D. N., 2011b, ArXiv e-prints: 1101.2444
Peacock, J. A., 1999, Cosmological Physics (Cambridge: Cambridge University Press)
Peacock, J. A., 2007, MNRAS, 379, 1067
Penrose, R., 1959, MPCPS, 55, 137
Penrose, R., 1979, in General Relativity: An Einstein Centenary Survey, ed. S. W. Hawking & W. Israel (Cambridge: Cambridge University Press), 581
Penrose, R., 1989, NYASA, 571, 249
Penrose, R., 2004, The Road to Reality: A Complete Guide to the Laws of the Universe (London: Vintage)
Phillips, A. C., 1999, The Physics of Stars (2nd edition; Chichester: Wiley)
Pogosian, L. and Vilenkin, A., 2007, JCAP, 2007, 025
Pokorski, S., 2000, Gauge Field Theories (Cambridge: Cambridge University Press)
Polchinski, J., 2006, ArXiv:hep-th/0603249
Polkinghorne, J. C. & Beale, N., 2009, Questions of Truth: Fifty-One Responses to Questions about God, Science, and Belief (Louisville: Westminster John Knox Press)
Pospelov, M. and Romalis, M., 2004, PhT, 57, 40
Price, H., 1997, in Time's Arrows Today: Recent Physical and Philosophical Work on the Direction of Time, ed. S. F. Savitt (Cambridge: Cambridge University Press), 66
Price, H., 2006, Time and Matter – Proceedings of the International Colloquium on the Science of Time, ed. I. I. Bigi (Singapore: World Scientific Publishing), 209
Redfern, M., 2006, The Anthropic Universe, ABC Radio National, available at http://www.abc.net.au/rn/scienceshow/stories/2006/1572643.htm
Rees, M. J., 1999, Just Six Numbers: The Deep Forces that Shape the Universe (New York: Basic Books)
Sakharov, A. D., 1967, JETPL, 5, 24
Schellekens, A. N., 2008, RPPh, 71, 072201
Schlattl, H., Heger, A., Oberhummer, H., Rauscher, T. and Csótó, A., 2004, ApSS, 291, 27
Schmidt, M., 1963, Natur, 197, 1040
Schrödinger, E., 1992, What Is Life? (Cambridge: Cambridge University Press)
Shaw, D. and Barrow, J. D., 2011, PhRvD, 83,
Smolin, L., 2007, in Universe or Multiverse?, ed. B. Carr (Cambridge: Cambridge University Press), 323
Steinhardt, P. J., 2011, SciAm, 304, 36
Strocchi, F., 2007, Symmetry Breaking (Berlin, Heidelberg: Springer)
Susskind, L., 2003, ArXiv:hep-th/0302219
Susskind, L., 2005, The Cosmic Landscape: String Theory and the Illusion of Intelligent Design (New York: Little, Brown and Company)
Taubes, G., 2002, Interview with Lisa Randall, ESI Special Topics, available at http://www.esitopics.com/brane/interviews/DrLisaRandall.html
Tegmark, M., 1997, CQGra, 14, L69
Tegmark, M., 1998, AnPhy, 270, 1
Tegmark, M., 2005, JCAP, 2005, 001
Tegmark, M. and Rees, M. J., 1998, ApJ, 499, 526
Tegmark, M., Vilenkin, A. and Pogosian, L., 2005, PhRvD, 71, 103523
Tegmark, M., Aguirre, A., Rees, M. J. and Wilczek, F., 2006, PhRvD, 73, 023505
Turok, N., 2002, CQGra, 19, 3449
Vachaspati, T. and Trodden, M., 1999, PhRvD, 61, 023502
Vilenkin, A., 2003, in Astronomy, Cosmology and Fundamental Physics, ed. P. Shaver, L. Dilella & A. Giméne (Berlin: Springer Verlag), 70
Vilenkin, A., 2006, ArXiv e-prints: hep-th/0610051
Vilenkin, A., 2010, JPhCS, 203, 012001
Weinberg, S., 1989, RvMP, 61, 1
Weinberg, S., 1994, SciAm, 271, 44
Weinberg, S., 2007, in Universe or Multiverse?, ed. B. J. Carr (Cambridge: Cambridge University Press), 29
Wheeler J. A., 1996, At Home in the Universe (New York: AIP Press)
Whitrow, G. J., 1955, BrJPhilosSci, VI, 13
Wilczek, F., 1997, in Critical Dialogues in Cosmology, ed. N. Turok (Singapore: World Scientific Publishing), 571
Wilczek, F., 2002, ArXiv:hep-ph/0201222
Wilczek, F., 2005, PhT, 58, 12
Wilczek, F., 2006a, PhT, 59, 10
Wilczek, F., 2006b, PhT, 59, 10
Zel'dovich, Y. B., 1964, SPhD, 9, 195
Zel'dovich, Y. B., 1972, MNRAS, 160, 1P
1 We may wish to stipulate that a given observer by definition only observes one universe. Such finer points will not affect our discussion.
2 The counter-argument presented in Stenger’s book (page 252), borrowing from a paper by Ikeda and Jeffreys, does not address this possibility. Rather, it argues against a deity which intervenes to sustain life in this universe. I have discussed this elsewhere: ikedajeff.notlong.com
3 Viz Top Tip: http://www.viz.co.uk/toptips.html
4 Hereafter, ‘Foft x’ will refer to page x of Stenger’s book.
5 References: Barrow & Tipler (1986), Carr & Rees (1979), Carter (1974), Davies (2006), Dawkins (2006), Redfern (2006) for Deutsch’s view on fine-tuning, Ellis (1993), Greene (2011), Guth (2007), Harrison (2003), Hawking & Mlodinow (2010, p. 161), Linde (2008), Page (2011b), Penrose (2004, p. 758), Polkinghorne & Beale (2009), Rees (1999), Smolin (2007), Susskind (2005), Tegmark et al. (2006), Vilenkin (2006), Weinberg (1994) and Wheeler (1996).
6 Note that it isn’t just that the rod appears to be shorter. Length contraction in special relativity is not just an optical illusion resulting from the finite speed of light. See, for example, Penrose (1959).
7 That is, the spacetime of a non-rotating, uncharged black hole.
8 See also the excellent articles by Martin (2003) and Earman (2003).
9 This may not be as clear-cut a disaster as is often asserted in the fine-tuning literature, going back to Dyson (1971). MacDonald & Mullan (2009) and Bradford (2009) have shown that the binding of the diproton is not sufficient to burn all the hydrogen to helium in big bang nucleosynthesis. For example, MacDonald & Mullan (2009) show that while an increase in the strength of the strong force by 13% will bind the diproton, a ~50% increase is needed to significantly affect the amount of hydrogen left over for stars. Also, Collins (2003) has noted that the decay of the diproton will happen too slowly for the resulting deuteron to be converted into helium, leaving at least some deuterium to power stars and take the place of hydrogen in organic compounds. Finally with regard to stars, Phillips (1999, p. 118) notes that: ‘It is sometimes suggested that the timescale for hydrogen burning would be shorter if it were initiated by an electromagnetic reaction instead of the weak nuclear reaction [as would be the case if the diproton were bound]. This is not the case, because the overall rate for hydrogen burning is determined by the rate at which energy can escape from the star, i.e. by its opacity. If hydrogen burning were initiated by an electromagnetic reaction, this reaction would proceed at about the same rate as the weak reaction, but at a lower temperature and density.’ However, stars in such a universe would be significantly different to our own, and detailed predictions for their formation and evolution have not been investigated.
10 Note that this is independent of xmax and ymax, and in particular holds in the limit xmax, ymax → ∞.
11 This requirement is set by the homogeneity of our universe. Regions that transition early will expand and dilute, and so for the entire universe to be homogeneous to within Q ≈ 10–5, the regions must begin their classical phase within Δt ≈ Qt.
12 This seems very unlikely. Regions of the universe which have collapsed and virialised have decoupled from the overall expansion of the universe, and so would have no way of knowing exactly when the expansion stalled and reversed. However, as Price (1997) lucidly explains, such arguments risk invoking a double standard, as they work just as well when applied backwards in time.
13 Carroll has raised this objection to Stenger (Foft 142), whose reply was to point out that the arrow of time always points away from the lowest entropy point, so we can always call that point the beginning of the universe. Once again, Stenger fails to understand the problem. The question is not why the low entropy state was at the beginning of the universe, but why the universe was ever in a low entropy state. The second law of thermodynamics tells us that the most probable world is one in which the entropy is always high. This is precisely what entropy quantifies. See Price (1997, 2006) for an excellent discussion of these issues.
14 These requirements can be found in any good cosmology textbook, e.g. Peacock (1999); Mo, van den Bosch & White (2010).
15 See also the discussion in Kofman, Linde & Mukhanov (2002) and Hollands & Wald (2002a).
16 Cosmic phase transitions are irreversible in the same sense that scrambling an egg is irreversible. The time asymmetry is a consequence of low entropy initial conditions, not the physics itself (Penrose 1989; Hollands & Wald 2002a).
17 We should also note that Carroll & Tam (2010) argue that the Gibbons-Hawking-Stewart canonical measure renders an inflationary solution to the flatness problem superfluous. This is a puzzling result — it would seem to show that non-flat FLRW universes are infinitely unlikely, so to speak. This result has been noted before. See Gibbons & Turok (2008) for a different point of view.
18 We use the Hubble constant to specify the particular time being considered.
19 The Arxiv version of this paper (arxiv.org/abs/1112.4647) includes an appendix that gives further critique of Stenger’s discussion of cosmology.
20 http://TegRees.notlong.com
21 Stenger’s Equation 12.22 is incorrect, or at least misleading. By the third Friedmann equation, ρ̇ = −3(ȧ/a)(1 + w)ρ, one cannot stipulate that the density ρ is constant unless one sets w = –1. Equation 12.22 is thus only valid for w = –1, in which case it reduces to Equation 12.21 and is indistinguishable from a cosmological constant. One can solve the Friedmann equations for w ≠ –1, for example, if the universe contains only quintessence, is spatially flat and w is constant, then a(t) = (t/t0)^(2/(3(1+w))), where t0 is the age of the universe.
22 Some of this section follows the excellent discussion by Polchinski (2006).
23 More precisely, to use the area element in Figure 5 as the probability measure, one is assuming a probability distribution that is linear in log10 G and log10 α. There is, of course, no problem in using logarithmic axes to illustrate the life-permitting region.
24 Hoyle’s prediction is not an ‘anthropic prediction’. As Smolin (2007) explains, the prediction can be formulated as follows: a.) Carbon is necessary for life. b.) There are substantial amounts of carbon in our universe. c.) If stars are to produce substantial amounts of carbon, then there must be a specific resonance level in carbon. d.) Thus, the specific resonance level in carbon exists. The conclusion does not depend in any way on the first, ‘anthropic’ premise. The argument would work just as well if the element in question were the inert gas neon, for which the first premise is (probably) false.
25 See also Oberhummer, Pichler & Csótó (1998); Oberhummer, Csótó & Schlattl (2000b); Csótó, Oberhummer & Schlattl (2001); Oberhummer (2001).
26 In the left plot, we hold mp constant, so we vary β = me/mp by varying the electron mass.
27 As with the stability of the diproton, there is a caveat. Weinberg (2007) notes that if the pp reaction p⁺ + p⁺ → ²H + e⁺ + νe is rendered energetically unfavourable by changing the fundamental masses, then the reaction p⁺ + e⁻ + p⁺ → ²H + νe will still be favourable so long as md – mu – me < 3.4 MeV. This is a weaker condition. Note, however, that the pep reaction is 400 times less likely to occur in our universe than pp, meaning that pep stars must burn hotter. Such stars have not been simulated in the literature. Note also that the full effect of an unstable deuteron on stars and their formation has not been calculated. Primordial helium burning may create enough carbon, nitrogen and oxygen to allow the CNO cycle to burn hydrogen in later generation stars.
28 Even this limit should be noted with caution, as it holds for constant ΛQCD. As ΛQCD appears to depend on α, the corresponding limit on α may be a different plane to the one shown in Figure 6.
29 In the absence of weak decay, the weakless universe will conserve each individual quark number.
30 The most charitable reading of Stenger’s claim is that he is referring to the constituent quark model, wherein the mass-energy of the cloud of virtual quarks and gluons that surround a valence quark in a composite particle is assigned to the quark itself. In this model, the quarks have masses of ~300 MeV. The constituent quark model is a non-relativistic phenomenological model which provides a simple approximation to the more fundamental but more difficult theory (QCD) that is useful at low energies. It is completely irrelevant to the cases of fine-tuning in the literature concerning quark masses (e.g. Agrawal et al. 1998a; Hogan 2000; Barr & Khan 2007), all of which discuss the bare (or current) quark masses. In fact, even a charge of irrelevance is too charitable — Stenger later quotes the quark masses as ~5 MeV, which is the current quark mass.
31 A few caveats. This estimate assumes that this small change in αU will not significantly change α; the dependence of α on αU seems to be flatter than linear, so this assumption appears to hold. Also, be careful in applying the limits on β in Figure 6 to the proton mass, as, where appropriate, only the electron mass was varied. For example, Region 1 depends on the proton-neutron mass difference, which doesn’t change with ΛQCD and thus does not place a constraint on αU.
32 See also Freeman (1969); Dorling (1970); Gurevich (1971), and the popular-level discussion in Hawking (1988, p. 180).
33 Or perhaps Euclidean space, or Minkowskian spacetime.
34 Actually, there are several things wrong, not least that such a scenario is unstable to gravitational collapse.
35 Stenger states that ‘[t]he cold big-bang model shows that we don’t necessarily need the Hoyle resonance, or even significant stellar nucleosynthesis, for life’. It shows nothing of the sort. The CBB does not alter nuclear physics and thus still relies on the triple-α process to create carbon in the early universe; see the more detailed discussion of CBB nucleosynthesis in Aguirre (1999, p. 22). Further, CBB does not negate the need for long-lived, nuclear-fueled stars as an energy source for planetary life. Aguirre (2001) is thus justifiably eager to demonstrate that stars will plausibly form in a CBB universe.
ab5294bea9fe58e6 | Information (ISSN 2078-2489), MDPI; doi:10.3390/info3040809. Article: Implementation of Classical Communication in a Quantum World. Chris Fields, 815 East Palace # 14, Santa Fe, NM 87501, USA; Tel.: +1-505-995-9859. Information 2012, 3(4), 809–831; received 11 July 2012, revised 31 October 2012, accepted 6 December 2012, published 13 December 2012. © 2012 by the authors; licensee MDPI, Basel, Switzerland.
Observations of quantum systems carried out by finite observers who subsequently communicate their results using classical data structures can be described as “local operations, classical communication” (LOCC) observations. The implementation of LOCC observations by the Hamiltonian dynamics prescribed by minimal quantum mechanics is investigated. It is shown that LOCC observations cannot be described using decoherence considerations alone, but rather require the a priori stipulation of a positive operator-valued measure (POVM) about which communicating observers agree. It is also shown that the transfer of classical information from system to observer can be described in terms of system-observer entanglement, raising the possibility that an apparatus implementing an appropriate POVM can reveal the entangled system-observer states that implement LOCC observations.
Keywords: decoherence; einselection; emergence; entanglement; quantum-to-classical transition; virtual machines
1. Introduction
Suppose spatially-separated observers Alice and Bob each perform local measurements on a spatially-extended quantum system—for example, a pair of entangled qubits in an asymmetric Bell state—and afterwards communicate their experimental outcomes to each other. This “local operations, classical communication” (LOCC, e.g., [1] Chapter 12) scenario characterizes quantum key distribution, preparation of the initial states and subsequent observation of the final states of quantum computers, and practical laboratory investigations of spatially-extended quantum systems; indeed LOCC characterizes all situations in which two or more observers interact with a quantum system and then report their observations by encoding them into sharable classical data structures. Formal descriptions of LOCC scenarios generally specify the quantum system S with which the observers interact by explicitly specifying its quantum degrees of freedom and hence its Hilbert space ℋS; in addition, they typically explicitly specify the “prepared” quantum state with which the observers interact, for example by an expression such as “|ψ⟩ = (1/√2)(|0⟩A|1⟩B − |1⟩A|0⟩B)”, where |0⟩ and |1⟩ are basis vectors and “A” and “B” name Alice and Bob, respectively. The “local operations” are generally dealt with cursorily: Alice and Bob are said to measure spin or polarization, for example, with the details of the apparatus used to do so, if any are given, relegated to the Methods section. The “classical communication” between Alice and Bob is rarely discussed at all. Understanding LOCC in physical terms, however, requires not just understanding the quantum state being observed, but understanding both the “local operations” and the “classical communication” as physical processes.
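As a concrete sketch of the “local operations” half of this scenario, the following computes the outcome statistics Alice and Bob would obtain from local projective measurements on a shared Bell state. The specific state, basis labels and probabilities are illustrative assumptions, not drawn from the text:

```python
import numpy as np

# Single-qubit basis vectors |0> and |1>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A shared Bell state: |psi> = (|0>_A |1>_B - |1>_A |0>_B) / sqrt(2).
psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Alice's local projectors; Bob's side is untouched (identity).
I2 = np.eye(2)
P0_A = np.kron(np.outer(ket0, ket0), I2)
P1_A = np.kron(np.outer(ket1, ket1), I2)

# Each of Alice's local outcomes is equiprobable: P(k) = <psi| P_k |psi>.
print(float(psi @ P0_A @ psi), float(psi @ P1_A @ psi))  # 0.5 0.5

# The joint statistics are perfectly anticorrelated, but Alice and Bob
# can discover this only by classically communicating their records.
P_00 = np.kron(np.outer(ket0, ket0), np.outer(ket0, ket0))
P_01 = np.kron(np.outer(ket0, ket0), np.outer(ket1, ket1))
print(float(psi @ P_00 @ psi), float(psi @ P_01 @ psi))  # 0.0 0.5
```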
Let us begin with classical communication. Any finite message from Bob to Alice can be represented as a finite sequence of classical bits. It must, moreover, be encoded in some physical medium [2]—notes in a logbook, for example, or an email message, or coherent vibrations of air molecules. Bob encodes the message and Alice receives it by performing local operations on the physical medium employed for transmission. Successful transmission requires, therefore, that Alice monitor the medium for messages, and that Alice and Bob share an encoding/decoding scheme—a data structure with associated methods—as well as a semantics for that data structure that renders the message meaningful. These requirements are independent of whether Alice and Bob are human beings or non-human information-processing machines; two computers attached to the internet must share a communication protocol (e.g., TCP/IP) and must share assumptions about both the syntax and semantics of the data structures employed to encode transmitted messages.
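A minimal illustration of this point (the message and the one-byte-per-character codec are hypothetical choices): the transmitted bit string fixes neither the encoding scheme nor its semantics, so Alice recovers Bob’s message only because she already shares his codec.

```python
# Bob encodes an outcome report using a scheme agreed upon in advance:
# here, ASCII text flattened to a string of classical bits.
message = "spin: up"
bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))

# Alice decodes the same bits with the shared scheme. Without it, the
# bit string is just physics: it carries no preferred interpretation.
decoded = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("ascii")
assert decoded == message
```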
The local operations performed by Alice and Bob have, therefore, two distinct targets. Alice and Bob must each operate locally on S to extract classical information, and must each operate locally on their shared communication medium to either encode (Bob) or decode (Alice, and Bob if he checks his encoding) the classical information contained in the transmitted message. Most discussions of LOCC acknowledge that the interactions with S involve quantum measurement; most neglect the fact that, if quantum theory is assumed to be universal, the encoding and decoding steps also involve interactions with a quantum system: the physical medium of communication. Most, moreover, neglect the fact that Alice and Bob are themselves quantum systems. The purpose of the present paper is to examine LOCC from a perspective that acknowledges these facts; it is, therefore, to ask what is required to implement LOCC in a quantum world.
The next section, “Preliminaries”, discusses the fundamental assumption that quantum theory is universal and two of its consequences: That the extraction of classical information from quantum systems can be represented by the actions of positive operator-valued measures (POVMs, reviewed by [1] Chapter 2), and that observers must deploy POVMs to identify quantum systems of interest. The third section, “Decompositional equivalence and its consequences”, discusses a second fundamental assumption: that the universe as a whole exhibits a symmetry, decompositional equivalence, that allows alternative tensor-product structures (TPSs) for a single Hilbert space [3]. Like the assumption of universality, decompositional equivalence is an empirical assumption; if it is true, physical dynamics cannot depend in any way on TPSs that may be specified as defining “systems” of interest. In a universe satisfying decompositional equivalence, system-environment decoherence, which depends for its definition on the specification of a TPS, can have no physical consequences, and hence can neither create nor alter physical encodings of classical information. Observers cannot, therefore, take for granted physical encodings by their shared environment of either the boundaries or the pointer states of specific systems of interest, as is proposed in the “environment as witness” formulation of decoherence theory [4,5] and quantum Darwinism [6,7]. The fourth section, “Decoherence as semantics”, shows that decoherence can be represented as the action of a POVM, and hence as being a semantic or model-theoretic mapping from physical systems to classical data structures, and in particular to classical virtual machines. It is shown that the semantic consistency conditions for constructing such mappings are those familiar from the consistent histories formulation of quantum measurement (e.g., [8]). The fifth section, “Observation as entanglement”, returns to the question of how multiple observers in a LOCC setting identify and determine the state of a single system and then communicate their results. It shows that LOCC requires an infinite regress of assumptions regarding prior classical communications between the observers involved. In the absence of further assumptions, therefore, observations under LOCC conditions cannot be carried out in a universe characterized by both universal quantum theory and decompositional equivalence. It is then shown that the classical correlation between the states of an observer and an observed system produced by the action of a POVM would result from observer-system entanglement, and that such a correlation would be perfect if the entanglement was monogamous. Hence observation mediated by a POVM can be regarded as an alternative formal description of quantum entanglement; the transfer of classical information such entanglement enables is independent of system boundaries and relative, for any third party, to the specification of an appropriate basis for the joint system-observer state. While this result renders the explanation of classical communication in terms of an observer-independent physical process of “emergence” unattainable, it offers the possibility that an apparatus implementing an appropriate POVM could reveal the specific system-observer entanglements that implement the observation of classical outcomes. 
The paper concludes that the appearance of shared public classicality in the physical world is fully analogous to the appearance of algorithm instantiation in classical computer science: Both are cases of a shared jointly stipulated semantic interpretation.
2. Preliminaries
2.1. Assumption: Quantum Theory is Universal
The first and most fundamental assumption made here is that quantum theory is universal: All physical systems are quantum systems. The universe U, in particular, is a physical system; it is therefore a quantum system, and can be characterized by a Hilbert space ℋU comprising a collection of quantum degrees of freedom. The universe is moreover, as assumed by Everett [9], not part of anything else; it is an isolated quantum system. The evolution of the universal quantum state |ΨU⟩, therefore, satisfies a Schrödinger equation iħ ∂|ΨU⟩/∂t = HU|ΨU⟩, where HU is a deterministic universal Hamiltonian. This assumption rules out any objective non-unitary “collapse” of |ΨU⟩; it amounts to the adoption of what Landsman [10] calls “stance 1” regarding quantum theory, a stance that is realist about quantum states, and therefore demands an explanation for the appearance of classicality. All available experimental evidence is consistent with this universality assumption [11]. Alice, Bob, the systems that they observe and the systems that they employ to encode classical communications are, on this assumption, all collections of quantum degrees of freedom evolving under the action of the universal Hamiltonian HU.
The assumption that all physical systems are quantum systems clearly does not entail that all descriptions of physical systems are quantum-theoretical descriptions. Some descriptions are classical; others are quantum-theoretical. Classical descriptions of physical systems are in some cases (e.g., for billiard balls) sufficient for practical purposes, while in other cases (e.g., for electrons) they are not. The observable world appears classical to human observers employing their unaided senses; this appearance will be referred to as “observational classicality”. Human observers, moreover, record and communicate their observations using classical data structures, as do all artificial observers thus far constructed by humans. Hence all descriptions of physical, i.e., quantum systems, whether they are classical descriptions or quantum-theoretical descriptions, are both recorded for future access and communicated using classical data structures, regardless of whether the observers involved are humans or artifacts. It is this classicality of recorded descriptions that both motivates and requires LOCC as a characterization of the interaction of multiple observers with a quantum, i.e., physical system.
Under the assumption of universality, understanding the requirements of LOCC in the case of either Alice or Bob individually clearly requires understanding quantum measurement, and in particular understanding whether observational classicality can be supposed to “emerge” from the dynamics specified by HU. If the observed system S is regarded as a quantum information processor, this question of observational classicality becomes the question of how the behavior of S can be interpreted as computation. How, for example, do the unitary transformations of the quantum state of a quantum Turing machine (QTM, [12]) or Hamiltonian oracle [13] implement a computation on a classical data structure encoded by the system’s initial state? In what sense do the events that occur between measurements in a measurement-based quantum computer [14] implement computation? That these questions are both foundational to quantum computing and non-trivial has been emphasized by Aaronson [15].
What the LOCC concept adds to the quantum measurement problem as traditionally presented (e.g., [10,16]) is the requirement that two observers interact with the same system, and then moreover interact, via a communicated message, with each other. Understanding LOCC, therefore, requires understanding measurement as both a redundant or repeatable process and as a social process; with the exception of some discussions of Wigner’s friend, neither of these aspects of LOCC is considered in traditional accounts of single-observer measurement. It will be shown below, in Section 3, Section 4 and Section 5 respectively, that the theoretical issues raised by these additional considerations are non-trivial.
2.2. Consequence: Measurements are Actions by POVMs
If quantum theory is universal, measurements can be represented by POVMs. A POVM is a collection {Ei} of positive-semidefinite Hilbert-space automorphisms that have been normalized so as to sum to unity; POVMs generalize traditional projective measurements (e.g., [17]) by dropping the requirement of orthogonality and hence the requirement that all elements of a measurement project onto the same Hilbert-space basis. If {Ei} is a POVM representing a measurement of the state of some quantum system S, then each component Ej is a Hilbert-space automorphism on ℋS, i.e., Ej : ℋS → ℋS; one can also write Ej|S⟩ = |S′⟩, where in general |S′⟩ ≠ |S⟩. Given the assumption of universality, it is clear that any such automorphism must be implemented by the unitary physical propagator e^(−(i/ħ)HU t) acting on the universal Hilbert space ℋU, and hence on |S⟩ as a collection of components of some universal state |ΨU⟩. Hence a measurement can be thought of as a physical action by a POVM, as emphasized for example by Fuchs’ [18] depiction of a POVM as an observer’s prosthetic hand.
Treating a POVM as a collection of Hilbert-space automorphisms does not, however, capture the sense in which observations extract classical information from quantum systems. To see how POVMs model measurement, it is useful to return to the case of a POVM with mutually orthogonal components, i.e., a von Neumann projection {Πi} defined on a Hilbert space ℋ. Each component Πj of a von Neumann projection {Πi} projects any state |ψ⟩ ∈ ℋ onto a basis vector |j⟩ of ℋ. If the set {|j⟩} of images of the components of {Πi} is complete in the sense of spanning ℋ, one can write |ψ⟩ = Σj cj|j⟩ for states |ψ⟩ ∈ ℋ. In this case a general Hermitian observable M can be written M = Σj αjΠj, where αj is the jth possible observable outcome of M acting on |ψ⟩. Hence from an observer’s point of view, what a projection {Πi} produces is not just a new state vector, but a real outcome value αj; {Πi} is not just a Hilbert-space automorphism, but is also a mapping from ℋ to the set of real outcome values of some observable of interest.
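A small numerical sketch of this construction (the state, projectors and outcome values are arbitrary illustrative choices): the expectation value assembled from outcome values and projection probabilities agrees with the matrix element ⟨ψ|M|ψ⟩.

```python
import numpy as np

# Von Neumann projection onto the computational basis of one qubit,
# with illustrative outcome values alpha_0 = +1 and alpha_1 = -1.
alphas = np.array([1.0, -1.0])
projectors = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# The Hermitian observable M = sum_j alpha_j * Pi_j.
M = sum(a * P for a, P in zip(alphas, projectors))

psi = np.array([0.6, 0.8])  # a normalized state
probs = np.array([float(psi @ P @ psi) for P in projectors])

# Outcome statistics reproduce the quantum expectation value.
assert np.isclose(probs @ alphas, psi @ M @ psi)
print(probs, probs @ alphas)  # [0.36 0.64] -0.28
```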
A general POVM can be thought of as a mapping from ℋ to a set of real outcome values with two caveats. First, the components of a general POVM are not necessarily orthogonal and hence do not, in general, all project to the same basis. Second, any finite observer can explicitly represent, and hence can physically encode in a classical memory or communication medium, values with at most some finite number N of bits. Hence from an observer’s point of view, a component of a general POVM is not just an automorphism on ℋS; it is also a mapping from ℋS to B^N, the set of binary codes of length N, relative to a choice from {basis}S, the set of bases of ℋS [3]. Indeed, any collection of mappings for which the probabilities P(αj) of obtaining real outcome values αj sum to unity, and for which each of the components is implementable by the unitary physical propagator acting on the universal Hilbert space ℋU, must be positive semi-definite (to yield real outcome values), normalized (to yield well-defined probabilities) and be a collection of Hilbert-space automorphisms (to be implementable by the unitary propagator); hence such a collection must be a POVM. The POVM formalism thus represents the extraction of classical information from quantum systems in the only way that it can be represented while maintaining consistency with the universality assumption.
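The following sketch makes these requirements concrete for an unsharp two-outcome qubit POVM; the particular elements, the sharpness parameter and the N = 8 bit outcome codes are illustrative assumptions:

```python
import numpy as np

# An unsharp (non-projective) qubit POVM: two noisy detectors.
eta = 0.8  # detector sharpness, an arbitrary illustrative parameter
E0 = eta * np.diag([1.0, 0.0]) + (1 - eta) / 2 * np.eye(2)
E1 = eta * np.diag([0.0, 1.0]) + (1 - eta) / 2 * np.eye(2)

# The defining POVM properties: positivity and normalization.
assert np.all(np.linalg.eigvalsh(E0) >= 0)
assert np.all(np.linalg.eigvalsh(E1) >= 0)
assert np.allclose(E0 + E1, np.eye(2))

# Outcome probabilities for a state |psi>: P(k) = <psi| E_k |psi>.
psi = np.array([np.cos(0.3), np.sin(0.3)])
probs = [float(psi @ E @ psi) for E in (E0, E1)]
print(probs, sum(probs))  # the probabilities sum to 1

# A finite observer records each outcome as a finite binary code,
# here a hypothetical N = 8 bit label per outcome.
codes = {k: format(k, "08b") for k in range(2)}
print(codes)  # {0: '00000000', 1: '00000001'}
```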
The assumption that all measurements can be represented by POVMs clearly does not entail that an observer can explicitly write down the components of every POVM that he or she might deploy in the course of interacting with the world. Doing so in any particular case would require both a complete specification of the outcome values obtainable with that POVM and a complete specification of the Hilbert space upon which it acts, or as discussed below, a complete specification of the inverse image in ℋU of its set of obtainable outcome values. Such a specification would, for any particular POVM and hence any particular Hilbert space, require scientific investigation of the physical system represented by that Hilbert space to be complete. Classical theorems [19,20] restricting the completeness of system identification strongly suggest that such completeness is infeasible in principle. Hence explicitly-specified POVMs can at best be viewed as predictively-adequate approximations based on experimental investigations carried out thus far; in practice such POVMs are available only for systems with small numbers of (known or stipulated) degrees of freedom.
2.3. Consequence: Observers must Identify the Systems They Observe
When a new graduate student enters a laboratory, he or she is introduced to the various items of apparatus that the laboratory employs. The reason for this ritual is obvious: The student cannot be expected to reliably report the state of a particular apparatus if he or she cannot identify that apparatus. Traditional discussions of quantum measurement take the ability of observers to identify items of apparatus for granted. For example, Ollivier, Poulin and Zurek define “objectivity” for physical systems operationally as follows:
“A property of a physical system is objective when it is:
simultaneously accessible to many observers,
who are able to find out what it is without prior knowledge about the system of interest, and
who can arrive at a consensus about it without prior agreement.”
(p. 1 of [4]; p. 3 of [5]) Nothing is said in this definition, or in the surrounding discussion [4,5], about how observers are able to “access” a physical system “without prior knowledge” of such state variables as its location, size or shape, and without “prior agreement” about which item in their shared environment constitutes the system of interest. To find the identification of physical systems by observers treated explicitly, one must look to cybernetics, where unique identification of even classical finite-state machines (FSMs) by finite sequences of finite observations is shown to be impossible in principle [19,20], or to the cognitive neuroscience of perception, where the identification in practice of individual systems over extended periods of time is recognized as a computationally-intensive heuristic process [21,22,23].
In practice, observers identify items of laboratory apparatus by finite sets of classically-specified criteria: location, size, shape, color, overall appearance, laboratory-affixed labels, brand name. These criteria are encodable as finite binary strings. If quantum theory is universal, items of laboratory apparatus are quantum systems, and hence are characterizable by Hilbert spaces comprising their quantum degrees of freedom. Observing a laboratory apparatus, therefore, requires deploying an operator that maps a collection of quantum degrees of freedom to a finite set of finite binary strings; by the reasoning above, such operators can only be POVMs. Identifying a system of interest clearly requires observing it; hence an observer can only identify a system of interest by deploying a POVM. Call POVMs deployed to identify systems of interest “system-identifying” POVMs. For simplicity, a system-identifying POVM can be regarded as yielding as output just the conventionalized name of the system it identifies, e.g., “S” or “the Canberra® Ge(Li) detector” [3].
The formal definition of system-identifying POVMs is complicated by two related issues. First, the vast majority of systems identified by human observers are characterized, like laboratory apparatus are characterized, not by possible outcome values of their quantum degrees of freedom, but by possible outcome values of bulk degrees of freedom such as macroscopic size or shape. The exceptions—the systems that those who reject the universality of quantum theory consider to be the only bona fide “quantum systems”—are systems defined by particular values of quantum degrees of freedom, as electrons or the Higgs boson are currently defined within the Standard Model, or are systems defined by certain observable behaviors of macroscopic apparatus, as electrons were defined in the late 19th century. The second complication is that observers, as emphasized by Zurek [24,25] and others, typically interact not with systems of interest themselves, but with their surrounding environments. While in the case of macroscopic systems such as laboratory apparatus this environment may be treated using a straightforward approximation, for example as the ambient photon field, in the case of either microscopic or very distant systems it is complicated by the inclusion of laboratory apparatus; our interactions with presumptive Higgs bosons, for example, are via an environment containing the ATLAS [26] or CMS [27] detectors. These complicating issues are not significantly simplified by considering non-human observers; the components of such observers that record classical records are, with the exception of such things as blocks of plastic that record the passage of cosmic rays, almost as distant from the microscopic events to which their records refer as are their human minders.
In recognition of the role of the intervening environment in the observation and hence identification of systems of interest, it has been proposed that system-identifying POVMs be defined, in general, over either the physically-implemented information channel with which an observer interacts (i.e., the observer’s environment) [3] or over the universe U as a whole [28]. The latter definition is adopted here, as it simplifies the description of LOCC by allowing two or more observers to be regarded as deploying the same system-identifying POVM. Defining system-identifying POVMs over all of U acknowledges, moreover, the actual epistemic position of any finite observer. Observations are information-transferring actions by the observer’s environment on the observer. Without a complete, deterministic theory of the behavior of U, such actions cannot be predicted precisely; without sufficient recording capacity to record the state of every degree of freedom of U at the instant of observation, such actions cannot be replicated precisely. Any finite observer can, therefore, at best predict or retrodict only approximately and heuristically what degrees of freedom of U might be causally responsible for any particular episode of observation. An observer can, however, be sure that such degrees of freedom are within U, so defining system-identifying POVMs over U can be viewed as an exercise of epistemic conservatism.
Defining system-identifying POVMs over U as a whole does not render observations nonlocal. Any finite observer must expend finite energy to record the outcomes obtained by deploying a POVM; hence any observation requires finite time. Any finite observer can, moreover, deploy a POVM for only a finite time. A finite observer can, therefore, regard a system-identifying POVM—or any POVM—as extracting classical information from at most a local volume with a horizon at cΔt, where Δt is the period of observation. Quantum information may originate outside this volume by entanglement, but such entanglement is undetectable in principle by the observer. Alice can only regard classical information extracted from a quantum system employed as a communication channel as a message from Bob if Bob is in her light-cone; LOCC requires timelike, not spacelike, separation of observers.
Defining system-identifying POVMs over U as a whole does not, moreover, resolve the question of how such POVMs—or how any POVMs—can yield outcome values for bulk degrees of freedom such as macroscopic size or shape. This question is, clearly, the question of quantum measurement itself; in particular, it is the question of the “emergence of classicality” that is taken up in Section 4 below.
3. Decompositional Equivalence and Its Consequences
3.1. Assumption: Our Universe Exhibits Decompositional Equivalence
A fundamental requirement of observational objectivity, and hence of science as practiced, is that reality is independent of the language chosen to describe it. This fundamental assumption that reality is independent of the descriptive terms and hence the semantics chosen by observers—in particular, human observers—underlies the assumption in scientific practice that any arbitrary collection of physical degrees of freedom can be stipulated to be a “system of interest” and named with a symbol such as “S” without this choice of language affecting either fundamental physical laws or their outcomes as expressed by the dynamical behavior of the degrees of freedom contained within S. It similarly underlies the assumption that, given the technological means, an experimental apparatus to investigate the behavior of S can be designed and constructed without altering either fundamental physical laws or the dynamical behavior of the degrees of freedom contained within S. These assumptions operate prior to apparatus-dependent experimental interventions into the behavior of S, and hence prior to observations of S, both logically and, in the course of practical investigations of microscopic degrees of freedom by means of macroscopic apparatus, temporally.
This fundamental assumption that reality is independent of semantics can be generalized to state an assumed dynamical symmetry: The universal dynamics HU is assumed to be independent of, and hence symmetric under arbitrary modifications of, boundaries drawn in ℋU by specifications of tensor product structures. Call this symmetry decompositional equivalence [3]. Stated formally, decompositional equivalence is the assumption that if a TPS S ⊗ E = S′ ⊗ E′ = U, then the dynamics HU = HS + HE + HS−E = HS′ + HE′ + HS′−E′, where S and S′ are arbitrarily chosen collections of physical degrees of freedom, E and E′ are their respective “environments” and HS−E and HS′−E′ are, respectively, the S − E and S′ − E′ interaction Hamiltonians. Such equivalence of TPSs of ℋU can be alternatively expressed in terms of the linearity of HU: If HU = ∑ij Hij, where the indices i and j range without restriction over all quantum degrees of freedom within ℋU, decompositional equivalence is the assumption that the interaction matrix elements ⟨i|Hij|j⟩ do not depend on the labels assigned to collections of degrees of freedom by specifications of TPSs. Decompositional equivalence is thus consistent with the general philosophical position of microphysicalism (for a recent review, see [29]), but involves no claims about explanatory reduction, and indeed no claims about explanation at all; it requires only that emergent properties of composite objects exactly supervene, as a matter of physical fact, on the fundamental interactions of the microscale components of those objects.
As is the assumption that quantum theory is universal, the assumption that the universe satisfies decompositional equivalence is an empirical assumption. Its empirical content is most obvious in its formulation as the assumption that interaction matrix elements ⟨i|Hij|j⟩ do not depend on specifications of TPSs. This is an assumption that the pairwise interaction Hamiltonians Hij are not just independent of where and when the degrees of freedom labeled by i and j interact, but are also independent of any other classical information that might be included in the specification of a reference frame from which the interaction of i and j might be observed. As such, it is similar in spirit to Tegmark’s “External Reality Hypothesis (ERH)” that “there exists an external physical reality completely independent of us humans” ([30] p. 101). If taken literally, however, the ERH violates energy conservation, as it allows human beings to behave arbitrarily without affecting “external physical reality” and vice-versa. The assumption of decompositional equivalence, on the other hand, does not involve, entail, or allow decoupling of observers or any other systems from their environments; any evidence that energy is not conserved, or evidence that energy is conserved but not additive, would be evidence that decompositional equivalence is not satisfied in our universe. Were our universe to fail in fact to satisfy decompositional equivalence, any shift in specified system boundaries—any change in the TPS of ℋU—could be expected to alter fundamental physical laws or their dynamical outcomes; in such a universe, the notions of “fundamental physical laws” and “well-defined dynamics” would be effectively meaningless. It is, therefore, assumed in what follows that decompositional equivalence is in fact satisfied in our universe U, and hence that the dynamics HU is independent of system boundaries.
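A toy numerical sketch of this symmetry, under simplifying assumptions not in the text (degrees of freedom are modeled as indices of a fixed coupling matrix rather than genuine tensor factors, and the couplings are random stand-ins for the matrix elements Hij): however the system-environment boundary is drawn, the pieces HS, HE and HS−E always reassemble into the same total HU.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # a toy "universe" of six degrees of freedom

# Pairwise couplings H_ij, fixed once; they know nothing about any
# system/environment split.
H_U = rng.normal(size=(n, n))
H_U = (H_U + H_U.T) / 2  # Hermitian (here: real symmetric)

def decompose(H, system):
    """Split H into H_S + H_E + H_{S-E} for a chosen subset of indices."""
    S = sorted(system)
    E = sorted(set(range(len(H))) - set(system))
    H_S = np.zeros_like(H); H_S[np.ix_(S, S)] = H[np.ix_(S, S)]
    H_E = np.zeros_like(H); H_E[np.ix_(E, E)] = H[np.ix_(E, E)]
    return H_S, H_E, H - H_S - H_E  # the remainder is H_{S-E}

# Two different choices of boundary (two different "TPSs"):
for system in [{0, 1}, {0, 3, 5}]:
    H_S, H_E, H_SE = decompose(H_U, system)
    assert np.allclose(H_S + H_E + H_SE, H_U)  # same total dynamics
    print(f"{sorted(system)}: ||H_S-E|| = {np.linalg.norm(H_SE):.3f}")
```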
3.2. Consequence: System-Environment Decoherence can have No Physical Consequences
The assumption of decompositional equivalence has immediate, but largely unremarked, consequences in two areas: The characterization of system-environment decoherence and the characterization of system identification by observers. Let us consider decoherence first. The usual understanding of system-environment decoherence (e.g., [24,25,31,32]) is that interactions between a system S and its environment E, where S ⊗ E = U is a TPS of ℋU, select eigenstates of the S − E interaction HS−E. Such environmentally-mediated superselection or einselection [33,34] assures that observations of S that are mediated by information transfer through E will reveal eigenstates of HS−E; in the canonical example, observations of macroscopic objects mediated by information transfer through the ambient visible-spectrum photon field reveal eigenstates of position. From this perspective, it is the quantum mechanism of einselection that underlies the classical notion that the “environment” of a system—whether this refers to the ambient environment or to an experimental apparatus—objectively encodes the physical state of the system, where “objectively” has the sense given in the Ollivier–Poulin–Zurek definition [4,5] quoted in Section 2.3.
Two features of this standard account of decoherence deserve emphasis. First, the idea that the environment einselects particular eigenstates of S in an observer-independent way—that environmental einselection depends only on HS−E, where both S and E are specified completely independently of observers—allows decoherence to mimic “collapse” as a mechanism by which the world prepares or creates classical information about particular systems that observers can then detect. In this picture, as in the traditional Copenhagen picture, observers have nothing to do with what “systems” are available to observe: The world—in the decoherence picture, the environment—reveals some systems as “classical” and not others. The sense of “objectivity” defined by Ollivier, Poulin and Zurek [4,5] depends critically on this assumption; without it, the idea that observers can approach the world “without prior knowledge” of the systems it contains becomes uninterpretable. The second thing to note is that the formal mechanism of “tracing out the environment” in decoherence calculations [24,25,31,32] corresponds physically to an assumption that environmental degrees of freedom are irrelevant to the system-observer interaction, i.e., to an assumption that the physical interaction HS−O, where O is the observer, is independent of E. This assumption straightforwardly conflicts with the idea that observation—the S − O interaction—is mediated by E. This conflict between the formalism of decoherence and its model theory suggests that the trace operation is at best an approximate mathematical representation of the physics of decoherence.
By definition, einselection depends on the Hamiltonian HS−E, which is defined at the boundary, in Hilbert space, between S and E [33,34]. In a universe that satisfies decompositional equivalence, this boundary can be shifted arbitrarily without affecting the interactions between quantum degrees of freedom, i.e., without affecting the interaction Hij, and hence without affecting the matrix element ⟨i|Hij|j⟩, between any pair of degrees of freedom i and j within U. An arbitrary boundary shift, in other words, has no physical consequences. In particular, a boundary shift that transforms S ⊗ E into an alternative TPS S′ ⊗ E′ has no physical consequences for the values of matrix elements ⟨i|Hij|j⟩ where i and j are degrees of freedom within the intersection E ∩ E′, and hence has no physical consequences for states of E ∩ E′ or for the classical information that such states encode. The encodings within E ∩ E′ of arbitrary states of S and S′, and hence of einselected pointer states of S and S′ are, therefore, entirely independent of the boundaries of these systems, and hence entirely independent of the Hamiltonians HS−E and HS′−E′ defined at those boundaries. The encoding of information about S in E is, in other words, entirely a result of the action of HU = ∑ij Hij, and is entirely independent of specified system boundaries or “emergent” system-environment interactions definable at such specified boundaries.
It has been proposed, under the rubric of “quantum Darwinism” [6,7], that environmental “witnessing” of the pointer states of particular macroscopic systems by einselection explains the observer-independent “emergence into classicality” of such systems, and hence explains the observer-independent existence of the “classical world” of ordinary human experience (see also [10,31,32]). In a universe satisfying decompositional equivalence, the einselection of pointer states as eigenstates of system-environment interactions cannot, as shown above, be a physical mechanism, and hence cannot underpin an observer-independent “objective” [4,5] encoding of classical information about some particular systems at the expense of classical information about the states of other possible systems in such a universe. In a universe satisfying decompositional equivalence, the shared environment encodes the states of all possible embedded systems, or none at all. The notion that environmental witnessing and quantum Darwinism explain the “emergence of classicality” collapses in a universe satisfying decompositional equivalence, as both require that einselection physically and observer-independently encode the states of some but not all “systems” in the state of E [28].
The physics of continuous fluid flow provides a simple example of decompositional equivalence and its consequences for einselection. It is commonplace to describe fluid flow in terms of deformable voxels, stipulated to be cubic at some initial time t0, that contain some particular collection of molecules. The stipulation of such a voxel has no effect on the intermolecular interactions between the molecules composing the fluid, whether these molecules are within, outside, or on opposite sides of the boundary of the voxel. Stipulation of a voxel boundary immediately defines, however, a Hamiltonian Hin−out that describes the bulk interaction between the molecules within the voxel and those outside. This bulk interaction can be viewed as decohering the collective quantum state of the molecules within the voxel, with a decoherence time at room temperature and pressure of substantially less than 10−20 s [35], and as einselecting that collective state as an eigenstate of position within the fluid at all subsequent times. Such einselection prevents the voxel’s wavefunction from spreading into a macroscopically-extended spatial superposition, just as decoherence and einselection by interplanetary dust, gases and radiation prevent the wavefunction of Hyperion from doing so [36]. Does the state of the fluid outside the stipulated voxel objectively encode the position of the continuously-deforming voxel boundary at which this einselection takes place? Could observers with no prior knowledge of the stipulated voxel boundary determine its position by observing the state of the fluid? Obviously they could not.
The situation with bulk material objects appears, intuitively, to be different from the fluid-flow situation just described. When viewed in terms of pairwise interactions between the quantum degrees of freedom of individual atoms, however, the intuitive difference vanishes. Consider a uniform sphere of Pb embedded in a solid mass of Plexiglas® plastic. The interatomic interactions between Pb, C, O and H atoms are completely independent of whether the Pb sphere, the Pb sphere together with a surrounding spherical shell of plastic, a voxel of Pb entirely within the Pb sphere, or a voxel containing only plastic is considered the “system of interest.” The boundary of the system stipulated, in each of these cases, is the site of action of a Hamiltonian Hin−out that describes the bulk interaction between the atoms within the stipulated boundary and those outside; the action of this Hamiltonian einselects positional eigenstates of the collective quantum state of the atoms inside the boundary just as it does in the case of a voxel boundary in a fluid. Observers of the states of some arbitrary sample of the atoms in the plastic part of this combined system would, however, be no more capable of determining the site of a stipulated boundary than observers of some arbitrary sample of the fluid molecules in the previous example.
As a final example, consider observers of the experimental apparatus employed by Brune et al. [37] to follow the decoherence of single Rb atoms within an ion trap. Would an observer unfamiliar with the design or purpose of this apparatus, for example a new graduate student, who observed the behavior of its externally-accessible degrees of freedom—either quantum degrees of freedom or bulk macroscopic degrees of freedom such as pointer positions or readouts from digital displays—be capable of inferring the boundary between the trapped Rb atoms and the apparatus itself, including the magnetic and various electromagnetic fields it generates? Clearly not. The boundary between the quantum system comprising the trapped Rb atoms and the quantum system comprising the internal radiative degrees of freedom is stipulated by theory, and this theory must be understood to interpret the behavior of the apparatus as a measurement of decoherence time. Observers of such an apparatus, in other words, must have prior knowledge of the system they are observing and must have prior agreements about what the bulk macroscopic states of the system indicate—about what the characters displayed on the readouts mean, for example—to comprehend the operation of the apparatus. The criteria for “objectivity” offered by Ollivier, Poulin and Zurek [4,5] and quoted in Section 2.3 above fail utterly in this case, just as they do for the “objectivity” of voxel boundaries in fluids or the intuitively “obvious” boundary of a Pb sphere embedded in plastic. As in the previous examples, what counts as the boundary of the “system of interest” contained within an ion trap is established by an agreed convention among the observers, one that can be changed arbitrarily without changing the physical dynamics occurring within the ion trap in any way.
If decoherence has no physical consequences for interaction matrix elements, it can have no physical consequences for entanglement. The total entanglement in a quantum universe satisfying decompositional equivalence is, therefore, strictly conserved. Measurements, in particular, cannot physically destroy entanglement, and hence cannot create von Neumann entropy. The universal state |ΨU⟩ can, in this case, be considered to be a pure quantum state with von Neumann entropy of zero at all times. This situation is in stark contrast to that of a universe in which decompositional equivalence is violated, i.e., a universe in which the dynamics do depend on system boundaries, either via a physical process of “wave-function collapse” driven by measurement or a physical and therefore ontological “emergence” of bounded systems driven by decoherence. In this latter kind of universe, entanglement is physically destroyed by decoherence and von Neumann entropy objectively increases. A countervailing physical process that creates entanglement, either between measurements or in regions of weak decoherence, and hence decreases von Neumann entropy must be postulated to prevent such a universe from solidifying into an objectively classical system, a kind of system that our universe demonstrably is not.
3.3. Consequence: Identification of Systems by Observers is Intrinsically Ambiguous
While they cannot, without violating decompositional equivalence, physically destroy entanglement, observations nonetheless have real-valued outcomes that can be recorded in classical data structures and reported by one observer to another using classical communication. If the “systems” that these outcome values describe cannot be assumed to be specified for observers by decoherence and environmental witnessing, they must be specified by observers themselves, by the deployment of system-identifying POVMs. It was argued in Section 2.3 above that both the role of the environment in mediating observations and the de facto epistemic position of finite observers support defining system-identifying POVMs not over the particular sets of quantum degrees of freedom—the particular Hilbert spaces and thus TPSs of U—corresponding to recordable outcome values, but over U as a whole. With the assumption of decompositional equivalence, this broad approach to defining POVMs becomes not just advisable but inescapable. If system boundaries can be shifted arbitrarily without physical consequences, they can be shifted arbitrarily without consequences for the recording of observed outcome values in physical media. Hence the outcome values recorded following deployment of a POVM must be independent of arbitrary shifts of the boundary within ℋU, and hence in the TPS of U, over which the POVM is defined. This can only be the case if the POVM is not defined over one component of a fixed TPS, but rather over all of ℋU.
Recall that any finite observer is restricted to a finite encoding of the outcomes obtained with any POVM; any POVM can be considered a mapping to binary codes of some finite length N. This condition can be met by composing an arbitrary POVM {Ei} with a nonlinear function f that rounds each obtained outcome value to an N-bit binary code:

(f ∘ Ei) : ℋU → B^N     (1)

for some finite resolution δ set by the code length N. Defining any POVM {Ei} over all of ℋU as in (1) renders the definition of “system” implicit: A system S is whatever returns finite outcome values when acted upon by some POVM composed with f. The detectable degrees of freedom of such a system are, at some time t, the degrees of freedom in the inverse images of the components Ek for which f returns a nonzero code at t.
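A sketch of such a finite-resolution composition (the value range, code length and binning behavior are illustrative assumptions): outcome values are rounded into N-bit codes, so outcomes closer together than the resolution become indistinguishable in the observer’s records.

```python
def to_code(value, n_bits=8, lo=0.0, hi=1.0):
    """Round a real outcome value in [lo, hi) to an N-bit binary code."""
    step = (hi - lo) / (1 << n_bits)  # the finite resolution delta
    level = min(int((value - lo) / step), (1 << n_bits) - 1)
    return format(level, f"0{n_bits}b")

# Two outcomes closer than the resolution receive the same code...
print(to_code(0.50000), to_code(0.50001))            # identical
# ...and become distinguishable only at higher resolution.
print(to_code(0.50000, n_bits=20), to_code(0.50001, n_bits=20))
```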
In general, many TPSs of ℋU will satisfy (1) for any given f; the collections of quantum degrees of freedom represented by the “system” components of these TPSs will be indistinguishable in principle by an observer deploying the composed POVM. Observations in any universe satisfying decompositional equivalence thus satisfy a symmetry, called “observable-dependent exchange symmetry” in [38]: Any two systems S and T for which a POVM returns identical sets of outcome values when composed with f can be exchanged arbitrarily without affecting observations carried out using that POVM. To borrow an example from [38], many distinct radioactive sources may appear identical to an observer equipped only with a Geiger counter. It is shown in [38] that all observational consequences of the no-cloning theorem, the Kochen–Specker theorem and Bell’s theorem follow from observable-dependent exchange symmetry. Decompositional equivalence is sufficient, therefore, for the universe to appear quantum-mechanical, not classical, to finite observers whose means of collecting classical information can be represented by POVMs.
By imposing observable-dependent exchange symmetry on observers, the assumption of decompositional equivalence removes the final sense in which observational classicality might be regarded as objective classicality: Two observers who record the same outcomes can no longer infer that their respective POVMs have detected the same collection of quantum degrees of freedom. As observable-dependent exchange symmetry applies, in principle, to all quantum systems, it applies not just to the “systems of interest” to which classically communicated outcome values refer, but to the physical media into which such outcome values are encoded. The “measurement problem” in the current framework is thus the problem of explaining not only how discrete outcome values are obtained from quantum systems, but how classical data structures encoding such values are implemented by the collections of quantum degrees of freedom that constitute communication channels, including the collections of quantum degrees of freedom that constitute the apparently-classical memories of observers. The measurement problem in this formulation is thus the full problem of understanding LOCC. This formulation of the measurement problem is similar to those encountered in the multiple worlds [9], multiple minds [39] or consistent histories [8] formulations of quantum theory, all of which assume purely unitary evolution; however, it rejects the implicit ontological assumption, common to these standard approaches, that “systems” and hence TPSs can be regarded as constants across “branches” or histories, and therefore rejects the assumption that “classical communication” can be taken for granted as being physically unproblematic.
4. Decoherence as Semantics
4.1. Decoherence as Implemented by a POVM
If decoherence is not a physical process by which the environment creates classical information for observers, what is it? It is suggested in [3], and shown in detail in [28], that decoherence can be self-consistently and without circularity viewed as a purely informational process, a model-theoretic or semantic mapping from quantum states to classical information. It is, therefore, reasonable to think of decoherence as implemented by a POVM. To see this, it is useful to reconceptualize observation not as the collection by observers of pre-existing classical information, but as a dynamical outcome of the continuous action by the environment on the physical degrees of freedom composing the observer. If an arbitrary system S interacts with its environment E via a Hamiltonian HS−E, a POVM {Ek} can be defined as a mapping:

Ek = (∑i Hik) / (∑j ∑i Hij)     (2)

where i labels degrees of freedom of S and k and j label degrees of freedom of E. This POVM maps each degree of freedom of E to the real normalized sum of its matrix elements, and hence to its total coupling, with the degrees of freedom of S, and hence naturally represents the encoding of the state of S in the state of E. It thus takes the slogan “decoherence is continuous measurement by the environment” literally.
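A toy implementation of this mapping, following the verbal description above (the coupling matrix is a random, positive stand-in for the actual S−E matrix elements): each environmental degree of freedom is assigned its total coupling to S, normalized over the whole environment.

```python
import numpy as np

rng = np.random.default_rng(1)
n_S, n_E = 3, 5  # toy numbers of system and environment degrees of freedom

# Toy S-E matrix elements H_ik (i in S, k in E), taken positive here so
# that the normalized sums behave like a probability distribution.
H_SE = rng.uniform(0.0, 1.0, size=(n_S, n_E))

# Each environmental degree of freedom k is mapped to its normalized
# total coupling with S (cf. Equation (2)).
E = H_SE.sum(axis=0) / H_SE.sum()
assert np.isclose(E.sum(), 1.0)
print(E)  # the weight with which each k "witnesses" the state of S
```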
In a universe that satisfies decompositional equivalence, the meanings of “S” and “E” in (2) can be shifted arbitrarily provided S ⊗ E = U. Suppose an observer O deploys a POVM {Ei} defined over U, such that the inverse image Im−1Ek is outside O for all components Ek that yield detectable outcome values. In this case, O can be considered the “system” and ∪k(Im−1Ek) ⊂ U, where k ranges over these components, can be considered the “environment” in (2); the Hamiltonian Hik then characterizes the observer-environment interaction, and encodes classical information—the outcome values αk—about ∪k(Im−1Ek) into the state of O. Hence (2) provides a general definition of decoherence as the deployment of a POVM by an observer. For observers embedded in a relatively static environment, for which the total observer-environment interaction ∑ik Hik is nearly constant, (2) is reasonably interpreted as defining a single, continuously-deployed POVM. For observers embedded in highly-variable environments that nonetheless exhibit some periodicity, as most human observers are, it is reasonable to view (2) as describing the deployment of not one but a periodic sequence of POVMs, each normalized over a subset of the environmental degrees of freedom with which O interacts. As such a sequence must be finite for a finite observer, a finite observer can only be viewed as decohering his, her or its environment in a finite number of ways. Hence unlike the “environment as witness”, a finite observer as witness can physically encode the states of at most a finite number of distinct “systems”. Because the POVMs encoded by finite observers are limited in their resolution by the finite code length N, each of the distinct “systems” representable by a finite observer is in fact an equivalence class under observable-dependent exchange symmetry.
Using (2), any collection of Hilbert-subspace boundaries that enclose disjoint collections of degrees of freedom and hence define distinct “systems” Sµ can be represented by a collection of distinct POVMs, one per system. The detectable outcome values produced by these POVMs have non-overlapping inverse images; hence they all mutually commute. If these POVMs are regarded as all acting at each of a sequence of times ti, their outcomes at those times can be considered to be a sequence of real vectors, one vector per time step. These vectors form a consistent decoherent history of the Sµ at the ti, in the sense defined by Griffiths [8]. In a universe in which decoherence is an informational process, the number of such consistent decoherent histories and hence the number of “classical realms” [40] is limited only by the number of distinct sets of subspaces of ℋU, i.e., is combinatorial in the number of degrees of freedom of ℋU. Each of these histories, as a discrete time sequence of real vectors, can be regarded as a sequential sample of the state transitions of a classical finite state machine (FSM; [19]). As shown by Moore [20], no finite sequence of observations of an FSM is sufficient to uniquely identify the FSM; hence no finite sample of any decoherent history is sufficient to identify the TPS boundaries at which the POVMs contributing to the history are defined, confirming the observable-dependent exchange symmetry of observations in a universe satisfying decompositional equivalence.
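Moore’s point can be made concrete with a toy pair of machines (both hypothetical, chosen purely for illustration): the two FSMs below have different internal structure, yet emit identical output records for the first three steps, so no record that short can distinguish them.

```python
class MooreMachine:
    """An FSM whose output depends only on its current state."""
    def __init__(self, transitions, outputs, state=0):
        self.transitions, self.outputs, self.state = transitions, outputs, state

    def run(self, n_steps):
        trace = []
        for _ in range(n_steps):
            trace.append(self.outputs[self.state])
            self.state = self.transitions[self.state]
        return trace

# A 2-state cycle and a 4-state cycle with partially coinciding outputs.
A = MooreMachine(transitions=[1, 0], outputs=["x", "y"])
B = MooreMachine(transitions=[1, 2, 3, 0], outputs=["x", "y", "x", "z"])

print(A.run(3))  # ['x', 'y', 'x']
print(B.run(3))  # ['x', 'y', 'x'] -- indistinguishable from A so far
print(B.run(2))  # ['z', 'x'] -- continuing the record finally reveals
                 # an output ('z') that A can never emit
```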
4.2. Decoherence Defines a Virtual Machine
A classical virtual machine is an abstract machine representable by an algorithm executed on a classical Turing machine [41,42]; any executable item of software, from an operating system to a word processor or a numerical simulation, defines a virtual machine. An execution trace of a virtual machine V is the sequence of state transitions that V executes from some given input state. Any classical FSM is a classical virtual machine; hence any finite sequence of observations made with a POVM can be represented as an execution trace of a classical virtual machine. Considering that an arbitrary algorithm A can be employed to choose which of a collection of mutually-commuting POVMs to deploy at a given time point tk, it is clear that any consistent decoherent history of U can be represented as an execution trace of a classical virtual machine. Hence decoherence can, in general, be represented as a mapping of ℋU to the space of classical virtual machines, i.e., by a diagram such as Figure 1; as such a mapping takes quantum states to classical information, it can be represented as a POVM {Ei}. The requirement that this diagram commutes is the requirement that the action of the physical propagator e^(−(i/ħ)HU(tn+1−tn)) acting from tn to tn+1 is represented, by the mapping {Ei}, as a classical state transition from the nth to the (n + 1)th state of some virtual machine V. This commutativity requirement is fully equivalent to the commutativity requirement that defines consistency of observational histories of U (e.g., [8] Equation 10.20). Hence an evolution HU is consistent under a decoherence mapping {Ei} if it can be interpreted as an implementation of a classical virtual machine.
Figure 1. Semantic relationship between physical states of U and einselected virtual states of a virtual machine V implemented by U. Commutativity of this diagram assures that the decoherence mapping {Ei} is consistent.
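The commutativity requirement can be sketched numerically in a toy setting (everything here is an illustrative assumption: a single-qubit “universe”, a bit-flip propagator, and a one-bit virtual machine): decoding the propagated state must agree with taking a virtual-machine step on the decoded state.

```python
import numpy as np

# Toy "universe": one qubit. The physical propagator for one time step
# is a bit-flip unitary (an arbitrary illustrative choice).
U_step = np.array([[0.0, 1.0],
                   [1.0, 0.0]])

# The decoherence mapping {E_i}, modeled as a decoder from physical
# states to the classical states {0, 1} of a virtual machine V.
def decode(psi):
    return int(abs(psi[1]) ** 2 > 0.5)  # which basis state dominates

# The virtual machine V: a one-bit toggle.
def vm_step(s):
    return 1 - s

psi = np.array([1.0, 0.0])  # start in |0>
for _ in range(4):
    # Commutativity of Figure 1: decode after propagation equals
    # a VM transition after decoding.
    assert decode(U_step @ psi) == vm_step(decode(psi))
    psi = U_step @ psi
```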
The semantic relationship shown in Figure 1 is familiar: It is the relationship by which the behavior of any physical device is interpreted as computation, i.e., as execution of an algorithm characterized as an abstract virtual machine V. Any consistent decoherence mapping can, therefore, be regarded as an interpretation of the time evolution of U as classical computation. As the outcome values returned by any mapping {Ei} deployed by a finite observer must be collected within a finite time, any such mapping interprets only some local sample of the time evolution of U as computation. This perspective on decoherence is consistent with the cybernetic intuition—the intuition expressed by the Church–Turing thesis—that any classical dynamical process, and in particular any classical communicative process can be represented algorithmically.
5. Observation as Entanglement
5.1. Classical Communication is Regressive
We can now return to Alice and Bob, who each perform local observations of a quantum system and then exchange their results by classical communication. If the dynamics in U exhibit decompositional equivalence, Alice and Bob cannot rely on decoherence by their shared environment to uniquely identify the system of interest; instead they must each rely on their own POVM to identify it. Observable-dependent exchange symmetry prevents them, moreover, from determining by observation that they have identified the same system of interest; given (2), they cannot determine without observational access to all degrees of freedom of U whether they are deploying the same system-identifying POVM. Under these conditions, what is the meaning of LOCC?
The first thing to note is that any answer to this question that relies on prior agreements between Alice and Bob is straightforwardly regressive, and hence incapable of explaining anything. How, for example, do Alice and Bob know which POVM to deploy in order to perform a joint observation? How, in other words, do observers coordinate their observations, independently of whether they manage to observe a single, shared system? There are two possibilities, as illustrated in Figure 2. One involves classical communication: In line with the canonical scenario, some third party presents each observer with a qubit, and instructs them on how to observe it. The other, more in line with laboratory practice, involves Alice and Bob jointly observing the production of the pair, and then each transporting one of the qubits to a separate site for further observation. This second option reduces the problem of selecting the correct POVM to employ for the subsequent observations to the problem of resolving the joint system-identification ambiguity when the production of S is jointly observed.
From the perspective of the observers, the two processes illustrated in Figure 2 both involve the receipt of classical information at t1 and its use in directing observations at t2; they differ only in the source of the information received at t1. As noted earlier, however, the only means of obtaining classical information provided by quantum theory is the deployment of a POVM. The two processes differ, therefore, only in which POVM the observers deploy at t1: In (A) they each deploy a POVM that identifies and determines the state of the “classical source,” while in (B) they each deploy a POVM that identifies and determines the state of S. Hence the coordination question asked at t2 can also be asked at t1; even if the intrinsic ambiguity of observations with POVMs is ignored, the LOCC scenario cannot get off the ground without an agreement between the observers about which POVM to deploy at t1.
In order to reach an agreement about which POVMs to deploy at t1, the observers must exchange classical information. Each observer must, therefore, deploy a POVM that enables the acquisition of classical information from the other; suppose that Alice’s POVM for acquiring information from Bob and Bob’s POVM for acquiring information from Alice are deployed at some time t0. Clearly the same question can be asked at t0 as at t2 and t1, and clearly it cannot be answered by postulating yet another agreement, another classical communication, and another deployment of POVMs. The same kind of regress infects any simple joint assumption by Alice and Bob that they are observing the same system, an assumption that must be communicated to be effective. Any instance of measurement under LOCC conditions, in other words, requires the postulation of a priori classical communication between the observers, and hence requires that the observers themselves be regarded as classically objective a priori. Minimal quantum mechanics with decompositional equivalence provides no mechanism by which such a priori classical objectivity can be achieved; hence it does not support LOCC. At best, it supports the appearance of LOCC in cases in which observers agree to treat their observations as observations of the same system.
Figure 2. Two options for coordinating the selection of POVMs by Alice and Bob, respectively. (A) Alice and Bob receive POVM selection instructions from a classical source. (B) Alice and Bob jointly observe the production of S and agree that their selected POVMs identify it.
The regress of classical communications encountered here is equivalent to the regress of the von Neumann chain that motivates the adoption of “collapse” as a postulate of quantum mechanics [17]. Following Everett [9], the usual response to this regress in the context of minimal quantum mechanics is to postulate observation-induced “branching” between the multiple possible outcomes at each instant of observation, with the resulting “branches” being regarded as equally “actual” either as physically-realized classical universes (e.g., [43,44]) or as classical information-encoding states of a branching observer’s consciousness (e.g., [39]). In either case, inter-branch decoherence is regarded as conferring observational classicality, and the identity of observed systems across branches is taken for granted; hence decompositional equivalence and observable-dependent exchange symmetry are both violated by the standard Everettian picture. The concept of branching does not, moreover, explain how classical outcomes are encoded by the physical degrees of freedom that implement observers; it therefore leaves open the question of how the communication of classical information is possible.
5.2. Memory is Communication
The second thing to note regarding LOCC is that the physical implementation of any classical memory, whether it comprises words written on a page or neural excitation patterns in someone’s brain, is a quantum system. Physically accessing a classical memory requires extracting classical information from this quantum system, and hence requires deploying a POVM. Observable-dependent exchange symmetry assures that an observer cannot be confident that the physical degrees of freedom accessed with a “memory-accessing” POVM are the same physical degrees of freedom that were accessed when a memory was encoded, or on any previous occasion when the memory was read. Hence Bob’s predicament when accessing his own memory of an observation is no different from Alice’s predicament when accessing a report from Bob; in both cases, all the usual caveats pertaining to quantum measurement apply.
The requirement that classical memories be observed in order to function as memories renders the LOCC scenario descriptive of all reportable or even recallable observations by single observers. When John Wheeler said “no phenomenon is a physical phenomenon until it is an observed phenomenon” (quoted in [45] p. 191), he might as well have said that no phenomenon is a physical phenomenon until it is an observed and reported phenomenon, at least reported to the observer himself/herself/itself via recall from memory. It is reporting that renders observational results classical. In this sense, observational classicality is intrinsically public, or social; without an observer to access a report of an observation, there is no evidence that the observation has been classically recorded. Hence explaining the appearance of LOCC can be considered to be equivalent to explaining the ability of a single observer to interpret a physical state, including a physical state of his/her/its own memory system, as a classical report of a previous observation.
5.3. Implementation of POVMs by HU
Let us suppose that Alice obtains a report from Bob simply by observing his state. If Alice is to regard a state of Bob as a report, i.e., as classically encoding a state of some identified external system S, it must be possible, at least in principle, for her to establish that a counterfactual-supporting classical correlation—a classical correlation that exists whether observed or not—between states of B and states of S is maintained by the B − S interaction and hence, given decompositional equivalence, by HU. The action of HU maintains a counterfactual-supporting classical correlation between states of S and B just in case S and B are entangled; if the correlation that is maintained is perfect, S and B must be monogamously entangled. Whether joint states of two identified systems appear to be entangled is, however, dependent on the choice of basis and hence on the POVM deployed to determine their joint states [46,47,48,49,50]. Bob’s state is, therefore, a classical encoding of the state of S for Alice only if she deploys a POVM that projects onto a Hilbert-space basis in which the joint state of B and S is entangled, and is a perfectly classical encoding if this apparent entanglement is monogamous.
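The basis-dependence of apparent entanglement invoked here can be made concrete with a small numerical sketch (my own illustration, not taken from the paper or from [46,47,48,49,50]): a pure state of a four-dimensional Hilbert space that is a product state under one tensor-product structure (TPS) is maximally entangled under another, the two structures being related by a global unitary change of basis.

```python
import numpy as np

def entanglement_entropy(psi):
    # Von Neumann entropy (in bits) of the reduced state of the first
    # factor, for a pure state psi of a 2 x 2 tensor-product structure.
    m = psi.reshape(2, 2)                    # coefficients c_ij of |i>|j>
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

psi = np.array([1, 0, 0, 0], dtype=complex)  # |00>: a product state

# An alternative TPS, defined by the unitary that maps the
# computational basis to the Bell basis.
U = np.array([[1,  1,  0,  0],
              [0,  0,  1,  1],
              [0,  0,  1, -1],
              [1, -1,  0,  0]], dtype=complex) / np.sqrt(2)

print(entanglement_entropy(psi))      # 0.0: unentangled in the first TPS
print(entanglement_entropy(U @ psi))  # 1.0: maximally entangled in the second
```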
To say of any observer O that “O deploys a POVM to identify S” is, therefore, just to say that O and S are entangled by the action of HU on the quantum degrees of freedom that implement O and S: Observation is entanglement. The existence of such entanglement is an objective fact that is, in a universe satisfying decompositional equivalence, independent of the boundaries of S and O. Whether S and O appear to be entangled to a third-party observer, however, is not an objective fact; it rather depends on the POVM employed by that observer to extract classical information from the degrees of freedom implementing S and O. Hence while the classical correlation between S and O is “real”—i.e., physical, a result of the action of HU—whether it appears classical to third parties is virtual, i.e., dependent on semantic interpretation. All public communication is, therefore, nonfungible or “unspeakable” in the sense defined in [51]: The information communicated is always strictly relative to a POVM—a “reference frame” in the language of [51]—that is not specified by HU and cannot be assumed without circularity. Any publicly-communicable classical description of the world is, therefore, intrinsically logically circular.
The intrinsic circularity of public classical communication renders an explanation of a shared classical world in terms of fundamental physics unattainable. The shared classical world of ordinary experience cannot, therefore, be regarded as “emergent” from fundamental physics alone; instead it must be thought of as stipulated by the choice of a POVM, i.e., as stipulated by observers themselves. From a practical point of view, however, a shared POVM is a shared item of experimental apparatus. The conclusion that classical communication is entanglement therefore raises the possibility of discovering an item of apparatus that implements a POVM capable of revealing, to third-party observers, the entanglement that transfers classical information from S to O in any particular instance. With such an apparatus, it would be possible to claim a third-party understanding of the local action of HU that implements any particular instance of classical communication.
6. Conclusions
As Bohr [52] often emphasized, physicists must rely on language, pictures, and other conventionalized tools of human communication to construct descriptions of the world. They must, moreover, rely on measurements conducted in finite regions of space and time. The acquisition and communication of classical information is, therefore, always pursued in a LOCC setting. What has been examined here is the question of how such communication can be understood in terms of basic physics: Minimal quantum mechanics together with decompositional equivalence. What has been shown is that classical communication is quantum entanglement that results deterministically from the action of HU. Such entanglement is not publicly accessible to multiple observers without the further specification of a POVM. Any such specification is, however, itself an item of classical information; hence any claim that classical communication “emerges” from quantum entanglement involves logical circularity. The idea that quantum theory can produce a shared classicality—can be an “ultimate theory that needs no modifications to account for the emergence of the classical” ([53] p. 1)—therefore cannot be maintained. This loss of “emergent classicality” is, however, balanced by a powerful gain: The possibility that a POVM can be discovered that will reveal, in particular cases, the entanglement by which the transfer of classical information from system to observer is implemented.
The dependence of physics on model-theoretic or semantic assumptions explored here ties physics explicitly to classical computer science: the selection of a shared POVM that enables quantum theory to get off the ground as a description of a shared observable world is fully equivalent to the selection of a virtual-machine description that enables a physical process to be described as the instantiation of a classical algorithm. All physical descriptions are, from this point of view, specifications of classical virtual machines. What distinguishes “quantum” from “classical” computation is the choice of a POVM. The increased efficiency of quantum computation is, therefore, not the result of a different kind of device executing a different kind of behavior, but rather the result of a different choice of description. Castagnoli [54,55] has shown that executions of quantum algorithms can be understood as executions of classical algorithms in which half of the required answer is known up front; what the current analysis suggests is that this half of the required answer is encoded by the POVM with which the initial state of a quantum computation is defined.
References
1. Nielsen M.A.; Chuang I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000.
2. Landauer R. Information is a physical entity. Physica A 1999, 263, 63–67. doi: 10.1016/S0378-4371(98)00513-5.
3. Fields C. If physics is an information science, what is an observer? Information 2012, 3, 92–123.
4. Ollivier H.; Poulin D.; Zurek W.H. Objective properties from subjective quantum states: Environment as a witness. Phys. Rev. Lett. 2004, 93, 220401.
5. Ollivier H.; Poulin D.; Zurek W.H. Environment as a witness: Selective proliferation of information and emergence of objectivity in a quantum universe. Phys. Rev. A 2005, 72, 042113.
6. Blume-Kohout R.; Zurek W.H. Quantum Darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information. Phys. Rev. A 2006, 73, 062310.
7. Zurek W.H. Quantum Darwinism. Nat. Phys. 2009, 5, 181–188. doi: 10.1038/nphys1202.
8. Griffiths R.B. Consistent Quantum Theory; Cambridge University Press: New York, NY, USA, 2002.
9. Everett H., III. “Relative state” formulation of quantum mechanics. Rev. Mod. Phys. 1957, 29, 454–462. doi: 10.1103/RevModPhys.29.454.
10. Landsman N.P. Between classical and quantum. In Handbook of the Philosophy of Science: Philosophy of Physics; Butterfield J., Earman J., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 417–553.
11. Schlosshauer M. Experimental motivation and empirical consistency of minimal no-collapse quantum mechanics. Ann. Phys. 2006, 321, 112–149. doi: 10.1016/j.aop.2005.10.004.
12. Deutsch D. Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. R. Soc. Lond. A 1985, 400, 97–117. doi: 10.1098/rspa.1985.0070.
13. Farhi E.; Gutmann S. An analog analogue of a digital quantum computation. Phys. Rev. A 1998, 57, 2403–2406. doi: 10.1103/PhysRevA.57.2403.
14. Briegel H.J.; Browne D.E.; Dür W.; Raussendorf R.; Van den Nest M. Measurement-based quantum computation. Nat. Phys. 2009, 5, 19–26.
15. Aaronson S. NP-complete problems and physical reality. Available online (accessed on 6 December 2012).
16. Wallace D. Philosophy of quantum mechanics. In The Ashgate Companion to Contemporary Philosophy of Physics; Rickles D., Ed.; Ashgate: Aldershot, UK, 2008; pp. 16–98.
17. von Neumann J. Mathematische Grundlagen der Quantenmechanik; Springer: Berlin, Germany, 1932.
18. Fuchs C.A. QBism: The perimeter of quantum Bayesianism. Available online (accessed on 6 December 2012).
19. Ashby W.R. An Introduction to Cybernetics; Chapman and Hall: London, UK, 1956.
20. Moore E.F. Gedanken-experiments on sequential machines. In Automata Studies; Shannon C.E., McCarthy J., Eds.; Princeton University Press: Princeton, NJ, USA, 1956; pp. 129–155.
21. Rips L.; Blok S.; Newman G. Tracing the identity of objects. Psychol. Rev. 2006, 113, 1–30.
22. Scholl B.J. Object persistence in philosophy and psychology. Mind Lang. 2007, 22, 563–591. doi: 10.1111/j.1468-0017.2007.00321.x.
23. Fields C. The very same thing: Extending the object token concept to incorporate causal constraints on individual identity. Adv. Cogn. Psychol. 2012, 8, 234–247.
24. Zurek W.H. Decoherence, einselection and the existential interpretation (the rough guide). Philos. Trans. R. Soc. A 1998, 356, 1793–1821. doi: 10.1098/rsta.1998.0250.
25. Zurek W.H. Decoherence, einselection, and the quantum origins of the classical. Rev. Mod. Phys. 2003, 75, 715–775. doi: 10.1103/RevModPhys.75.715.
26. Aad G.; et al. (ATLAS Collaboration). Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B 2012, 716, 1–29. doi: 10.1016/j.physletb.2012.08.020.
27. CMS Collaboration. Combined results of searches for the standard model Higgs boson in pp collisions at √s = 7 TeV. Phys. Lett. B 2012, 710, 26–48.
28. Fields C. A model-theoretic interpretation of environmentally-induced superselection. Int. J. Gen. Syst. 2012, 41, 847–859. doi: 10.1080/03081079.2012.707197.
29. Hu B.L. Emergence: Key physical issues for deeper philosophical inquiries. J. Phys. Conf. Ser. 2012, 361, 012003. doi: 10.1088/1742-6596/361/1/012003.
30. Tegmark M. The mathematical universe. Found. Phys. 2008, 38, 101–150. doi: 10.1007/s10701-007-9186-9.
31. Schlosshauer M. Decoherence, the measurement problem, and interpretations of quantum theory. Rev. Mod. Phys. 2004, 76, 1267–1305. doi: 10.1103/RevModPhys.76.1267.
32. Schlosshauer M. Decoherence and the Quantum-to-Classical Transition; Springer: Berlin, Germany, 2007.
33. Zurek W.H. Pointer basis of the quantum apparatus: Into what mixture does the wave packet collapse? Phys. Rev. D 1981, 24, 1516–1525. doi: 10.1103/PhysRevD.24.1516.
34. Zurek W.H. Environment-induced superselection rules. Phys. Rev. D 1982, 26, 1862–1880. doi: 10.1103/PhysRevD.26.1862.
35. Joos E.; Zeh H.D. The emergence of classical properties through interaction with the environment. Z. Phys. B 1985, 59, 223–243. doi: 10.1007/BF01725541.
36. Zurek W.H. Decoherence, chaos, quantum-classical correspondence, and the algorithmic arrow of time. Phys. Scr. 1998, T76, 186–198. doi: 10.1238/Physica.Topical.076a00186.
37. Brune M.; Hagley E.; Dreyer J.; Maître X.; Maali A.; Wunderlich C.; Raimond J.M.; Haroche S. Observing the progressive decoherence of the “meter” in a quantum measurement. Phys. Rev. Lett. 1996, 77, 4887–4890.
38. Fields C. Bell’s theorem from Moore’s theorem. Int. J. Gen. Syst., in press. Available online (accessed on 10 December 2012).
39. Zeh H.D. The problem of conscious observation in quantum mechanical description. Found. Phys. Lett. 2000, 13, 221–233. doi: 10.1023/A:1007895803485.
40. Hartle J.B. The quasiclassical realms of this quantum universe. Found. Phys. 2011, 41, 982–1006. doi: 10.1007/s10701-010-9460-0.
41. Tanenbaum A.S. Structured Computer Organization; Prentice-Hall: Englewood Cliffs, NJ, USA, 1976.
42. Hopcroft J.E.; Ullman J.D. Introduction to Automata Theory, Languages, and Computation; Addison-Wesley: Boston, MA, USA, 1979.
43. Wallace D. Decoherence and ontology. In Many Worlds? Everett, Quantum Theory, and Reality; Saunders S., Barrett J., Kent A., Wallace D., Eds.; Oxford University Press: Oxford, UK, 2010; pp. 53–72.
44. Tegmark M. Many worlds in context. In Many Worlds? Everett, Quantum Theory, and Reality; Saunders S., Barrett J., Kent A., Wallace D., Eds.; Oxford University Press: Oxford, UK, 2010; pp. 553–581.
45. Scully R.J.; Scully M.O. The Demon and the Quantum: From the Pythagorean Mystics to Maxwell’s Demon and Quantum Mystery; Wiley: New York, NY, USA, 2007.
46. Zanardi P. Virtual quantum subsystems. Phys. Rev. Lett. 2001, 87, 077901.
47. Zanardi P.; Lidar D.A.; Lloyd S. Quantum tensor product structures are observable-induced. Phys. Rev. Lett. 2004, 92, 060402.
48. de la Torre A.C.; Goyeneche D.; Leitao L. Entanglement for all quantum states. Eur. J. Phys. 2010, 31, 325–332.
49. Harshman N.L.; Ranade K.S. Observables can be tailored to change the entanglement of any pure state. Phys. Rev. A 2011, 84, 012303.
50. Thirring W.; Bertlmann R.A.; Köhler P.; Narnhofer H. Entanglement or separability: The choice of how to factorize the algebra of a density matrix. Eur. Phys. J. D 2011, 64, 181–196.
51. Bartlett S.D.; Rudolph T.; Spekkens R.W. Reference frames, superselection rules, and quantum information. Rev. Mod. Phys. 2007, 79, 555–609.
52. Bohr N. The quantum postulate and the recent development of atomic theory. Nature 1928, 121, 580–590.
53. Zurek W.H. Relative states and the environment: Einselection, envariance, quantum Darwinism, and the existential interpretation. Available online (accessed on 10 December 2012).
54. Castagnoli G. Quantum correlation between the selection of the problem and that of the solution sheds light on the mechanism of the speed up. Phys. Rev. A 2010, 82, 052334.
55. Castagnoli G. Probing the mechanism of the quantum speed-up by time-symmetric quantum mechanics. Available online (accessed on 10 December 2012). |
383bcba61f388245 | Sunday, August 9, 2015
A very brief introduction to the electron correlation energy
RHF is often not accurate enough for predicting the change in energies due to a chemical reaction, no matter how big a basis set we use. The reason is the error due to the molecular orbital approximation
$$\Psi ({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N}) \approx {\Phi _0}({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N})$$
and the energy difference due to this approximation is known as the correlation energy. Just as we improve the LCAO approximation by including more terms in an expansion, we can improve the orbital approximation by an expansion in terms of Slater determinants
$$\Psi ({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N}) \approx \sum\limits_{i = 1}^L {{C_i}{\Phi _i}({{\bf{r}}_1},{{\bf{r}}_2}, \ldots {{\bf{r}}_N})} $$
The “basis set” of Slater determinants $\{\Phi_i \}$ is generated by first computing an RHF wave function $\Phi_0$ as usual, which also generates a lot of virtual orbitals, and then generating other determinants with these orbitals. For example, for an atom or molecule with two electrons the RHF wave function is $\left| {{\phi _1}{{\bar \phi }_1}} \right\rangle $ and we have $K-1$ virtual orbitals (${\phi _2}, \ldots ,{\phi _K}$, where $K$ is the number of basis functions), which can be used to make other Slater determinants like $\Phi _1^2 = \left| {{\phi _1}{{\bar \phi }_2}} \right\rangle $ and $\Phi _{11}^{22} = \left| {{\phi _2}{{\bar \phi }_2}} \right\rangle $ (Figure 1).
Figure 1. Schematic representation of the electronic structure of some of the determinants used in Equation 3
Conceptually (in analogy to spectroscopy), an electron is excited from an occupied to a virtual orbital: $\left| {{\phi _1}{{\bar \phi }_2}} \right\rangle$ represents a single excitation and $\left| {{\phi _2}{{\bar \phi }_2}} \right\rangle $ a double excitation. For systems with more than two electrons higher excitations (like triple and quadruple excitations) are also possible. In general
$$\Psi \approx {C_0}{\Phi _0} + \sum\limits_a {\sum\limits_r {C_a^r\Phi _a^r} } + \sum\limits_a {\sum\limits_b {\sum\limits_r {\sum\limits_s {C_{ab}^{rs}\Phi _{ab}^{rs}} } } } + \ldots $$
The expansion coefficients can be found using the variational principle
$$\frac{{\partial E}}{{\partial {C_i}}} = 0 \ \textrm{for all} \ i$$
and this approach is called configuration interaction (CI). The more excitations we include (i.e., increase $L$ in the expansion above) the more accurate the expansion and the resulting energy become. If the expansion includes all possible excitations (known as full CI, FCI) then we have a numerically exact wave function for the particular basis set, and if we use a basis set where the HF limit is reached then we have a numerically exact solution to the electronic Schrödinger equation! That’s the good news …
The bad news is that the FCI “basis set of determinants” is much, much larger than the LCAO basis set (i.e. $L \gg K$),
$$L = \frac{{K!}}{{N!(K - N)!}}$$
where $N$ is the number of electrons. Thus, an RHF/6-31G(d,p) calculation on water involves 24 basis functions and roughly $\tfrac{1}{8}K^4$ = 42,000 2-electron integrals, but a corresponding FCI/6-31G(d,p) calculation involves nearly 2,000,000 Slater determinants.
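A quick sanity check of these counts (a sketch; K = 24 and N = 10 are the values quoted above for water):

```python
import math

K, N = 24, 10  # basis functions and electrons for H2O/6-31G(d,p)

two_electron_integrals = K**4 / 8    # ~ (1/8) K^4
fci_determinants = math.comb(K, N)   # L = K! / (N! (K-N)!)

print(f'{two_electron_integrals:,.0f}')  # 41,472   (~42,000)
print(f'{fci_determinants:,}')           # 1,961,256 (~2,000,000)
```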
Just like finding the LCAO coefficients involves the diagonalization of the Fock matrix, finding the CI coefficients (Ci) and the lowest energy also involves a matrix diagonalization.
$$\bf{E} = {{\bf{C}}^t}{\bf{HC}}$$
where $\bf{E}$ is a diagonal matrix whose smallest value ($E_0$) corresponds to the variational energy minimum. While the Fock matrix is a $K \times K$ matrix, the CI Hamiltonian ($\bf{H}$) is an $L \times L$ matrix. Just holding the 2-million-by-2-million matrix for the water molecule using the 6-31G(d,p) basis set requires tens of thousands of gigabytes! Clever programming and large computers actually make a FCI/6-31G(d,p) calculation on $\ce{H2O}$ possible, but FCI is clearly not a routine molecular modeling tool.
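As a toy illustration of this step (random numbers standing in for a real CI Hamiltonian), diagonalizing a small symmetric $\bf{H}$ yields the variational ground-state energy as its lowest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 6                                  # a tiny "determinant basis"
H = rng.normal(size=(L, L))
H = (H + H.T) / 2                      # CI Hamiltonians are symmetric

E, C = np.linalg.eigh(H)               # E = C^t H C with E diagonal
print('E0 =', E[0])                    # variational energy minimum
print('E1 =', E[1])                    # next root, e.g., an excited state
```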
Using, for example, only single excitations (called CI singles, CIS)
$${\Psi ^{CIS}} = {C_0}{\Phi _0} + \sum\limits_a {\sum\limits_r {C_a^r\Phi _a^r} } $$
is feasible; however, it doesn’t result in any improvement. The CIS Hamiltonian has three kinds of contributions:
$$\langle {\Psi ^{CIS}}|\hat H|{\Psi ^{CIS}}\rangle \to \begin{cases} \langle {\Phi _0}|\hat H|{\Phi _0}\rangle = E_{RHF}\\ \langle {\Phi _0}|\hat H|\Phi _a^r\rangle = F_{ar} = 0\\ \langle \Phi _a^r|\hat H|\Phi _a^r\rangle \end{cases}$$
The vanishing of the matrix elements between $\Phi_0$ and the singly excited determinants is Brillouin’s theorem, and it means that when this matrix is diagonalized $E_0=E_{RHF}$. Thus CIS does not give us any correlation energy. However, CIS is not completely useless. The second lowest value of $\bf{E}$, $E_1$, represents the energy of the first excited state, at roughly an RHF quality.
Thus, we need at least single and double excitations (CISD)
to get any correlation energy. However, in general including doubles already results in an $\bf{H}$ matrix that is impractically large for a matrix diagonalization. CI, i.e. finding the $C_i$ coefficients using the variational principle, is therefore rarely used to compute the correlation energy.
Perhaps the most popular means of finding the $C_i$’s is by perturbation theory, a standard mathematical technique in physics to compute corrections to a reference state (in this case RHF). Perturbation theory using this reference is called Møller-Plesset perturbation theory, and there are several successively more accurate and more expensive variants: MP2 (which includes some double excitations), MP3 (more double excitations than MP2), and MP4 (single, double, triple, and some quadruple excitations).
Another approach is called coupled cluster, which has a similar hierarchy of methods, such as CCSD (singles and doubles) and CCSD(T) (CCSD plus an estimate of the triples contributions). In terms of accuracy vs. expense, MP2 is the best choice of a cheap correlation method, followed by CCSD and CCSD(T). For example, MP4 is not too much cheaper than CCSD(T), but the latter is much more accurate. In fact, for many practical purposes it is rarely necessary to go beyond CCSD(T) in terms of accuracy, provided a triple-zeta or higher basis set is used. However, CCSD(T) is usually too computationally demanding for molecules with more than 10 non-hydrogen atoms. In general, the computational expense of these correlated methods scales much worse than RHF with respect to basis set size: MP2 ($K^5$), CCSD ($K^6$), and CCSD(T) ($K^7$). These methods also require a significant amount of computer memory compared to RHF, which is often the practical limitation of these post-HF methods. Finally, it should be noted that all these calculations also imply an RHF calculation as the first step.
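To see what these exponents mean in practice, consider the cost increase when the basis set size is doubled (a sketch; the RHF exponent of 4 is assumed from the two-electron integral count quoted earlier):

```python
# Doubling K multiplies the cost of a K^p method by 2^p.
for method, p in [('RHF', 4), ('MP2', 5), ('CCSD', 6), ('CCSD(T)', 7)]:
    print(f'{method:8s} ~K^{p}: cost x{2**p} when K doubles')
# RHF x16, MP2 x32, CCSD x64, CCSD(T) x128
```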
In conclusion we now have ways of systematically improving the wave function, and hence the energy, by increasing the number of basis functions ($K$) and the number of excitations ($L$) as shown in Figure 2.
Figure 2. Schematic representation of the increase in accuracy due to using better correlation methods and larger basis sets.
The most important implication of this is that in principle it is possible to check the accuracy of a given level of theory without comparison to experiment! If going to a better correlation method or a bigger basis set does not change the answer appreciably, then we have a genuine prediction with only the charges and masses of the particles involved as empirical input. These kinds of calculations are therefore known as ab initio or first-principles calculations. In practice, different properties will converge at different rates, so it is better to monitor the convergence of the property you are actually interested in than that of the total energy. For example, energy differences (e.g., between two conformers) converge earlier than the molecular energies. Furthermore, the molecular structure (bond lengths and angles) tends to converge faster than the energy difference. So it is common to optimize the geometry at a low level of theory [e.g., RHF/6-31G(d)] followed by an energy computation (a single point energy) at a higher level of theory [e.g., MP2/6-311+G(2d,p)]. This level of theory would be denoted MP2/6-311+G(2d,p)//RHF/6-31G(d).
Finally, the correlation energy is not just a fine-tuning of the RHF result but introduces an important intermolecular force called the dispersion energy. The dispersion energy (also known as the induced dipole-induced dipole interaction) is a result of the simultaneous excitation of at least two electrons and is not accounted for in the RHF energy. For example, the stacked orientation of base pairs in DNA is largely a result of dispersion interactions and cannot be predicted using RHF.
|
25ac07c07a6a52c5 | Kwagunt: Creek and Canyon
When I was 18 I was fascinated with American Pragmatism and its theory of truth. I devoured the works of William James and Charles Peirce, the founders of that epistemological school (most of them, anyway; when it comes to scholarship, I’m a hopeless dilettante). They are two of the most amiable minds I have ever encountered. They argued that we come to believe that propositions are true, not so much because they really are, as because they are expedient for us to believe. So, what we call truth is what it is expedient for us to believe – whether or not what we believe really is true.
This notion raised a firestorm when it was proposed in the late 19th century. James and Peirce both expressed themselves strongly, so it was not perhaps unnatural that they were widely understood to mean that truth is nothing but what it is expedient for us to believe. They did not; they meant only that we are so made as to feel that a proposition is true, or likely to be true, or “close enough for government work,” when it works out well in practice – in mundane life, or in scientific experiment, or when tested by logic, or when fitted to our other well-tested beliefs. So, Pragmatism is not so much an epistemological theory, properly speaking, as it is psychological. This has not stopped later generations of Pragmatists from insisting that there is no final Truth, no terminus ad quem of intellectual inquiry, but rather only one waypoint after another in an endless process of searching that is designed only to get us through life, from one approximation of a good understanding to the next.
I was thinking about all this one day as I hiked along the slick muddy bed of Kwagunt Creek, which flows down a canyon to meet the Colorado River in the Grand Canyon, where I was then sojourning as a whitewater boatman. Pragmatism’s insights into our intellectual operations – or mine, anyway – seemed undeniably accurate. How then could I ever know that I had understood a real truth? I mean, there would be nothing to prevent me from such a veridical discovery, but absent any objective criterion of truth – such as, you know, whether or not a notion was true *in fact* – nothing to show me that I had ever achieved it, either.
It was then that I slipped in the mud, very nearly falling on a small boulder and hurting myself quite badly. I thought first, chuckling, about Dr. Johnson’s retort to Bishop Berkeley’s Idealism, which was to kick a stone and demand whether the pain that resulted were merely ideal. I thought then about pain, and what it tells us about our relation to the world. It occurred to me suddenly that pain would be totally useless, indeed worse than useless, unless it conveyed veracious information. There would be no reason for an animal to be equipped with pain, and good reason for it to be insensible thereto, unless the pain conveyed knowledge. Indeed, if an animal’s perceptions of any sort were not at least mostly veridical, its survival prospects would be terrible. So, there can be no way that animals – including man – that have survived millions of years of testing by nature can be poorly set up to apprehend those aspects of the environment that are really important to their lives, to their prosperity, survival, and reproduction. On the contrary.
I realized then that a proposition could not be expedient or useful for us to believe unless it were in some sense, or from some point of view, truly expedient – i.e., a truly reliable guide to action, given the real situation we faced. If the expedience were not really in operation, it would not be expedience at all, but noise that would lead us to disaster and death. If expedience were in fact false as a guide to action, then we could never arrive at the feeling that any proposition whatsoever was true. Our feelings of truth, then, must be more or less reliable guides to truth. We must be fitted by nature to apprehend the truth. If we were not, we never would have made it this far.
It was simple to generalize to our feelings of moral and aesthetic truth, to religious and philosophical truth. It was simple also to generalize to the notion that convention, tradition, custom all represented hard won and true understandings of good policy: of, that is to say, the principles that it is good and useful to believe, so that they order one’s behavior properly and prudently.
These notions were reinforced a few years later when I read physicist John Barrow’s book The Artful Universe. Barrow argues that we are fitted to the appreciation of values really present in nature, and each other, in just the same way and for the same reason that we are fitted to the appreciation of light. The physical structures of the eye and visual cortex are analogous to our organs of moral and aesthetic appreciation, in that their objects are really out there, and are really important and useful for us to understand; and these organs all tell us true things about the world. Thus the moral and aesthetic character of human experience accurately reflects the moral and aesthetic character of the world.
From these considerations it is a short step to the conclusion that the feeling of sehnsucht refers to something real. Sehnsucht is a feeling of profound and acute nostalgia and longing for a home of rest, and for a state of utter perfusion in experience of the feeling of inexhaustible plenitude of significance, importance, fullness, rightness, goodness, beauty and completion, which we feel must there obtain. Sehnsucht was a major factor in the inner life of CS Lewis from earliest childhood. His restless impulse toward the home of rest informed much of his work. It was that intense longing to which his friend JRR Tolkien appealed in the conversation they shared during a hike, wherein the atheist Lewis first opened his mind, seriously and honestly, to Tolkien’s evangelical question – “Jack, what if it’s just true?” – and so became a Christian.
Lewis later insisted that there can be no felt need or aptitude common to the human being that does not correspond, in Barrow’s sense, to values actually present in the objects of our experience. Considered in its most general aspect, the idea is that Leibniz’s Principle of Sufficient Reason applies, not just to truths, but to facts: if a thing X exists at all, it must exist for reasons arising from its ontological milieu that, when fully specified (assuming that such a completely exhaustive description were completable), would turn out to specify just X, exactly and completely – and would, indeed, demand its existence. Thus the river demands its bed, and vice versa. If you see a river bed, there must be an actual river somehow involved in its formation, and if you see a river, you see where its bed lies. Kwagunt Creek would not be where and what it is without Kwagunt Canyon; and vice versa.
Likewise, if you see an eye and a visual cortex in an organism, there must be actual light somehow involved in its formation. And, again, likewise: if you see a pervasive longing for a lost homeland in a species, there must be some such actual homeland somehow involved in its formation. Our feeling of sehnsucht is analogous to what a canyon feels, when no water has flowed through it in millions of years.
Not that canyons feel anything. I would not suggest that they do. But, on the other hand, I have seen too many of them, in too many of their moods, to suggest that they do not.
This correspondence of our aptitudes and felt needs to our world is more than mere mechanical fit of organism to environment, as key to lock. It is, rather, the fit of aboutness, or intension, in which the key fits the lock because it is about the lock, because it is intended ab initio for the lock; and vice versa. If we feel sehnsucht, there must therefore be something out there, something real, that we feel sehnsucht about; and, indeed, that we feel sehnsucht means that we ourselves are about its mysterious object, and are intended for its realization in actuality.
Concluding from the reality of sehnsucht to the reality of its object is of course but a step from a like conclusion from the reality of the universal human experience of the numinous to the reality of its object, so ably and precisely documented by Rudolph Otto in his epochal work, The Idea of the Holy. When we venture into certain places, at certain times, and do or say or think certain things, we can feel the loom of something tremendous, mysterious, dreadful, and joyous. We apprehend at such times the presence of the object of our sehnsucht, and indeed the source of all the goodly things for which we long, and restlessly search, strangers in a strange land. It is then, and only then, that our restless hearts are full, and still, even as they tremble with fear and exaltation.
– Pascal, Pensées (10.148)
They got the boat out, and the Rat took the sculls, paddling with caution. Out in midstream, there was a clear, narrow track that faintly reflected the sky; but wherever shadows fell on the water from bank, bush, or tree, they were as solid to all appearance as the banks themselves, and the Mole had to steer with judgment accordingly. Dark and deserted as it was, the night was full of small noises, song and chatter and rustling, telling of the busy little population who were up and about, plying their trades and vocations through the night till sunshine should fall on them at last and send them off to their well-earned repose. The water’s own noises, too, were more apparent than by day, its gurglings and ‘cloops’ more unexpected and near at hand; and constantly they started at what seemed a sudden clear call from an actual articulate voice.
‘It’s gone!’ sighed the Rat, sinking back in his seat again. ‘So beautiful and strange and new. Since it was to end so soon, I almost wish I had never heard it. For it has roused a longing in me that is pain, and nothing seems worth while but just to hear that sound once more and go on listening to it for ever. No! There it is again!’ he cried, alert once more. Entranced, he was silent for a long space, spellbound.
‘Now it passes on and I begin to lose it,’ he said presently. ‘O Mole! the beauty of it! The merry bubble and joy, the thin, clear, happy call of the distant piping! Such music I never dreamed of, and the call in it is stronger even than the music is sweet! Row on, Mole, row! For the music and the call must be for us.’
The Mole, greatly wondering, obeyed. ‘I hear nothing myself,’ he said, ‘but the wind playing in the reeds and rushes and osiers.’
‘Clearer and nearer still,’ cried the Rat joyously. ‘Now you must surely hear it! Ah— at last— I see you do!’
On either side of them, as they glided onwards, the rich meadow-grass seemed that morning of a freshness and a greenness unsurpassable. Never had they noticed the roses so vivid, the willow-herb so riotous, the meadow-sweet so odorous and pervading. Then the murmur of the approaching weir began to hold the air, and they felt a consciousness that they were nearing the end, whatever it might be, that surely awaited their expedition.
A wide half-circle of foam and glinting lights and shining shoulders of green water, the great weir closed the backwater from bank to bank, troubled all the quiet surface with twirling eddies and floating foam-streaks, and deadened all other sounds with its solemn and soothing rumble. In midmost of the stream, embraced in the weir’s shimmering arm-spread, a small island lay anchored, fringed close with willow and silver birch and alder. Reserved, shy, but full of significance, it hid whatever it might hold behind a veil, keeping it till the hour should come, and, with the hour, those who were called and chosen.
– Kenneth Grahame, The Wind in the Willows, Chapter 7
Consider now how the shepherds felt, and the Magi, and the wise old ox and the foolish ass.
What if it were all true?
Merry Christmas.
6 thoughts on “Kwagunt: Creek and Canyon
1. if a thing X exists at all, it must exist for reasons arising from its ontological milieu that, when fully specified would turn out to specify just X, exactly and completely – and would, indeed, demand its existence.
So there are no contingent facts (or things) and all facts (or things) are necessary?
• A percipient question.
What is demanded ex ante by a particular state of an actual world can be understood – indeed, ought to be understood – as a specific step in the evolution of the Schrödinger equation (or its equivalent), which admits of many possible realizations. So what you get from the antecedents of X is a tightly specified range of outcomes of a type compossible with what has gone before. X realizes one of them (except under Many Worlds Interpretations).
Ex post, what has happened appears inevitable, given its antecedents. But the full character of those antecedents is unknowable except ex post. Indeed, the antecedents of X are knowable qua antecedents of X only by reference to their realization in and by X, for it is only in virtue thereof that we can know what has actually happened in respect to the causal inputs of X – what X has made of them.
Once the process of the actualization of X is completed and X is thereby rendered fully definite, then and only then can we see the causal relations between X and its antecedents; for, only then can we see X at all. And these causal relations then appear to us to be as real, concrete and solid as X, in whom they are revealed and made real, and whom they characterize.
Ex post, X appears therefore to be the only possible product of the inputs that eventuated in X.
This is only to say that – indeed, this is just why it is true that – the past as actual and concrete is immutable, and has no longer any alternatives.
2. From Tolkien’s ‘The debate of Finrod and Andreth’
A High Elf speaks with a Woman – ‘Each of our kindreds perceives Arda differently, and appraises its beauties in different mode and degree. How shall I say it? To me the difference seems like that between one who visits a strange country, and abides there a while (but need not), and one who has lived in that land always (and must)’…
‘Were you and I to go together to your ancient homes east away I should recognize the things there as part of my home, but I should see in your eyes the same wonder and comparison as I see in the eyes of Men in Beleriand who were born here’.
3. The skeptic will counter, we may presume with a bravado proportionate to the square of the weakness of the criticism, that this sense of the numinous, these desires which have no possible earthly fulfillment, are a mere epiphenomenon of consciousness. But it is an awfully strange and persistent epiphenomenon.
If this sense confers no actual adaptive advantage, why would it not be largely bred out of the herd? In fact it must confer an adaptive advantage, or it would not be so widely dispersed in human populations. A fortiori, the peoples more advanced in understanding this sense of the numinous, in separating it from mere superstition, in codifying, formalizing, and explaining it, have it seems reaped the greater adaptive advantage. Go figure!
So it seems that nature itself (herself, I say), among the myriad of possible natures, has conspired to make creatures to believe (more or less) in God. And it really is quite fantastical to think that in this one instance, alone among all other human senses and desires, there is no reality which corresponds to the sense.
• Hah! Well there’s the old sermon margin note: “Weak point; pound pulpit!”
I think that most people, and among these more women than men, are convinced (if you can call it that) by feeling, and then making up the rationalizations later. Of course, if the thing of which you are being convinced is, in fact, the truth; then: no harm done!
But if the thing you feel happens to be the quite opposite of the truth; then it can lead to much harm… as well as very humorous rationalizations. I would like to believe that it is the peculiar vocation of stand-up comedians to correct this defect in human populations, but only a few of them seem to be able to live up to their calling.
So just as the theist looks around and sees God under every rock, every boson, every fact of which he is aware; so too the atheist sees in every fact an argument against the existence of God. And this sort of epiphenomenon of consciousness is just that sort of thing, a post hoc rationalization that allows them to defend an idea to which they are emotionally committed. Of course it explains nothing… so what IF the desire for God is an epiphenomenon of consciousness? That doesn’t make it untrue. In fact, it could just as likely mean it IS true!
Ya know, I found, via Sailer I think, the recent Norwegian documentary series Brainwashed (Hjernevask), recently subtitled into English (except for the parts that are actually in English… Harald Eia speaks perfect English). It is pretty fantastic– about the best you could hope for from a semi-mainstream source in exposing the patent, but politically motivated, scientific falsehoods about genetics and environment. And the one very striking thing about the series is that time and again, Harald would bring back solid, non-partisan, open-minded, moderate arguments against the (I would say absurd) views of various Norwegian social scientists, and you can visibly see them get hot under the collar, visibly put up defenses. Several times, the Norwegian social scientists use the word: uninteresting to describe areas of biological research. It is an emotional defense mechanism… to defend what… to defend what is to them psychologically a religious viewpoint. (Of course it just happens to be a religious viewpoint very much at odds with everyone’s lying eyes. Say what you will about the Holy Trinity, but you don’t see the Holy Trinity not existing around you every day.)
I do think that the yawning gulf between Traditionalists and the world domination that they (and the world) deserve is our failure to understand that most people are not argued into the right way of thinking. That is they do not think themselves into a way of thinking. They are, most people that is, socialized to feel a certain way, and that way of feeling leads them into a way of thinking. This fact is itself morally neutral. It’s merely that the folks in charge of propaganda (government, academia, organized religion, and the press) in any generation has a moral obligation to socialize people into the truth. That, and that alone, is the signal failure of liberal democracy (I happen to believe it is an inherent flaw in liberal democracy). Today those in charge of the organs of propaganda socialize people into convenient (for the elites) lies. The goal then ought not be so much to contend intellectually for the truth, which we should of course do, but to more strongly contend for the organs of propaganda, by which we may socialize (dispose) the people toward the truth.
|
3fe5166e8688a451 | Saturday, January 21, 2012
Some parallels between classical and quantum mechanics
This isn't really a blog post. More of something I wanted to interject in a discussion on Google plus but wouldn't fit in the text box.
I've always had trouble with the way the Legendre transform is introduced in classical mechanics. I know I'm not the only one. Many mathematicians and physicists have recognised that it seems to be plucked out of a hat like a rabbit and have even written papers to address this issue. But however much an author attempts to make it seem natural, it still looks like a rabbit to me.
So I have to ask myself, what would make me feel comfortable with the Legendre transform?
The Legendre transform is an analogue of the Fourier transform that uses a different semiring to the usual. I wrote briefly about this many years ago. So if we could write classical mechanics in a form that is analogous to another problem where I'd use a Fourier transform, I'd be happier. This is my attempt to do that.
When I wrote about Fourier transforms a little while back the intention was to immediately follow it with an analogous article about Legendre transforms. Unfortunately that's been postponed so I'm going to just assume you know that Legendre transforms can be used to compute inf-convolutions. I'll state clearly what that means below, but I won't show any detail on the analogy with Fourier transforms.
Free classical particles
Let's work in one dimension with a particle of mass $m$ whose position at time $t$ is $x(t)$. The kinetic energy of this particle is given by $\tfrac12 m\dot x^2$. Its Lagrangian is therefore $L(x,\dot x) = \tfrac12 m\dot x^2 - V(x)$.
The action of our particle for the time from $t_0$ to $t_1$ is therefore
$$S[x] = \int_{t_0}^{t_1} L(x(t),\dot x(t))\,dt.$$
The particle motion is that which minimises the action.
Suppose the position of the particle at time $t_0$ is $x_0$ and the position at time $t_1$ is $x_1$. Then write $S_{t_0,t_1}(x_0,x_1)$ for the action of the action-minimising path from $x_0$ to $x_1$. So
$$S_{t_0,t_1}(x_0,x_1) = \min_x \int_{t_0}^{t_1} L(x(t),\dot x(t))\,dt$$
where we're minimising over all paths $x$ such that $x(t_0)=x_0$ and $x(t_1)=x_1$.
Now suppose our system evolves from time $t_0$ to $t_2$. We can consider this to be two stages, one from $t_0$ to $t_1$ followed by one from $t_1$ to $t_2$. Let $S_{t_1,t_2}$ be the minimised action analogous to $S_{t_0,t_1}$ for the period $t_1$ to $t_2$. The action from $t_0$ to $t_2$ is the sum of the actions for the two subperiods. So the minimum total action for the period $t_0$ to $t_2$ is given by
$$S_{t_0,t_2}(x_0,x_2) = \min_{x_1}\left(S_{t_0,t_1}(x_0,x_1) + S_{t_1,t_2}(x_1,x_2)\right).$$
Let me simplify that a little. I'll use $f$ where I previously used $S_{t_0,t_1}$ and $g$ for $S_{t_1,t_2}$. So that last equation becomes:
$$S_{t_0,t_2}(x_0,x_2) = \min_{x_1}\left(f(x_0,x_1) + g(x_1,x_2)\right).$$
Now suppose $f$ is translation-independent in the sense that $f(x_0+a,x_1+a) = f(x_0,x_1)$ for any shift $a$. So we can write $f(x_0,x_1) = F(x_1-x_0)$, and similarly $g(x_1,x_2) = G(x_2-x_1)$. Then the minimum total action is given by
$$S_{t_0,t_2}(x_0,x_2) = \min_{x_1}\left(F(x_1-x_0) + G(x_2-x_1)\right).$$
Infimal convolution is defined by
$$(F \,\square\, G)(u) = \inf_v \left(F(v) + G(u-v)\right)$$
so the minimum we seek is
$$S_{t_0,t_2}(x_0,x_2) = (F \,\square\, G)(x_2-x_0).$$
So now it's natural to use the Legendre transform. We have the inf-convolution theorem:
$$(F \,\square\, G)^* = F^* + G^*$$
where $F^*$ is the Legendre transform of $F$ given by
$$F^*(p) = \sup_u \left(pu - F(u)\right)$$
and so, in the "Legendre domain", inf-convolution becomes ordinary addition (where we use ${}^*$ to represent the Legendre transform with respect to the spatial variable).
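Here is a brute-force numerical check of the inf-convolution theorem on a grid (my own sketch; the particular quadratics $F$ and $G$ are arbitrary choices):

```python
import numpy as np

x = np.linspace(-5, 5, 201)   # spatial grid (x[100] = 0)
p = np.linspace(-3, 3, 121)   # "momentum" grid for the transforms

F = 0.5 * (x - 1)**2
G = 2.0 * (x + 2)**2

def legendre(f, x, p):
    # f*(p) = sup_x (p x - f(x)), brute force on the grid
    return np.max(p[:, None] * x[None, :] - f[None, :], axis=1)

def inf_conv(f, g, x):
    # (f [] g)(x_k) = min_j (f(x_j) + g(x_k - x_j)); on this symmetric
    # uniform grid, x_k - x_j sits at grid index k - j + i0, where i0
    # is the index of x = 0.
    n, i0 = len(x), len(x) // 2
    out = np.empty(n)
    j = np.arange(n)
    for k in range(n):
        m = k - j + i0
        ok = (m >= 0) & (m < n)
        out[k] = np.min(f[j[ok]] + g[m[ok]])
    return out

lhs = legendre(inf_conv(F, G, x), x, p)
rhs = legendre(F, x, p) + legendre(G, x, p)
print(np.max(np.abs(lhs - rhs)))  # small: equal up to grid resolution
```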
Let's consider the case where from $t_0$ onwards the particle motion is free, so $V = 0$. In this case we clearly have translation-invariance and so the time evolution is given by repeated inf-convolution with $F$, and in the "Legendre domain" this is nothing other than repeated addition of $F^*$.
Let's take a look at $F$. We know that if a particle travels freely from $x_0$ to $x_1$ over the period from $t$ to $t+\Delta t$ then it must have followed the minimum action path and we know, from basic mechanics, this is the path with constant velocity. So
$$\dot x = \frac{x_1 - x_0}{\Delta t}$$
and hence the action is given by
$$F(x_1-x_0) = \int_t^{t+\Delta t} \tfrac12 m\dot x^2\,dt = \frac{m(x_1-x_0)^2}{2\Delta t}.$$
So the time evolution of the action is given by repeated inf-convolution with a quadratic function. The time evolution of its Legendre transform is therefore given by repeated addition of the Legendre transform of a quadratic function. It's not hard to prove that the Legendre transform of a quadratic function is also quadratic. In fact:
$$\left(\frac{m u^2}{2\Delta t}\right)^{*}(p) = \frac{\Delta t\, p^2}{2m}.$$
Addition is easier to work with than inf-convolution so if we wish to understand the time evolution of the action function it's natural to work with this Legendre transformed function.
So that's it for classical mechanics in this post. I've tried to look at the evolution of a classical system in a way that makes the Legendre transform natural.
Free quantum particles
Now I want to take a look at the evolution of a free quantum particle to show how similar it is to what I wrote above. In this case we have the Schrödinger equation
$$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi.$$
Let's suppose that from time $t_0$ onwards the particle is free so $V = 0$. Then we have
$$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}.$$
Now let's take the Fourier transform in the spatial variable. We get:
$$i\hbar\frac{\partial\hat\psi}{\partial t}(p,t) = \frac{p^2}{2m}\hat\psi(p,t).$$
We can write this as
$$\hat\psi(p,t+\Delta t) = e^{-ip^2\Delta t/2m\hbar}\,\hat\psi(p,t).$$
So the time evolution of the free quantum particle is given by repeated convolution with a Gaussian function which in the Fourier domain is repeated multiplication by a Gaussian. The classical section above is nothing but a tropical version of this section.
I doubt I've said anything original here. Classical mechanics is well known to be the limit of quantum mechanics as $\hbar \to 0$ and it's well known that in this limit we find that occurrences of the semiring $(\mathbb{R},+,\times)$ are replaced by the semiring $(\mathbb{R}\cup\{+\infty\},\min,+)$. But I've never seen an article that attempts to describe classical mechanics in terms of repeated inf-convolution even though this is close to Hamilton's formulation and I've never seen an article that shows the parallel with the Schrödinger equation in this way. I'm hoping someone will now be able to say to me "I've seen that before" and post a relevant link below.
I'm not sure how the above applies for a non-trivial potential $V$. I wrote this little Schrödinger equation solver a while back. As might be expected, it's inconvenient to use the Fourier domain to deal with the part of the evolution due to $V$. In order to simulate a time step of $\Delta t$ the code simulates $\Delta t$ of free-particle evolution in the Fourier domain and then applies $\Delta t$ of the $V$-dependent evolution in the spatial domain. So even in the presence of non-trivial $V$ it can still be useful to work with a Fourier transform. Almost the same iteration could be used to numerically compute the action for the classical case.
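For concreteness, here is a minimal split-step sketch of that scheme (my reconstruction, not the linked solver; the grid, potential, initial packet, and $\hbar = m = 1$ are arbitrary choices): each step applies the exact free evolution in the Fourier domain, then the potential-only evolution in the spatial domain.

```python
import numpy as np

n, span = 512, 40.0
x = (np.arange(n) - n // 2) * (span / n)
k = 2 * np.pi * np.fft.fftfreq(n, d=span / n)
dt = 0.01
V = 0.5 * x**2                                # e.g., a harmonic potential

psi = np.exp(-(x - 2.0)**2).astype(complex)   # an initial wave packet
psi /= np.linalg.norm(psi)

free_step = np.exp(-1j * k**2 * dt / 2)   # exp(-i k^2 dt / 2m), m = 1
pot_step = np.exp(-1j * V * dt)           # exp(-i V dt)

for _ in range(1000):
    psi = np.fft.ifft(free_step * np.fft.fft(psi))  # free part, Fourier domain
    psi = pot_step * psi                            # V part, spatial domain
```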
1 comment:
John Baez said...
Great blog post! I feel pretty sure this material is known, since there's a long tradition of 'idempotent analysis' in Russia which seeks to treat classical mechanics using linear algebra over the tropical semirig. I've provided a short list of references here. I'm not sure they contain what you want, but they should give a reasonably good picture of the state of the art.
|
5346fd4edaf1c099 | Mathematical Control & Related Fields
June 2018 , Volume 8 , Issue 2
Asymptotic behavior of a Schrödinger equation under a constrained boundary feedback
Haoyue Cui, Dongyi Liu and Genqi Xu
2018, 8(2): 383-395. doi: 10.3934/mcrf.2018015
Design of a controller subject to a constraint for a Schrödinger equation is considered, based on the energy functional of the system. The resulting closed-loop system is thus nonlinear, and its well-posedness is proven by nonlinear monotone operator theory and a complex form of the nonlinear Lax-Milgram theorem. The asymptotic stability and exponential stability of the system are discussed with the LaSalle invariance principle and the Riesz basis method, respectively. In the end, a numerical simulation illustrates the feasibility of the suggested feedback control law.
Compact perturbations of controlled systems
Michel Duprez and Guillaume Olive
2018, 8(2): 397-410. doi: 10.3934/mcrf.2018016
In this article we study the controllability properties of general compactly perturbed exactly controlled linear systems with admissible control operators. Firstly, we show that approximate and exact controllability are equivalent properties for such systems. Then, and more importantly, we provide for the perturbed system a complete characterization of the set of reachable states in terms of the Fattorini-Hautus test. The results rely on the Peetre lemma.
Finite element error analysis for measure-valued optimal control problems governed by a 1D wave equation with variable coefficients
Philip Trautmann, Boris Vexler and Alexander Zlotnik
2018, 8(2): 411-449. doi: 10.3934/mcrf.2018017
This work is concerned with the optimal control problems governed by a 1D wave equation with variable coefficients and the control spaces \(\mathcal M_T\) of either measure-valued functions \(L_{w^*}^2(I, \mathcal M(\Omega))\) or vector measures \(\mathcal M(\Omega, L^2(I))\). The cost functional involves the standard quadratic tracking terms and the regularization term \(\alpha\|u\|_{\mathcal M_T}\) with \(\alpha>0\). We construct and study three-level in time bilinear finite element discretizations for this class of problems. The main focus lies on the derivation of error estimates for the optimal state variable and the error measured in the cost functional. The analysis is mainly based on some previous results of the authors. The numerical results are included.
A second-order stochastic maximum principle for generalized mean-field singular control problem
Hancheng Guo and Jie Xiong
2018, 8(2): 451-473. doi: 10.3934/mcrf.2018018
In this paper, we study the generalized mean-field stochastic control problem when the usual stochastic maximum principle (SMP) is not applicable due to the singularity of the Hamiltonian function. In this case, we derive a second-order SMP. We introduce the adjoint process by the generalized mean-field backward stochastic differential equation. The keys to the proofs are the expansion of the cost functional in terms of a perturbation parameter and the use of the range theorem for vector-valued measures.
Stability and output feedback control for singular Markovian jump delayed systems
Jian Chen, Tao Zhang, Ziye Zhang, Chong Lin and Bing Chen
2018, 8(2): 475-490. doi: 10.3934/mcrf.2018019
This paper is concerned with the admissibility analysis and control synthesis for a class of singular systems with Markovian jumps and time-varying delay. The basic idea is the use of an augmented Lyapunov-Krasovskii functional together with a series of appropriate integral inequalities. Sufficient conditions are established to ensure that the systems are admissible. Moreover, a control design via static output feedback (SOF) is derived to achieve stabilization of the singular systems. A new algorithm is built to solve for the SOF controllers. Examples are given to show the effectiveness of the proposed method.
239cbcab276bf873 |
Chemistry LibreTexts
13.11: Time-Dependent Perturbation Theory
In time-independent perturbation theory the perturbation Hamiltonian is static (i.e., possesses no time dependence). Time-independent perturbation theory was presented by Erwin Schrödinger in a 1926 paper, shortly after he produced his theories in wave mechanics. Time-dependent perturbation theory, developed by Paul Dirac, studies the effect of a time-dependent perturbation V(t) applied to a time-independent Hamiltonian \(H_0\). Since the perturbed Hamiltonian is time-dependent, so are its energy levels and eigenstates. Thus, the goals of time-dependent perturbation theory are slightly different from those of time-independent perturbation theory; one may be interested in the following quantities:
• The time-dependent expectation value of some observable A, for a given initial state.
• The time-dependent amplitudes of those quantum states that are energy eigenstates in the unperturbed system.
The first quantity is important because it gives rise to the classical result of a measurement performed on a macroscopic number of copies of the perturbed system. The second quantity looks at the time-dependent probability of occupation for each eigenstate. This is particularly useful in laser physics, where one is interested in the populations of different atomic states in a gas when a time-dependent electric field is applied. We will briefly examine the method behind Dirac's formulation of time-dependent perturbation theory. Choose an energy basis \(| n \rangle \) for the unperturbed system. (We drop the (0) superscripts for the eigenstates, because it is not useful to speak of energy levels and eigenstates for the perturbed system.)
If the unperturbed system is in eigenstate \(|j \rangle \) at time \(t = 0\), its state at subsequent times varies only by a phase (this is the Schrödinger picture, where state vectors evolve in time and operators are constant)
\[|j(t)\rangle =e^{-iE_{j}t/\hbar }|j\rangle\]
Now, introduce a time-dependent perturbing Hamiltonian \(H_1(t)\). The Hamiltonian of the perturbed system is
\[H = H_0 + H_1(t)\]
Let \(|\psi (t)\rangle \) denote the quantum state of the perturbed system at time \(t\); it obeys the time-dependent Schrödinger equation,
\[H|\psi (t)\rangle =i\hbar {\dfrac {\partial }{\partial t}}|\psi (t)\rangle\]
The quantum state at each instant can be expressed as a linear combination of the complete eigenbasis of \( | n \rangle \):
\[|\psi (t)\rangle =\sum _{n}c_{n}(t)e^{-iE_{n}t/\hbar }|n\rangle\]
where the \(c_n(t)\) coefficients are to be determined complex functions of t which we will refer to as amplitudes
We have explicitly extracted the exponential phase factors \(\exp(-iE_{n}t/\hbar)\) on the right hand side. This is only a matter of convention, and may be done without loss of generality. The reason we go to this trouble is that when the system starts in the state \(|j\rangle \) and no perturbation is present, the amplitudes have the convenient property that, for all \(t\), \(c_j(t) = 1\) and \(c_n(t) = 0\) if \(n \neq j\).
The square of the absolute amplitude \(c_n(t)\) is the probability that the system is in state \(n\) at time \(t\), since
\[|c_{n}(t)|^{2} = |\langle n|\psi (t)\rangle |^{2}~.\]
Plugging into the Schrödinger equation and using the fact that \(\partial/ \partial t\) acts by a chain rule, one obtains
\[\sum _{n}\left(i\hbar {\dfrac {\partial c_{n}}{\partial t}}-c_{n}(t)H_1(t)\right)e^{-iE_{n}t/\hbar }|n\rangle =0~.\]
By resolving the identity in front of \(H_1\), this can be reduced to a set of coupled differential equations for the amplitudes,
\[{\dfrac {\partial c_{n}}{\partial t}}={\dfrac {-i}{\hbar }}\sum _{k}\langle n|H_1(t)|k\rangle \,c_{k}(t)\,e^{-i(E_{k}-E_{n})t/\hbar }~.\]
The matrix elements of \(H_1\) play a similar role as in time-independent perturbation theory, being proportional to the rate at which amplitudes are shifted between states. Note, however, that the direction of the shift is modified by the exponential phase factor. Over times much longer than \(\hbar/(E_{k} - E_{n})\), the phase winds around 0 several times. If the time-dependence of \(H_1\) is sufficiently slow, this may cause the state amplitudes to oscillate (e.g., such oscillations are useful for managing radiative transitions in a laser).
Two-Level System
Consider the two level system (i.e. \(n=1,2\))
\[ | \psi \rangle = \sum_ {n=1,2} c_n(t) | n \rangle_o\]
The solution of time-dependent perturbation theory for the two-level system is:
\[ i \hbar \dfrac{\partial c_1(t)}{\partial t} = c_1(t) H_{11}(t) + c_2 e^{-i \omega_o t} H_{12} (t)\]
\[ i \hbar \dfrac{\partial c_2(t)}{\partial t} = c_2(t) H_{22}(t) + c_1 e^{+i \omega_o t} H_{21} (t)\]
where the matrix elements of the perturbation (in terms of the eigenstates of \(H_0\)) are
\[ \langle m | H_1(t) | n \rangle = H_{mn}(t)\]
Assume initial state is \(n=1\), and \(H_{11}=H_{22}=0\)
\[ | \psi (t=0) \rangle = |1 \rangle\]
The amplitude for finding the particle in state \(n=2\) at time \(t\) after the perturbation is turned on (i.e., incident light) is:
\[ c_2(t) = \dfrac{-i}{\hbar} \int_0^t e^{i \omega_o t'}\, H_{21}(t')\, dt' \label{EQ1}\]
where the perturbation due to the incident light has the form
\[H_1(t) = \cos (\omega t)\, V(r)\]
Here \(V(r)\) is an amplitude involving the polarization vector, which we can ignore for now.
If we assume the frequency of the incident light \(\omega\) is comparable to the natural frequency of oscillation \(\omega _o\),
\[\omega \approx \omega_o\]
then Equation \(\ref{EQ1}\) can be simplified to
\[c_2(t) = - \dfrac{i}{\hbar} \dfrac{\sin [(\omega-\omega_o)t /2]}{ \omega-\omega_o }\, e^{i (\omega_o-\omega)t/2} H_{21}\]
Transition Probability
Assume initial state is \(n=1\), and probability of transition from \(n=1\) state to \(n=2\) state is:
\[ P_{12}(t) = | c_2(t) |^2 =\dfrac{1}{\hbar^2} \left| \dfrac{\sin [(\omega-\omega_o)t /2]}{ \omega-\omega_o } \right|^2 | H_{21} |^2\]
What does this mean? Strangely, it means that the probability of making a transition actually oscillates sinusoidally (squared)! If you want to cause a transition, you should turn off the perturbation after time \(\pi / |\omega- \omega_o| \), or some odd multiple of it, when the system is in the upper state with maximum probability.
\(P_{12}(t)\) is peaked at \( \omega- \omega_o = 0\). The peak, of height \( |H_{21}t/2 \hbar |^2\) and width \(4\pi/t\), gets higher and narrower as time goes on. Recall, however, that this is a perturbative treatment: \(P_{12}(t)\) cannot exceed 1, so perturbation theory eventually breaks down.
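As a sanity check on the formulas above, one can integrate the two amplitude equations directly and compare \(|c_2(t)|^2\) with the perturbative result. The following sketch is illustrative only; the values of \(\omega_o\), \(\omega\) and the coupling are arbitrary choices, not from the text.

```python
import numpy as np

hbar = 1.0
w0, w, H21 = 1.0, 1.1, 0.02      # splitting, drive frequency, coupling (arbitrary)
dt, steps = 0.001, 20000

c1, c2 = 1.0 + 0j, 0.0 + 0j      # start in state |1>, with H11 = H22 = 0
for n in range(steps):
    t = n * dt
    h = H21 * np.cos(w * t)      # harmonic perturbation matrix element
    dc1 = (-1j / hbar) * h * np.exp(-1j * w0 * t) * c2   # amplitude equations
    dc2 = (-1j / hbar) * h * np.exp(+1j * w0 * t) * c1
    c1, c2 = c1 + dc1 * dt, c2 + dc2 * dt

t = steps * dt
pert = (abs(H21) / hbar)**2 * (np.sin((w - w0) * t / 2) / (w - w0))**2
print(abs(c2)**2, pert)          # should agree closely for weak coupling
```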
Contributors and Attributions
• Wikipedia
5abb99df1d9c2a7b | Towards Reconciliation of Biblical and Cosmological Ages of the Universe[1]
Alexander Poltorak
Two opposite views of the age of the universe are considered. According to the traditional Jewish calendar based on the Talmud, the age of the universe is less than six thousand years. The cosmological models of the universe, supported by abundant empirical data, place the age of the universe in the twelve-billion-year range. A critical examination of both views is presented in the first part of the paper. In the second part, we consider the quantum-mechanical state of matter before and after the introduction of a conscious observer. The role of the observer's free will is examined. Definitions of physical and proto-physical states of matter are proposed. It is suggested that the creation of the first conscious being with free will leads to the collapse of the global quantum wavefunction, thereby bringing the world from a proto-physical to a physical state. We propose that the total cosmological age of the universe is comprised of two periods: a proto-physical period on the order of twelve billion years, and a physical period no longer than the age of the conscious human observer. This thesis is used to reconcile the biblical and scientific views on the age of the universe. The conclusion is analyzed within the framework of classical Jewish thought.
1. Cosmological Age of the Universe
Contemporary science places the age of the universe in the twelve billion years range, give or take a billion years. This number is derived from both theoretical models as well as experimental data. Let us first briefly consider the theoretical foundations of modern cosmology.
1.1. Theoretical models
Modern cosmology is based on the theoretical foundation of Einstein’s General Theory of Relativity (GR).[[1]] As Albert Einstein stated in 1942, “It is impossible to achieve any reliable theoretical results in cosmology outside of the principles of General Theory of Relativity.”
General Relativity
The main equation of GR is
G = 8πT (1a)
R_ik – ½R g_ik = 8πT_ik (1b)
Let us consider a simple cosmological model based on GR. For this purpose the following assumptions are made:
(a) Homogeneous density. Let us assume that the stars are dispersed in the cosmos like dust with a constant average density of mass-energy r.
(b) Homogeneous and isotropic geometry. Let us also assume that the curvature of the space-time is constant throughout the universe.
(c) Geometry is closed. Let us further assume that the universe is closed, as the boundary conditions for the Einstein field equations.
A three-dimensional sphere satisfies all three conditions above. The space-time geometry of such a sphere is described by the following metric:
ds² = –dt² + a²(t) [dχ² + sin²χ (dθ² + sin²θ dφ²)] (2)
The Einstein field equation (1) for this metric is rather simple:
(6/a²)(da/dt)² + 6/a² = 16πρ (3)
The first term in this equation is called the "second invariant of external curvature of the space section" of the 4-geometry, which shows the rate of expansion of all linear dimensions with time. The second term is the "internal invariant of 3-dimensional curvature of the space section," taken at a given moment in time.
The total “mass” of the universe is
M = ρ · 2π²a³ (4)
And the maximum radius of the universe is
a_max = 4M/3π (5)
The field equation (3) now takes a simple form:
(da/dt)² – a_max/a = –1 (6)
The first term of this equation is analogous to kinetic energy and the second to potential energy. It now becomes obvious that the expanding universe cannot expand beyond the maximum radius a_max, because that would render the kinetic energy of expansion negative, which, of course, is impossible. We see that the universe begins to expand from a very small radius a with an ever-slowing rate of expansion until it stops at the maximum radius a_max and begins to collapse back to its original state. This is a very simple cosmological model of a closed universe, which begins its evolution with a Big Bang and ends in a Big Crunch.
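For illustration (this sketch is ours, not from the original paper), equation (6) can be integrated numerically by differentiating it once, which gives d²a/dt² = –a_max/(2a²); the run below reproduces the expansion to a_max followed by recollapse, with a total lifetime close to the analytic value πa_max.

```python
import numpy as np

# Closed Friedmann model: (da/dt)^2 = a_max/a - 1  =>  d2a/dt2 = -a_max/(2 a^2).
a_max, dt = 1.0, 1e-4
a = 0.01                              # small initial radius (illustrative)
v = np.sqrt(a_max / a - 1.0)          # initial expansion rate from eq. (6)
t, peak = 0.0, 0.0
while a > 0.005:                      # stop near the Big Crunch
    v += -a_max / (2.0 * a * a) * dt  # deceleration from self-gravity
    a += v * dt
    t += dt
    peak = max(peak, a)
print(f"max radius {peak:.3f}, lifetime {t:.3f} (theory: pi*a_max = {np.pi:.3f})")
```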
The astonishing prediction of General Relativity that our universe was expanding was very disconcerting to Albert Einstein. In order to do away with this supposedly "erroneous" result, Einstein proposed an ad hoc cosmological constant as an additional term in the GR field equation. When Hubble proved experimentally in 1929 that the universe was indeed expanding [[2]], Einstein admitted that the addition of the cosmological term was the biggest mistake of his life. It is interesting to note that two years ago new experimental data obtained from the Hubble telescope demonstrated that the universe is expanding at an accelerated pace. This fact rekindled interest among cosmologists in the cosmological term, which represents a mysterious repelling anti-gravity force permeating even empty space. The nature of this force is now a subject of much speculation.
The ratio of the speed of expansion to distance is called the Hubble constant:
H0 = (speed of expansion)/(distance to the galaxy) = (da/dt)/a (7)
The Hubble constant is measured in kilometers per second (km/sec) per megaparsec (Mpc). The observable galaxies provide us with the distance, and their rate of recession thereby allows us to calculate the Hubble constant, which is approximately 55 km/sec per Mpc. The inverse Hubble constant, H_0⁻¹, is called the Hubble time, and it is found to be approximately 18 billion years:
T_H = H_0⁻¹ ~ 18 · 10⁹ years (8)
The Hubble time is the time required to reach the presently observed distances between galaxies, assuming that the speed of expansion was constant from the time of the Big Bang. The Hubble time is approximately 1.5 times the cosmological age of the universe, which is therefore in the range of twelve billion years:
T_U ~ 12 · 10⁹ years (9)
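The arithmetic behind equations (8) and (9) is a one-line unit conversion; the following small check (ours, using the 55 km/sec per Mpc value quoted above) recovers the 18-billion-year Hubble time.

```python
# Hubble time T_H = 1/H0 for H0 = 55 km/sec per Mpc (the value quoted above).
km_per_mpc = 3.0857e19        # kilometers in one megaparsec
sec_per_year = 3.156e7

H0 = 55.0 / km_per_mpc                      # Hubble constant in 1/sec
hubble_time = 1.0 / H0 / sec_per_year       # in years
print(f"{hubble_time:.2e} years")           # ~1.8e10, i.e. about 18 billion
```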
The Cosmological Models

Curvature of the Space | Cosmological Term Λ | Evolution
Hyperbolic; K₀ < 0 | Λ < 0 | The universe evolves from the Big Bang, expanding until maximum density and then beginning an ever-accelerating contraction into the Big Crunch
Hyperbolic; K₀ < 0 | Λ = 0 | The universe evolves from the Big Bang, expanding into a flat Minkowski space as the rate of expansion becomes constant
Closed; K₀ > 0 | Λ ≤ 0 | Friedmann cosmology: expansion from the Big Bang, followed by collapse into the Big Crunch
Closed; K₀ > 0 | Λ > Λcrit | The universe evolving from the Big Bang slows down its expansion rate until almost standing still, then begins to accelerate the expansion exponentially
Closed; K₀ > 0 | Λ = Λcrit | Einstein cosmology: the universe evolving from the Big Bang asymptotically approaches the maximum radius, where it becomes static. This cosmology is unstable and contradicts the experimental data
Closed; K₀ > 0 | 0 < Λ < Λcrit | An infinitely large universe contracting exponentially until reaching a minimal radius, then beginning exponential expansion into infinity
Simply speaking, the universe can be either closed like a hypersphere, open like a saddle, or flat. The most recent experimental data seem to support a flat universe. However, instead of a slowing rate of expansion, the rate appears to be accelerating. This fact has led to a recent resurrection of interest in the cosmological constant.
Big Bang
Extrapolating the expanding universe backwards, one arrives at the initial point where the entire, infinitely dense universe was contained in one point, a singularity. The evolution of the universe, according to this theory, called the Big Bang, begins from one singularity point, infinitely dense and infinitely hot; a point at which the concepts of space and time do not yet exist. An inexplicable, ineffable explosion, the causes of which are beyond the limits of scientific inquiry, created space, time and matter in the first moment after the Big Bang.
The cosmology describes the primordial chronology as follows. The Big Bang created a dot of space approximately 10⁻³³ cm in size. The first moment we can speak of is about 10⁻⁴³ sec. Before this Planck time interval we can no longer speak of time as we know it. At this point in time all four fundamental forces of nature (gravitation, electromagnetism, and the strong and weak nuclear forces) were combined in one "super force"[[3]]. The quarks begin to bond into protons and neutrons, along with photons, positrons and neutrinos and their antiparticles. The density of the universe at this point is estimated to have been 10⁹⁴ g/cm³, much of it being radiation. This fireball continued to expand at astonishing speed, many times the speed of light, to the size of a pinhead, an apple, a ball. One millisecond after the explosion, the universe was a fireball 30 million times hotter than the surface of the sun and 50 million times denser than lead. During what is known as the inflationary epoch, the universe doubled in size one hundred times in less than one millisecond, from the size of an atomic nucleus to 10³⁵ meters in diameter. The isotropic expansion of the universe, when it was perfectly smooth, ended at 10⁻³⁵ sec. A small fluctuation of the density at this point is thought to have led to the creation of galaxies.[[4]]
When the universe aged to one hundredth of a second, the temperature dropped to 10¹³ K, and the electromagnetic, strong and weak nuclear interactions split off from the "super force". Because of the continuous annihilation of particles and antiparticles, matter was not yet stable, unable to survive for more than a few nanoseconds. Light was not yet visible, being trapped in the dense energy ball. This is called the "Epoch of Last Scattering".
One second after the Big Bang, the universe had expanded to a size of 20 light-years. The temperature cooled to ten billion degrees. After three minutes, when the temperature had cooled to one billion degrees, nucleosynthesis first began to take place.
The next important stage in the expansion occurred around thirty minutes later when creation of photons increased through annihilation of electron-positron pairs.
For the next 300,000 years the universe continued expanding while cooling to 10,000 K. It was then that the first helium atoms are thought to have been born. At this point, as the density decreased, light began to be visible. From this point on, the universe has been expanding at an apparently accelerating pace up until the present time.
In 1980, Dr. Alan Guth of MIT proposed an inflation theory to explain the initial explosion of the singularity – the Big Bang. This inflation theory seems to be well supported by the most recent experiments measuring the size of the ripples in the background microwave radiation.
1.2. Experimental Data
Light from Distant Stars
It takes eight minutes for light to travel from the sun to the earth. Knowing the velocity of light and the distance to the stars, it is easy to calculate that it takes many millions of years for the light of distant stars to reach earth. By measuring the position of a star at different times of year, astronomers can see the apparent motion of this star relative to more distant stars, and this information can be used to calculate the distance to the nearby star. Measuring the distances to nearby stars is the first step towards measuring distances to very remote objects, and ultimately in determining the distances to the most remote objects in the universe.
Astronomers rely on a stacked set of yardsticks of different lengths to measure distances to stars and galaxies. Each yardstick in the set is measured against, or calibrated to, the previous one. The most accurate yardsticks in this system are parallax measurements. Measurements across great cosmic distances run into an inherent problem: one must distinguish between a far, bright object and a nearby faint one. For this purpose astronomers use extremely bright objects, such as supernovae, as "standard candles".
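The first rung of that ladder is simple geometry: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. A quick illustrative computation (ours; the parallax value is the well-known figure for Proxima Centauri, discussed next) reproduces the distance quoted below.

```python
# Trigonometric parallax: d [parsec] = 1 / p [arcsec]; 1 parsec ~ 3.26 light-years.
p_arcsec = 0.768                            # Proxima Centauri's parallax (approx.)
d_lightyears = (1.0 / p_arcsec) * 3.26
print(f"{d_lightyears:.2f} light-years")    # about 4.2, matching the text below
```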
Let us consider several examples for illustration purposes. The star closest to us, Proxima Centauri from the Alpha Centauri system, is 4.22 light-years away from us, which means that it takes 4.22 years for the light of Proxima to reach us on earth.
Eta Carinae in our galaxy is more than 8,000 light-years away. Estimated to be 100 times more massive than our Sun, Eta Carinae may be one of the most massive stars in our Galaxy. Eta Carinae was observed by Hubble in September 1995.
Trifid Nebula, in the constellation Sagittarius, is located about 9,000 light-years from Earth.
The galaxy M100 (100th object in the Messier Catalog of non-stellar objects) is one of the brightest members of the Virgo Cluster of galaxies. The galaxy is estimated to be tens of millions of light-years away. One of the prime goals of the Hubble Space Telescope has been the detection of Cepheid variable stars in distant galaxies. Before HST Cepheids had only been detected in very nearby galaxies, out to about 12 million light years. A team led by Dr. Wendy Freedman of the Carnegie Observatories has detected the furthest Cepheids yet in the Virgo Cluster spiral M100 at a distance of about 50 million light-years.
Sextans A, a small galaxy beyond the Milky Way, is about 10 million light-years distant.
A pair of star clusters lies 166,000 light-years away in the Large Magellanic Cloud (LMC), in the southern constellation Doradus. About 60 percent of the stars belong to the dense cluster called NGC 1850, which is estimated to be 50 million years old. A loose distribution of extremely hot, massive stars in the same region is only about 4 million years old and represents about 20 percent of the stars in the image.
A rare and spectacular head-on collision between two galaxies appears in this Hubble Space Telescope true-color image of the Cartwheel Galaxy, located 500 million light-years away in the constellation Sculptor.
Hubble astronomers conducting research on a class of galaxies called ultra-luminous infrared galaxies (ULIRG), within 3 billion light-years of Earth, have discovered that over two dozen of these are found within “nests” of galaxies, apparently engaged in multiple collisions that lead to fiery pile-ups of three, four or even five galaxies smashing together.
In other words, the stars we see in the night sky are not the stars as they exist now but the stars as they existed up to billions of years ago. Many of them are long gone, having exploded into supernovae, collapsed into black holes or simply burned out. The light from supernova 1987A in the Large Magellanic Cloud, for example, which exploded 169,000 years ago, has only recently reached earth. Before the explosion, this supernova was first a red giant and then a blue giant star. The incontestable fact that we see stars as they were billions of years ago is the simplest and most direct argument for the age of the universe being at least as old as the oldest star.
Expanding Universe
In 1929 American astronomer Edwin P. Hubble discovered that the galaxies were moving apart in all directions. He had earlier discovered the red shift in the spectrum of light emitted by remote stars. The shift in wave frequency is usually associated with the Doppler effect. Observing two dozen galaxies 10⁶ light-years away and gauging their distance by their brilliance, Hubble discovered that more distant galaxies were racing away from the Earth faster than the brighter ones closer to us. He further discovered that the rate at which galaxies were racing away from Earth was proportional to their distance from Earth. This led to the conception of the expanding universe. According to the Friedmann cosmological model, the universe is expanding like a 3-dimensional balloon blown up in an imaginary 4-dimensional space.
The study of several dozen supernovae four to seven billion light-years away demonstrated that the explosions were about 25% dimmer than expected. This suggested that the universe was expanding slower in the past than it is now and that, therefore, it took longer for the universe to reach its present stage. Thus, an accelerated expansion suggests an older age of the universe.
Cosmic Background Radiation
Initially, matter and radiation were in thermal equilibrium. The energy released as the radiation cooled must have obeyed the laws of black-body statistics. If the temperature of this relic radiation, called the cosmic microwave background radiation (CBR), can be measured now, one can calculate the original temperature and vice versa. Based on the theory of the Big Bang, this temperature was predicted to be around 2.7 K. In 1965 Arno Penzias and Robert Wilson discovered uniform and isotropic relic radiation having this temperature, for which they received a Nobel Prize. As was later confirmed by NASA's Cosmic Background Explorer (COBE), this discovery was the first sound experimental validation of the Big Bang theory.
The CBR has ripples, minor fluctuations, which allow astronomers to use them as yardsticks to measure the cosmos. The size of the ripples, as measured recently by three teams at Caltech, Princeton and Berkeley, turned out to be approximately one degree on the sky, or twice the size of the Moon as seen from Earth. The size of the ripples is an indication of the geometry of space, which in turn is determined by the density of mass in the universe. Ripples of one degree are indicative of a flat universe, as predicted by the inflation theory of Dr. Guth. This discovery once again brought into focus the mysterious cosmological constant, introduced and then abandoned by Einstein.
As recently as five years ago, the experimental data were insufficient to predict the age of the universe more accurately than within the range of 10 to 20 billion years. In 1994 a team led by Wendy Freedman from the Carnegie Observatories in Pasadena, Calif., suggested that the universe was much younger, between 8 and 12 billion years old. This finding suggested that the universe may be younger than some of its oldest stars. A rival group led by Allan Sandage, interpreting the same data, defended an older universe. Both groups now converge on the number 12 billion years, which seems to be the consensus for the age of the universe among scientists today.
Geological Age of the Earth
Even though the dating of fossils and geological strata lies outside the scope of this paper, we mention here in passing that the geological age of our own planet further exacerbates the problem.
Carbon dating techniques using the carbon-14 (C-14) isotope, further corroborated by other methods such as uranium-thorium radioactive decay, place the age of the earth well beyond the biblical age.
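The principle behind such radiometric methods is a one-line formula: from the measured fraction of the parent isotope remaining and its known half-life, the elapsed time is t = t_half · log2(N0/N). A small illustrative computation (ours; the 25% figure is a made-up example):

```python
import math

# Radiometric age from the fraction of parent isotope remaining:
#   t = t_half * log2(N0 / N)
t_half_c14 = 5730.0           # carbon-14 half-life in years
fraction_left = 0.25          # hypothetical measured fraction
age = t_half_c14 * math.log2(1.0 / fraction_left)
print(f"{age:.0f} years")     # 25% left -> two half-lives -> 11460 years
```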
To summarize, there are compelling theoretical considerations, fully corroborated by the available experimental data, establishing that the ages of our planet, other stars and the entire universe are well beyond the apparent biblical age of less than six thousand years. Amounting to billions of years, this discrepancy is so enormous that no amount of criticism of the scientific methods and assumptions used to arrive at these numbers is going to reconcile it. Even if the scientists were overestimating the age of the universe by 50%, which is highly unlikely, Torah and science would still be six billion years apart! Thus, we find such criticism unproductive, and we shall look for a solution to this problem elsewhere.
2. Torah View
2.1. The Jewish Calendar
According to the traditional Jewish calendar, we now live in the year 5760. The implication of this number is that, from the traditional Jewish point of view, the world seems to be no older than six thousand years. First of all, let us note that the popular misconception that the Torah begins counting the calendar from the beginning of the creation of the world has no basis. In fact, the calendar begins with the creation of Adam, the first man. Thus, when we say that, according to Jewish tradition, today is, for example, five thousand seven hundred and sixty years, three months and five days, it is counted from the date the first man was created and not from the date the world was created.
3. Previous Attempts to Reconcile the Conflict
In his book “Immortality, Resurrection, and the Age of the Universe: A Kabbalistic View”[5], the late Rabbi Aryeh Kaplan presents an excellent overview of the various attitudes towards this problem and attempts to resolve it. In summary, these attitudes may be categorized as follows:
Six Days as Six Epochs | Each day represents an entire epoch billions of years long | This interpretation of the biblical text is far from the literal meaning and is not based on any classical commentaries
Dismissal | If G‑d created the first man fully grown, He could have created a "mature" universe which was already billions of years old at the point of creation | An irrefutable and, therefore, unscientific approach
Sabbatical Cycles | Based on the concept of cosmic sabbatical cycles, the world was 15 billion years old when the first man was created | A significant but not widely accepted view expressed by some important kabbalists almost two thousand years ago
To this we may add another, more recent approach, expressed by Gerald Schroeder, that attempts to explain the difference in ages by means of gravitational time dilation.[6]
3.1. Sabbatical Cycles
Of most interest for our discussion is the kabbalistic approach of sabbatical cycles expounded by R. Kaplan. This section closely follows R. Kaplan's exposition of this approach. The idea of sabbatical cycles is based on an esoteric interpretation of several scriptural and talmudic sayings. According to the Talmud, the world will exist for seven thousand years, and in the [end of the] seventh millennium it will be destroyed.[7]
According to the Talmudic sage and great first-century kabbalist Rabbi Nehunya ben HaKanah, as expressed in his important work Sefer HaTemunah, this seven-thousand-year period is only one cycle out of a total of seven. This idea is based on the biblical concept of a Jubilee, which consists of seven sabbatical (seven-year) cycles. This leads to forty-nine thousand years as the total age of the universe. According to many later kabbalists, the present cycle is the last of the seven and, therefore, when Adam, the first man, was created, the world was forty-two thousand years old.
This approach is alluded to in some midrashic sources. Thus Midrash Rabbah, on the verse "It was evening and it was morning, one day" (Genesis 1:5), states, "This teaches that there were orders of time before this." Another Midrash, which teaches that "G‑d created universes and destroyed them"[8], seems to support the concept of sabbatical cycles, as is explained in another kabbalistic treatise, Ma'arekheth Elokuth. Interestingly, the Talmud states that there were 974 generations before Adam.[9] The idea of sabbatical cycles was expressed and elaborated in the works of such sages of Jewish philosophy and kabbalah as Bahya, Ziyoni and Recanati, and in Sefer HaChinukh's commentary on Leviticus 25:8. The idea is also alluded to by Yehuda HaLevy[10] and in the commentaries of Nachmanides on Genesis 2:3 and Ibn Ezra on Genesis 8:22.
Rabbi Kaplan's discovery of a little-known commentary by Rabbi Isaac of Akko sheds entirely new light on the concept of sabbatical cycles. Commenting on the verse, "A thousand years in Your sight are as a day" (Psalms 90:4), midrashic sources have stated that one divine day is equal to a thousand terrestrial years. In his kabbalistic treatise, Otzar HaHayim, Rabbi Isaac of Akko states that the first six sabbatical cycles are counted in divine, not human, years. If a divine day is a thousand years, then a divine year, equal to 365¼ divine days, is 365,250 terrestrial years. If we multiply this number by the forty-two thousand years comprising the first six cycles before Adam, we get fifteen billion, three hundred forty and a half million (15,340,500,000) years. Thus, according to one of the greatest Talmudic sages of the first century, R. Nehunya ben HaKanah, as explained by a prominent kabbalist of the 13th century, R. Isaac of Akko, at the time Adam was created the universe was already more than fifteen billion years old: a number very closely correlated with current estimates of the cosmological age of the universe!
We must note, however, that this approach was strongly contested by Isaac Luria, the holy lion (Ari), who is considered by many the greatest kabbalist of all time. Ari maintained that the previous sabbatical cycles did not exist on the terrestrial plane but were purely spiritual worlds. Most of the later kabbalists, with rare exceptions, accepted the opinion of Ari. Apparently there was a difference of opinion between these two (pre- and post-Lurianic) schools of kabbalah as to whether the first phase of creation, which stretched for fifteen billion years, took place in a physical or a spiritual universe.
4. Quantum reality
4.1. Particle-Wave Dualism
In 1923, Louis de Broglie suggested that every particle has a wavelength associated with it:
λ = h/p (10)
where p is the momentum of the particle and h is the Planck constant.[11]
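For a sense of scale (an illustrative computation of ours, not from the paper), equation (10) gives the wavelength of an electron moving at a typical non-relativistic speed:

```python
# De Broglie wavelength: lambda = h / p, for an electron at 1e6 m/s.
h = 6.626e-34                 # Planck constant, J*s
m_e = 9.109e-31               # electron mass, kg
v = 1.0e6                     # illustrative speed, m/s
wavelength = h / (m_e * v)
print(f"{wavelength:.2e} m")  # about 7.3e-10 m, i.e. roughly atomic scale
```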
In 1926, Erwin Schrödinger formulated his famous equation[12]
–(ħ²/2m)∇²ψ + Vψ = Eψ (11)
where V is the potential energy of the particle, E is its total energy, and ψ is the wavefunction that describes the quantum-mechanical state of the particle.
4.2. Wave Function
What is the wavefunction? Attempts by Schrödinger and others to interpret it as the scalar potential of some physical field were not successful. In 1926, Max Born noticed that the square of the amplitude of the particle's wavefunction in a given region gives the probability of finding the particle in this region of configuration space. He suggested that the wavefunction represented not a physical reality but rather our knowledge of the quantum state of an object.
The wavefunction represents our knowledge of all possible quantum-mechanical states of an object. In other words, the quantum-mechanical state of a physical system is a linear superposition of all possible states of this system. Thus, for example, the state vector for a left circularly polarized photon |ψL> is a linear superposition of the vertical and horizontal eigenstates
|ψL> = (1/√2)(|ψv> + i|ψh>) (12)
When a left circularly polarized photon goes through a calcite crystal, it is detected to be in either the vertical or the horizontal polarization state. At the moment of the measurement, the state vector |ψL>, being a superposition of two possibilities |ψv> and |ψh>, is suddenly reduced to one actuality: either |ψv> or |ψh>. This is called the collapse of the wavefunction. What actually happens during the collapse of the wavefunction is that the previously amorphous reality, existing in an undetermined state of various possibilities, suddenly comes into physical reality in one particular state (eigenstate).
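The statistics of this collapse are easy to simulate: the Born rule assigns each outcome the squared magnitude of its amplitude. The sketch below (ours; a textbook projective measurement, nothing specific to calcite) shows that the state in equation (12) yields each polarization half the time.

```python
import numpy as np

# |psiL> = (|psiv> + i|psih>) / sqrt(2); Born rule: P = |amplitude|^2.
amp_v = 1.0 / np.sqrt(2.0)
amp_h = 1.0j / np.sqrt(2.0)
p_v, p_h = abs(amp_v)**2, abs(amp_h)**2     # each equals 1/2

rng = np.random.default_rng(0)
outcomes = rng.random(100_000) < p_v        # each photon collapses to v or h
print(p_v, p_h, outcomes.mean())            # simulated frequency is close to 0.5
```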
The irreversible (time-asymmetric) collapse of the wavefunction does not follow from the Schrödinger equation.
5. Introduction of an Observer [[13]]
The collapse of the wavefunction is a serious problem in quantum theory. The trouble is that it doesn't follow from the Schrödinger equation. Let us consider an experiment in which we collide one elementary particle with another to measure its momentum. Such an experiment is an interaction of two subatomic particles and should obey the Schrödinger equation. However, as we said before, the Schrödinger equation does not lead to a collapse of the wavefunction, which is a necessary result of any experiment. So, what then causes the collapse of the wavefunction?
To resolve this paradox, the Copenhagen interpretation of quantum mechanics proposed to attribute the collapse of the wavefunction to the interaction of a microscopic particle with a macroscopic measurement apparatus. Since a macroscopic object behaves according to classical Newtonian physics and is not described by a wavefunction, it was thought to cause the collapse of the wavefunction of the microscopic object under measurement. The apparent difficulty with such an explanation is that there is no reason why a macroscopic object should not obey the Schrödinger equation. Indeed, any macroscopic object is composed of microscopic molecules and atoms, which do obey the laws of quantum physics.
This situation leads to absurdity, as clearly demonstrated by the Schrödinger Cat gedanken experiment. Suppose one places a cat in a closed steel chamber, together with a Geiger tube containing some radioactive material, a hammer connected to the Geiger tube, and a phial of prussic acid. From the amount of the radioactive material and its half-life we calculate that there is a 50% chance that within one hour one atom will decay. If an atom decays, the Geiger counter is triggered and causes the hammer to break the phial of prussic acid, which kills the cat. Prior to the measurement, the state vector of the atom is a linear superposition of two possibilities: a decayed and a not-decayed atom. Accordingly, the state vector of the cat is also a linear superposition of two physical possibilities: the cat is alive and the cat is dead. In other words, before the measurement takes place the cat is dead and alive at the same time! To be more precise, the cat is neither alive nor dead but is in a state which is a blurred combination of both possible states.
5.1. Role of a Conscious Observer
In 1932 the mathematician von Neumann published his famous work, Mathematical Foundations of Quantum Mechanics[14], in which he first clearly demonstrated the discrepancy between the continuous, time-symmetrical evolution of the wavefunction under the Schrödinger equation and the discontinuous, time-asymmetrical (irreversible) event of measurement. In this book von Neumann made a startling suggestion: it must be a conscious observer who causes the wavefunction to collapse. The reason for this is that consciousness is the only element present in the quantum-mechanical measurement process which is not time-symmetrical and is not required to obey the laws of quantum mechanics. In other words, von Neumann replaced the dualism of macroscopic-microscopic worlds with a mind-matter dualism. While the former is easily critiqued, the latter is immune to criticism, because whatever we mean by the word consciousness, it does not have to obey the Schrödinger equation.
Since the mind is to a large degree the product of the biochemistry of the brain, once we distill that level of the mind which is no longer vested in a physical brain and is not a product of biochemical reactions, such a non-physical mind is what some have called the human soul, or, more specifically, the intellectual faculty of the soul. Hence, von Neumann's approach to the collapse of the wavefunction leads us to a classical Cartesian body-soul dualism.
In 1961, Eugene Wigner revisited the hypothesis of the conscious observer.[15] He posed a question: whose mind exactly collapses the wavefunction? Consider a gedanken experiment in which an observer relegates the measurement to his assistant and leaves the room. After his return he inquires about the result of the measurement. Until he learns of the result, as far as he is concerned, the state of the quantum-mechanical system under observation is a linear superposition of all possible eigenstates. However, when he asks his assistant whether he knew the result of the experiment definitively, the assistant answers that of course he did. This led Wigner to conclude that it is the very first conscious observer who collapses the wavefunction.
One can ask at this point: what level of consciousness must an observer possess in order to collapse the wavefunction? Is the cat in the Schrödinger experiment a conscious enough creature to collapse the wavefunction and thereby escape the inconvenience of being dead and alive at the same time?
What about the omniscient G-d? If G‑d knows the eigenstate of all wavefunctions, doesn't He immediately collapse them all?! This question is closely related to the paradox of free will. If G‑d knows everything, and His knowledge must be absolute and true, then how can anybody possess free will? Doesn't G‑d force us into acting in a certain way simply by virtue of Him knowing that we were going to act this way? One possible answer to this paradox, as given in Chasidic philosophy, is that G-d indeed knows everything but keeps His knowledge to Himself, without affecting the actions and decisions of His creations. One may try to apply a similar rationale here and suggest that, perhaps, G‑d's knowledge in some peculiar way does not automatically collapse all the wavefunctions of the universe. Alternatively, one may say that the global collapse of the world wavefunction caused by G‑d's ultimate knowledge further underlines the paradox of free will.
6. Resolution of the Conflict
Putting aside any discussion of the merits of the von Neumann-Wigner hypothesis of the conscious observer, accepting this hypothesis for the time being will allow us to resolve the discrepancy between the biblical and cosmological ages of the universe.
Let us consider the wavefunction ψ0 at the initial moment of time t0, describing all possible eigenstates of the singularity from which, according to the Big Bang theory, the universe is about to be born. One of the eigenstates of this wavefunction is |ψ+>, which represents the possibility of the explosion that we call the Big Bang. Another eigenstate, |ψ->, represents the alternative: no Big Bang. The state vector of the universe in this instance is a linear superposition of both eigenstates: to be or not to be. Even though the probability of the Big Bang and the subsequent birth of the universe is greater than zero, the state vector of the universe will remain such a superposition of existence and non-existence for billions of years, until such time as a conscious observer enters the scene and collapses the world wavefunction, thereby realizing the one and only eigenstate of the universe corresponding to its existence. The universe is like a giant Schrödinger cat awaiting its observer to find out whether it is alive or dead. It is man who brings the universe into existence from its undefined state of mathematical probabilities. The great paradox of the Creation is that if the universe was ever to be born, it needed a human for a midwife.
Therefore, we may say that the universe has two important dates: the date of its conception, t0, and the date of its birth. When a human observer probes the age of the universe, physics dictates that he will arrive at the age at which the universe was conceived as a mathematical wavefunction having a probability of existence. The real age of the universe may, by definition, be no greater than the age of the first human observer who collapsed the world wavefunction.
6.1. Adam as the First Observer
According to the biblical account of creation, Adam was the first fully conscious being, the first observer. Prior to the first human, the universe existed in a superposition of all possible states, including the states of existence and non-existence. When the first man looked for the first time at the universe, he immediately collapsed the world wavefunction and brought the world into physical existence.
It is easy to see now why the Bible begins the chronology of creation with Adam and not before. Even though the universe could already have been billions of years old, it was the first human (Adam) who actualized the creation and brought it from a fuzzy state of existence/non-existence into definite physical existence.
Perhaps this is why the Bible states:
“And G‑d blessed the seventh day, and sanctified it; because on it He had rested from all his work which G‑d created to make” (Genesis 2:3)
The classical Jewish commentators on the Bible suggest that the meaning of the peculiar expression "G‑d created to make" is that G‑d created man to complete His creation, to be a partner of G‑d in creating the universe. Now it makes perfect sense: initially G‑d created the universe in an amorphous spiritual form, and He created man to complete the creation and to bring the universe from potential to actual reality.
This approach allows us to rationally resolve the apparent contradiction between the scientific and the biblical ages of the universe.
6.2. Resolution of the Dispute regarding Sabbatical Cycles
As we noted before, the dispute related to the age of the universe existed not only between science and religion but also between two main schools of Jewish esoteric philosophy, kabbalah. According to the ancient school of Rabbi Nehunya ben HaKanah, as explained by Rabbi Isaac of Akko, the universe existed for over fifteen billion years before the creation of Adam, while the Lurianic school of kabbalah maintained that this took place in the spiritual rather than the physical world.
It seems that our approach allows us to resolve that contradiction as well. Indeed, both opinions may be correct at the same time and need not contradict each other. When Rabbi Nehunya ben HaKanah and Rabbi Isaac of Akko, along with other early sages of kabbalah, spoke of sabbatical cycles and billions of years of pre-human history, they spoke of the universe as originally created by G‑d in general terms. Ari, however, further clarified the picture by pointing out that the initial, pre-human phase of world history was different from the post-human phase and existed on a different plane, which he called the spiritual world. In fact, quantum mechanics confirms that prior to the first human, the world indeed existed on a different (almost spiritual) plane, described by purely mathematical constructs such as the wavefunction. This completes the puzzle.
Following the approach advocated by some of the most respected scientists of this century, von Neumann, Wigner, Wheeler and others, we are able to reconcile the apparent discrepancy between the age of the universe as presented in the biblical account of creation and in contemporary cosmology. The history of the universe comprises two main periods: pre-human and post-human. In the first period, before the first conscious observer peered into the universe, the universe was in an amorphous, fuzzy state of linear superposition of all possible states. The universe at this stage existed only mathematically, as a distribution of probabilities. This period lasted approximately twelve billion years. When the first human opened his or her eyes, he or she collapsed the world wavefunction and brought the universe into actual existence. From that point on, the Bible and humanity began counting the new age of the universe.
The approach outlined above, while it appears promising, does not purport to solve all apparent contradictions between science and religion, even in the area of biblical chronology, which is the subject of this paper. We limited ourselves to an attempt at reconciling the overall age of the universe, without going into specific interpretations of the meaning of the six days of creation and other details of the biblical account. Some of these problematic areas include the sequence of creation of planets and stars, the meaning of the appearance and evolution of biological flora and fauna, and the apparent indications that the Bible speaks of the creation of the first humans, Adam and Eve, in fully grown and mature form. These remain to be addressed in the future. All that we attempted to show is that not the dismissal of contradictions but their honest assessment and scientific analysis may not only lead to a resolution of an apparent contradiction but, moreover, help to enrich our understanding of the Bible and of science alike, and further our quest for a unified and harmonious view of the universe.
[1] In this chapter we follow substantially the treatment of the subject by Misner, Ch. W., Thorne, K. S. and Wheeler, J. A. Gravitation. (San Francisco: W. H. Freeman and Co., 1973), II., ch. 6.
[2] Hubble E. P., Proc. Nat. Acad. Sci. (US), 15, 169 (1929)
[3] Wald
[4] Parker
[5] Kaplan, Aryeh. Immortality, Resurrection, and the Age of the Universe: a Kabbalistic View. (Hoboken, NJ: KTAV, 1993) ch.1, pp.1-16
[6] Schroeder, Gerald. The Science of God, (New York: The Free Press, 1997), ch. 3, p.41
[7] Babylonian Talmud, Sanhedrin 97a
[8] Rabbi Nehunya ben HaKanah, Sefer HaTemunah. p.314
[9] Babylonian Talmud , Hagigah 13b
[10] Ha Levi, Yehuda. Kuzari, 1:167
[11] De Broglie, L. Annales de Physique. (1925)
[12] Schrödinger, E. Annalen der Physik. 79, 361. (1926)
[13] This discussion follows J. Baggott. The Meaning of Quantum Theory (Oxford: Oxford University Press, 1992), 5.3, p. 185-194
[14] Von Neumann, John. Mathematical Foundations of Quantum Mechanics (Princeton, NJ: Princeton University Press, 1955)
[15] Wigner, Eugene in Good, I.J. (ed.) The Scientist speculates: an anthology of partly-baked ideas. (London: Heinemann, 1961).
cae6c0c400bdd225 | Nine Nobel Prize Predictions for 2020
These significant advancements could win the Nobel Prizes in physiology or medicine, physics and chemistry.
[Illustration: Nobel 2020 artwork. Credit: Abigail Malate, Staff Illustrator; copyright American Institute of Physics]
Inside Science Staff
(Inside Science) -- Making predictions for the Nobel prizes in physiology or medicine, physics and chemistry has become an annual pastime at Inside Science. We've had some success in prior years. For example, in 2018 we included the winning research about cancer immunotherapy in our physiology or medicine predictions. We also included eventual 2019 winner lithium ion batteries in the 2018 chemistry predictions. In 2019, we correctly picked exoplanets for the physics prize. This year we've searched for hints hidden in data -- as well as relied on our nonscientific intuitions -- to make our nine best predictions for the winners of the 2020 Nobel Prizes.
To check out the research we highlighted last year that didn't win but may be recognized this year, please read our 2019 predictions.
The Nobel Prize in Physiology or Medicine -- Announced October 5
Written by Nala Rogers
The shape of the immune system's signposts
Killer T cells are immune system assassins that destroy traitorous versions of the body's own cells -- usually cells infected with viruses. By the 1980s, researchers knew that killer T cells couldn't find such traitors without the help of major histocompatibility, or "MHC," proteins, which stick out from the surfaces of cells. But it wasn't clear how MHC proteins identified the T cells' targets.
As a postdoctoral researcher in Don Wiley's Harvard laboratory, Pamela Björkman used X-ray crystallography to solve the physical structure of an MHC protein in 1987. The protein had a groove on its surface, formed by two helix-shaped structures that could clamp together like a bear trap. These helixes were already clamped around a mixture of short protein segments called peptides.
That gave the researchers the clues they needed to understand how MHC proteins work. Cells are constantly chopping old proteins into peptides as part of normal cleanup processes. These peptides get shuttled into a compartment in the cell where they meet up with MHC proteins. The MHC proteins grab the peptides, migrate out of the cell and then display their bounty to passing T cells. If a cell gets infected with a virus and starts producing more viruses, viral material will be chopped up and displayed as well, signaling that the cell should be destroyed.
The assembly that awards the prize in physiology or medicine might be hesitant to honor this research on MHC proteins, since the discovery built on earlier work by Peter Doherty and Rolf Zinkernagel that won a Nobel Prize in 1996. Still, it was profoundly important, helping to lay the foundation for our modern understanding of the cellular immune system. And this year, Björkman and her colleague Jack Strominger joined the ranks of Clarivate's "Citation Laureates," researchers thought to be Nobel candidates because their research publications have received exceptionally high numbers of citations.
Mini-organs as models
It's been scarcely more than a decade since researchers first coaxed stem cells to grow into 3D structures resembling miniature mouse intestines. These structures were composed of multiple types of tissue that organized themselves and interacted much as they do in the gut of a living mouse.
In the intervening years, various research teams have performed similar feats with human stem cells, producing "organoids" representing everything from the liver to the brain. Organoids have led to a revolution in medical research, providing a realistic alternative to the tissue cultures and lab animals traditionally used in preclinical experiments.
Organoids can be used to test how humans will react to new drugs or toxins and to study how the body interacts with beneficial microbes or disease-causing organisms. Organoids grown from a patient's own cells can help reveal which treatments that person will respond to -- a form of personalized medicine. And organoids grown from cancer cells are allowing researchers to study cancer in new ways.
If organoids were to be honored with a Nobel Prize, Hans Clevers would almost certainly be among the recipients. Clevers published one of the landmark papers on mouse intestine organoids with Toshiro Sato in 2009, and he has continued to be a leader in the field, receiving numerous awards and honors. Akifumi Ootani was also one of the first to produce organoids, publishing his technique the same year as Clevers and Sato. Yoshiki Sasai deserves recognition for developing brain organoids, but sadly he is out of the running for the Nobel, since he passed away in 2014 and the prizes are not awarded posthumously.
Histones and epigenetics
The strands of DNA in a cell's nucleus aren't crumpled up willy-nilly. They are wound around a series of structures called histones, somewhat like thread on spools.
But histones are more than a tidy storage mechanism. Research in the second half of the 20th century revealed that they help control which genes are active. Active genes are ones that are being transcribed into RNA, which can then be translated into the proteins that make up living things.
When DNA is wound tightly around histones, it is less accessible to the cellular machinery that translates it into RNA. But a variety of chemical groups can modify gene activity by binding to or detaching from histones. The first example that researchers found is how a chemical component known as an acetyl group attaches to the end of a histone, activating certain genes by loosening the histone's hold on DNA.
This discovery was a momentous advance in the budding field of epigenetics, the study of changes to genetic material that help determine the effects of genes without changing the underlying DNA sequence. Researchers have since attributed certain genetic disorders to defects in the body's ability to modify histones. Histone modifications and other epigenetic alterations may also play roles in diverse conditions ranging from schizophrenia to cancer.
Key figures in the discovery of histone acetylation include Michael Grunstein at UCLA, C. David Allis at The Rockefeller University, and Shelley Berger at the University of Pennsylvania. In 2018, Grunstein and Allis received a Lasker Award, often considered a harbinger of a possible Nobel.
The Nobel Prize in Physics -- Announced October 6
Written by Yuen Yiu
First image of a black hole
[Image: A black hole at the center of the Messier 87 galaxy, 55 million light-years from Earth. Credit: Event Horizon Telescope Collaboration]
This picture of a fuzzy orange ring was a star among the biggest science stories in 2019. It was the first-ever image of a black hole, captured by the Event Horizon Telescope Collaboration, an international effort that gathers data with telescopes around the globe.
The black hole, hovering near the center of galaxy M87, is 6.5 billion times more massive than our sun. The image of the black hole, or rather the shadow of the black hole, circled by a ring of fuzzy light, marked a milestone in our ability to study the physics of perhaps the most mysterious objects in the universe.
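Those two numbers fix how large the shadow appears from Earth. As a rough plausibility check (a back-of-envelope sketch assuming a simple Schwarzschild photon-ring diameter of 2√27 GM/c²; the collaboration's actual modeling is far more sophisticated), the quoted mass and distance give an angular size of a few tens of microarcseconds, right at the edge of what Earth-sized interferometry can resolve:

import math

G, c = 6.674e-11, 2.998e8          # gravitational constant, speed of light (SI)
M = 6.5e9 * 1.989e30               # black hole mass, kg
D = 55e6 * 9.461e15                # distance, m (55 million light-years)

b = math.sqrt(27) * G * M / c**2   # photon-ring (shadow) radius, m
theta = 2 * b / D                  # angular diameter, radians
print(theta * 180 / math.pi * 3.6e9)  # -> roughly 40 microarcseconds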
Announced in April 2019, the feat might have missed the traditional end-of-January deadline to be nominated for the prize in 2019.
The first detection of gravitational waves faced a similar timeline, with the discovery announced Feb. 11, 2016, missing the boat for the 2016 prize. Three scientists behind the discovery -- Kip Thorne, Rainer Weiss and Barry Barish -- were promptly awarded the prize the following year, but the recognition came too late to honor Ronald Drever, who had passed away in March 2017. Drever was instrumental in developing the experimental techniques that made the detection of gravitational waves possible.
If Drever had been alive, the Nobel Committee might have had a harder time choosing which three of the four deserving scientists should get the prize. The Committee’s insistence upon its tradition of limiting the prize to a maximum of three individuals has drawn criticism for reinforcing the outdated idea of “lone geniuses” in science. If the 2020 award is given for the black hole photo, it is unclear whom the Committee will choose.
Density functional theory
Among the most popular and versatile theories in materials science and computational physics and chemistry, density functional theory (DFT) has been pivotal in the discovery of many functional materials used in modern gadgets.
The theory is intimately related to the Schrödinger equation, which is used to describe and predict the behavior of a quantum system. However, the equation becomes exceedingly difficult to calculate for so-called many-body systems, such as a hunk of metal containing trillions upon trillions of electrons. DFT provides a way to rethink the problem and produces effective approximations for these systems, making it possible to calculate the electronic and nuclear structures of materials.
One of the pioneers of the theory, Walter Kohn, was awarded the 1998 Nobel Prize in chemistry, along with John Pople, who pioneered computational methods in quantum chemistry that allowed scientists to put theories such as DFT to use. However, given the explosive growth in materials science, particularly in relation to information technology and clean energy production, it isn’t unthinkable for the Nobel Committee to honor others who have also contributed to the development of DFT but have not yet been recognized with the prize.
Possibilities include John Perdew, whose work at multiple institutions has made him one of the most cited scientists on DFT and in physics, or Lu Jeu Sham from UC San Diego, who worked with the aforementioned Kohn on the Kohn-Sham equation, a specialized form of DFT widely used in materials science and quantum chemistry.
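For readers curious about the mathematics, the Kohn-Sham scheme can be stated schematically: the interacting electrons are replaced by fictitious independent ones moving in an effective potential (standard notation; a sketch of the structure, not a derivation):

$$\Bigl[-\frac{\hbar^2}{2m}\nabla^2+v_{\mathrm{ext}}(\mathbf r)+v_{\mathrm H}[n](\mathbf r)+v_{\mathrm{xc}}[n](\mathbf r)\Bigr]\varphi_i(\mathbf r)=\varepsilon_i\,\varphi_i(\mathbf r),\qquad n(\mathbf r)=\sum_i|\varphi_i(\mathbf r)|^2 .$$

Because the Hartree and exchange-correlation potentials depend on the density n, the equations are solved self-consistently; all of the hard many-body physics is buried in the exchange-correlation term, which must be approximated.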
Quantum supremacy
We have previously predicted that scientists working on quantum communication technology might get the nod from the Committee. In particular, we mentioned the trio of Alain Aspect, John Clauser and Anton Zeilinger, who were recognized by the Wolf Prize in 2010 “for their fundamental conceptual and experimental contributions to the foundations of quantum physics, specifically an increasingly sophisticated series of tests of Bell's inequalities.”
In the field of quantum information research, quantum computers have had a big year. Google and IBM appeared to be in a public spat when scientists from IBM tried to downplay Google’s audacious claim of having achieved “quantum supremacy,” a statement that made the rounds in science news outlets.
In a paper in the journal Nature, Google claimed that its latest quantum computer, Sycamore, completed a specific calculation in 200 seconds, and that the same calculation would take a supercomputer such as IBM's Summit 10,000 years. IBM shot back by saying that Summit could probably complete the calculation in closer to two and a half days. Regardless of the public relations battle, it was a significant milestone for quantum computing -- a quantum processor with only 53 qubits (the quantum analogue of classical bits) was able to outperform a supercomputer capable of performing 200 thousand trillion calculations per second.
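Those headline figures are easy to sanity-check with back-of-envelope arithmetic (a quick sketch using only the numbers quoted above):

SECONDS_PER_YEAR = 365.25 * 24 * 3600

sycamore = 200.0                               # seconds, Google's reported runtime
summit_per_google = 10_000 * SECONDS_PER_YEAR  # Google's estimate for a classical supercomputer
summit_per_ibm = 2.5 * 24 * 3600               # IBM's counter-estimate (two and a half days)

print(f"{summit_per_google / sycamore:.2e}")   # ~1.6e9: a billion-fold speedup on Google's numbers
print(f"{summit_per_ibm / sycamore:.0f}")      # ~1080: still a thousand-fold speedup on IBM's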
Serge Haroche and David Wineland were awarded the 2012 prize, “for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems.” However, with practical applications for quantum information technologies becoming more apparent every year, the Committee may choose to shine the spotlight on the field once again. The remaining question is, who will they pick?
The Nobel Prize in Chemistry -- Announced October 7
Written by Catherine Meyers
Building new polymers, piece by piece
From plastic milk jugs to the epoxy resin that helps hold circuit boards together, synthetic polymers are everywhere in everyday life. These highly versatile materials are made up of long strings of smaller units called monomers. Polymers can be soft and flexible, like a polyurethane dish sponge, or hard and rigid, like a Lego brick.
The 2020 Nobel Prize in chemistry may go to the scientists who invented a new way to make bespoke polymers in a highly controlled, efficient and economical fashion. In 1995, Carnegie Mellon University chemist Krzysztof Matyjaszewski and his colleague Jin-Shan Wang published a paper on a method called atom transfer radical polymerization. The method can be used to build complex polymers piece by piece with the help of a special catalyst to add monomers to a growing chain. The process can be started and stopped by controlling the temperature and other conditions of the reaction. Importantly, all of this can be accomplished using industrial equipment.
This technology has been adopted by commercial companies and used in cosmetics, printer inks, adhesives, sealants and more. Researchers continue to explore ways they can use it to make new materials with tailored properties, such as coatings for biomedical devices and degradable plastics.
The chemistry behind Moore’s Law
An average smartphone today has millions of times more memory than the computer aboard Apollo 11, which flew astronauts to the moon in 1969. This year's chemistry prize may recognize some of the creative chemical research that helped make this radical increase in computing capability possible.
The brains of modern computers are etched out of silicon chips. To make the patterns on these chips, manufacturers coat the chips with a material called a photoresist, which reacts to light. They then shine the desired patterns onto the chips. The light makes chemical changes in the photoresist that make it either easier or harder to remove the underlying material.
Making computers more powerful has typically required making the patterns on the chips smaller. In the late 1970s chip manufacturers were facing a limit -- if they wanted to make chips with finer details, they had to use a shorter wavelength of light. The problem was that light sources that produced these shorter wavelengths were too weak to be practical.
Researchers at IBM started investigating how they might make a more sensitive photoresist that would work with weaker light. They hit upon the idea of a chemical chain reaction, in which a small number of changes caused by the light would cascade into big changes in the material. Thus were born chemically amplified photoresists. It took a number of years to perfect the formulas and work out the kinks in the process, but by 1986 IBM was making chips with a record 1 megabit of memory using the new technology. Two of the main researchers who might be recognized with the prize are C. Grant Willson and Jean Fréchet. Hiroshi Ito, another key contributor, passed away in 2009.
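One way to see why amplification matters is as a simple gain calculation: each photogenerated acid catalyzes many downstream reactions, multiplying the effect of every absorbed photon. A toy model with made-up numbers (illustrative only; real process parameters differ):

photons = 1_000        # photons absorbed in a patch of resist (hypothetical)
acid_yield = 0.3       # photoacids produced per absorbed photon (assumed)
chain = 500            # reactions each acid catalyzes before quenching (assumed)

events_plain = photons * 1.0            # conventional resist: at most ~1 event per photon
events_amplified = photons * acid_yield * chain
print(events_amplified / events_plain)  # gain of ~150: a much weaker light source suffices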
Better tools for understanding life’s building blocks
In his book, “Imagined Worlds,” the late physicist Freeman Dyson wrote, “New directions in science are launched by new tools much more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained.”
The 2020 Nobel Prize in chemistry may recognize researchers who built tools that helped launch a revolution in understanding and manipulating the chemical building blocks of life. In the 1970s and ’80s, researcher Lee Hood, who was working at Caltech at the time, and his colleagues developed machines to sequence and synthesize proteins and DNA. Hood’s key collaborators included Marvin Caruthers at the University of Colorado and Michael Hunkapiller at Caltech.
The new tools gave researchers more speed and sensitivity. They have enabled breakthroughs in the study of biology and disease and led to the development of new drugs and medical treatments. For example, the automated DNA sequencer and the more advanced machines that followed made the Human Genome Project possible, and the protein synthesizer helped drug company Merck make a key part of HIV, determine its structure and design a drug to combat the virus.
An article for the Lasker Foundation, which honored Hood in 1987 for his work on key proteins of the immune system called antibodies, noted the interdisciplinary nature of Hood’s work. He’s quoted as saying the first paradigm shift of his career was “bringing engineering to biology.” But it might be the chemistry prize that recognizes his contributions.
For more of Inside Science's coverage of the 2020 Nobel Prizes in physiology or medicine, physics and chemistry, please visit our Nobel coverage page.
|
428710d435bee627 | The shape of flowing water
Time: Thu 2019-09-05 15.15 - 16.30
Lecturer: Tomas Bohr (Technical University of Denmark)
Location: Oskar Klein auditorium FR4, AlbaNova
Abstract. When we observe fluid flows in nature, it is often because we notice the deformation of the fluid surface, e.g., when light reflects off a water drop or an ocean wave. Such deformations can have great beauty and complexity, since the shape of the free surface is intimately and very nonlinearly coupled to the internal flow. In the talk, I will show examples of surfaces with shapes of thin needles or sharp walls that lead to interesting symmetry-breaking transitions, where sharp corners and polygonal structures appear - even in strongly turbulent flows. The existence of such structures, even in very “simple” flows, shows the complexity of the solutions to the Navier-Stokes equations with a free surface. Since the work of E. Madelung, it has been known that the Schrödinger equation can also be expressed as a fluid flow, and it has been suggested by Y. Couder and his collaborators that the mysteries of quantum mechanics can be imitated by bouncing droplets moving and interacting through surface waves. I shall discuss this exciting possibility briefly, but argue that the full spectrum of quantum effects cannot be obtained in this way.
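For reference, Madelung's observation can be stated compactly: substituting ψ = √ρ · exp(iS/ħ) into the Schrödinger equation yields fluid-like equations for the density ρ and the velocity field v = ∇S/m (standard form; the last term in the second equation is the so-called quantum potential):

$$\partial_t\rho+\nabla\cdot(\rho\,\mathbf v)=0,\qquad \partial_t S+\frac{|\nabla S|^2}{2m}+V-\frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}=0 .$$
|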
21a061c2273c903c | variational method in quantum mechanics
The variational method in quantum mechanics: an elementary introduction. Riccardo Borghi 2018 Eur. J. Phys. 39 035410. Dipartimento di Ingegneria, Università degli Studi 'Roma tre', Via Vito Volterra 62, I-00146 Rome, Italy. Revised 28 January 2018; published 13 April 2018.
Teaching quantum mechanics at an introductory (undergraduate) level is an ambitious but fundamental didactical mission, and it is well known that the study of quantum mechanics poses challenging math problems which often obscure the physics of the concepts being developed. Variational methods are among the most widely used approximation techniques in quantum chemistry: they lie behind Hartree-Fock theory and the configuration interaction method for the electronic structure of atoms and molecules, and in introductory textbooks the variational method is customarily presented as an invaluable technique for finding approximate estimates of ground state energies [3-7]. The basic idea is to guess a "trial" wavefunction for the problem, containing adjustable parameters called "variational parameters," and then to minimize the resulting energy with respect to them. The method rests on the variational theorem: since any trial function can formally be expanded as a linear combination of the exact eigenfunctions (which certainly exist and form a complete set, even if we do not happen to know them), the energy of any trial wavefunction can never lie below the true ground state energy. Compared to perturbation theory, the variational method can be more robust in situations where it is hard to determine a good unperturbed Hamiltonian (i.e., one which makes the perturbation small but is still solvable). Frequently the trial function is written as a linear combination of basis functions; in this linear variational method one considers a finite-dimensional subspace E_M of the full space of quantum states, spanned by a (not necessarily orthonormal) basis {|I⟩, I = 1, 2, ..., M}.
In the present paper a short catalogue of different celebrated potential distributions (both 1D and 3D) is expounded, for which an exact and complete (energy and wavefunction) ground state determination can be achieved in an elementary way: the ground state is found not by solving the corresponding Schrödinger equation, but through a direct minimization of an energy functional. No previous knowledge of calculus of variations is required. In all presented cases the minimization is achieved with the help of only two mathematical tricks, the so-called "completion of the square" and integration by parts, both of which should be part of the background of first-year physics or engineering students. What is shown is enough to cover at least two didactical units (lecture and recitation session), and should help students appreciate how some basic features of a phenomenon can sometimes be grasped even by using idealized, nonrealistic models.
The starting point is the main result of [2] for the harmonic oscillator. With suitable natural units for length and energy (the quantities √(ħ/mω) and ħω/2, respectively), the stationary Schrödinger equation and the associated energy functional become dimensionless. All bound eigenstates must be square integrable on the whole real axis, and, remarkably, imposing solely this localization constraint allows the energy functional to be minimized in an elementary way: the square in the numerator is first completed, then partial integration is performed on the remaining integral. For the resulting energy bound to be attained, the wavefunction must satisfy a first-order linear differential equation, equivalent to requiring that the differential operator x + d/dx annihilate the ground state; its general integral, found with elementary tools (variable separation), is the well-known Gaussian. The localization constraint alone is thus seen to be responsible for the zero-point energy ω/2, a connection with Heisenberg's uncertainty principle, the essence of quantum mechanics. For a particle in an infinite well the analogous energy bound follows directly from boundary conditions and is formally identical to the present inequality once one lets k ~ π/a; for the harmonic oscillator the connection is much less transparent, and an intuitive interpretation, taken from an exercise in the Berkeley textbook [1], can be proposed to students.
The first 1D example is the celebrated Morse potential, a two-parameter function proposed in 1929 by Morse [8] as a simple analytical model for describing the vibrational motion of diatomic molecules; it is also a useful analytical model for finite potential wells and anharmonic oscillators. (A figure shows the interaction potential energy for the ground state of the hydrogen molecule as a function of the internuclear distance [10], together with the fit provided by Morse's potential; fit parameters are U0 = 4.7 eV and k = 2.0 Å⁻¹. Rigorously speaking, identifying the internuclear distance with the x variable implies the inclusion of an unphysical region of negative internuclear distances, but this inclusion does not dramatically alter the resulting vibrational spectrum [8].) The quantities k⁻¹ and U0 provide natural units for length and energy, and a dimensionless parameter α is introduced. Completing the square and integrating by parts gives χ = -α/2, so that the ground state energy, measured with respect to the bottom of the potential curve, is ε = -(χ² + 2χ) = α - α²/4; equivalently, the true Morse oscillator energy lower bound is -(1 - α/2)². The leading term coincides with the ground state energy of the harmonic approximation of the Morse potential, as is easily proved by taking the second derivative of the potential at its minimum. For the resulting wavefunction to represent a valid, localized state, the arguments of both exponentials must be negative, which occurs only if α < 2.
The second 1D case is the so-called (hyperbolic) Pöschl-Teller potential, first considered by Eckart as a simple continuous model to study the penetration features of some potential barriers [9]. Recasting the energy functional shows at once that the energy must be greater than -U0 in physical units. The same procedure applies: the square inside the integral in the numerator is completed and partial integration is performed; χ is now the positive solution of an algebraic equation, the ground state energy is ε = -αχ, and the corresponding wavefunction again follows by variable separation. Students should be encouraged to study, along the same lines, the Rosen-Morse potential [12], originally proposed as a simple analytical model for the energy levels of the NH3 molecule. It can be viewed as a modification of the Pöschl-Teller potential in which a term -2η tanh(kx), with η ∈ (-1, 1), allows the asymptotic limits for x → ±∞ to split, as can be appreciated from its pictorial representation. Some hints aimed at guiding students to find its ground state are given in the appendix.
The final example is a simple and compact determination of the ground state of the hydrogen atom. Schrödinger's equation for the electron wavefunction within the Coulomb electric field produced by the nucleus is first recalled, with ∇² the Laplacian operator acting on the stationary states. The presence of the centrifugal term L²/2mr² in the Hamiltonian implies that the eigenvalues contain a positive amount of energy tending to repel the electron from the force centre; the ground state, being the state of minimum energy, is an eigenstate of L² with null angular momentum, so only radially symmetric wavefunctions u = u(r) need be considered. For radial functions the 3D integration reduces to a 1D integration, and on using Bohr's radius a_B = ħ²/me² and the hydrogen ionization energy E_0 = me⁴/2ħ² as unit length and unit energy, the functional is recast in dimensionless form. Proceeding as in the 1D cases, the ground state energy is found to be exactly -1 in these units, i.e. -1 Ry = -13.6 eV, together with the corresponding wavefunction.
In the final part of the paper an unexpected connection is outlined. All the above examples show that the left-hand side of the 1D stationary Schrödinger equation can be written as the product of two first-order differential operators plus a constant term. This is the essence of factorization: given the potential U(x), to find a function, say β(x), and a constant, say ε, such that the Hamiltonian operator factorizes accordingly. The factorization method was introduced at the dawn of quantum mechanics by Schrödinger and by Dirac as a powerful algebraic method to obtain the complete energy spectrum of several 1D quantum systems [13-16]; its general solution requires advanced mathematical techniques, like the use of a nonlinear differential equation. Here teachers are offered a way to introduce this rather advanced topic using only elementary tools, starting again from the analysis of the harmonic oscillator potential: free parameters χ, β and ε are introduced, and their values are chosen so that the factorization identity holds, which on expanding both sides translates into a set of algebraic relationships. Remarkably, the required differential equation can be derived by the same variational approach used throughout the paper, and from it one retrieves not only the ground state wavefunction but also the corresponding ground state energy; the factorized operator turns out to be Hermitian. Students should be encouraged to prove that the Schrödinger equation for the Pöschl-Teller potential can also be factorized in this way. We are not aware of previous attempts aimed at providing a variational route to factorization. In a monumental review paper published at the very beginning of the fifties [17], Infeld and Hull presented a systematic study of all possible 1D potentials for which the corresponding stationary Schrödinger equation can be exactly factorized, and it could be worth exploring their catalogue to find (and certainly there are) other interesting cases to study. The knowledge of higher-order eigenstates would require mathematical techniques that are beyond the limits and scope of the present paper; but even graduate students could benefit from the elementary derivation, to better appreciate the power and the elegance of the variational language.
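As a concrete illustration of the two tricks at work, here is a minimal worked version for the harmonic oscillator in the dimensionless units above (a sketch following the recipe described in the text; normalization constants omitted):

$$E[u]=\frac{\displaystyle\int_{-\infty}^{\infty}\left(u'^2+x^2u^2\right)dx}{\displaystyle\int_{-\infty}^{\infty}u^2\,dx},\qquad u'^2+x^2u^2=(u'+xu)^2-\frac{d}{dx}\!\left(xu^2\right)+u^2 .$$

The total derivative integrates to zero for any square-integrable u with xu² → 0 at infinity, so

$$E[u]=1+\frac{\displaystyle\int_{-\infty}^{\infty}(u'+xu)^2\,dx}{\displaystyle\int_{-\infty}^{\infty}u^2\,dx}\;\ge\;1,$$

with equality if and only if u' + xu = 0, i.e. u(x) ∝ exp(-x²/2): the exact ground state, with energy ħω/2 in physical units.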
|
35598f782d2a3b6d | Today’s post is about a personal revelation I recently had. You see, I spend a lot of time researching for this blog, making sure I understand what I’m talking about, and doing my best to explain it all clearly and concisely. And all this work, in theory, is supposed to benefit my science fiction writing.
But I don’t want to write hard Sci-Fi. I used to think science fiction existed on a spectrum from hard science fiction, where everything is super scientifically accurate (and here’s a full chapter explaining the math to prove it), to soft science fiction, where everything’s basically space wizards and technobabble magic (lol, who cares if unobtainium crystals make sense?).
I’ve since discovered another way to think about science fiction, and I find that to be more useful. But sometimes I’m still left wondering why am I doing all this extra work? What’s it all for if I’m not trying to write hard Sci-Fi?
Recently, I was talking with a new friend, and somehow the conversation turned to quantum physics. I swear I wasn’t the one who brought it up! My friend had seen a video on YouTube, and I felt the need to disillusion him of the weird quantum mysticism he’d apparently been exposed to. I was doing my best to explain what the Heisenberg uncertainty principle actually means, and I ended up digging into what I remembered about the math.
Mathematically speaking, the momentum of a quantum particle is represented by the variable p, its position by the variable q, and the relationship between p and q is often expressed as:
pq ≠ qp
I don’t have the math skills to explain how this non-equivalency equation works. I think it has something to do with matrices. My high school math teacher skipped that chapter. To this day, I still haven’t got a clue how a matrix works. I just know it’s an important concept in quantum theory.
But by this point, my friend was staring at me with a sort of dumbstruck awe, and he said: “Wow, you really do understand this stuff!”
That brought me up short.
“No, not really,” I said, feeling slightly embarrassed. I couldn’t help but recollect the famous line attributed to Richard Feynman: If you think you understand quantum theory, you don’t understand quantum theory.
So I told my friend about this blog and about my writing, and how I use the research I do for my blog to flesh out the story worlds in my science fiction. And then I said something that I don’t remember ever thinking before or being consciously aware of, but as soon as the words were out of my mouth I knew they were true: “I just want to make sure I know enough so that I don’t make a total fool of myself in my stories.”
And that’s it. That’s the answer I needed. I’m okay with stretching the truth if it suits my story. I’m okay with leaving some scientific inaccuracies in there. I just don’t want to make a mistake so glaringly obvious to my readers (some of whom know way more about science than I do) that it ruins the believability of my story world.
And now if you’ll excuse me, I have to get back to writing. The fiction kind of writing, I mean. And on Wednesday, we’ll have story time here on the blog.
15 responses »
1. Steve Morris says:
Story time, yay! (BTW, physicists say that p and q do not commute, which means that if you measure p and q you get a different answer depending on which you measure first, and that this is a fundamental fact, not a limitation of any measuring equipment. It follows from the Schrödinger equation. You probably knew that.)
• J.S. Pailly says:
That all does sound familiar to me. I have to admit, my memory about this subject is a bit fuzzy. It’s been a while since I really read up on quantum theory. It’s probably time I did a refresher course on it.
2. I think there’s also something to be said for a sci-fi writer having a love of science, which I have to admit powers my own research more than story prep.
On making mistakes, I’m resigned to the fact that those are going to happen. As I’ve learned more science, I’ve increasingly caught published sci-fi writers making those mistakes. But since their sales don’t appear to have taken any hits, it appears that as long as the mistakes aren’t basic ones, it’s okay.
Although I’m sure those authors hear about them anyway. Actually, I suspect when sci-fi fans catch an author in an obscure error, it makes them feel smart, which is why they probably continue reading that author.
• J.S. Pailly says:
Yeah, it’s those basic mistakes that I really want to avoid. I also don’t want to reinforce popular misconceptions about science, so I want to make sure I don’t fall for those misconceptions myself. But I tend to fall into the trap of trying to be a perfectionist, and I need to stop doing that.
3. I found myself coming here to find a post that sparked an idea a few weeks back, just to make sure I got the science right. Whatever your reasons, I'm glad you write this blog!
4. I can relate to this. This is the kind of thing I was talking about when I said I was feeling inspired by Ray Bradbury. It’s all about story and imagination. Who is anyone to say any kind of speculative futuristic technology is impossible to achieve? There’s a reason there’s almost always sound in space movies. It adds flourish as the reality is a little boring, even though most people know there’s no sound in space. Self-aware artistic license when it comes to the “sci” part of sci-fi, works. You just have to SOUND convincing to readers. If they buy it as plausible, I don’t believe it really matters that it bugs someone who knows better, if it served the story well. But again that’s why I’m really liking “speculative fiction” over “sci-fi” as an umbrella term for anything with futuristic/spacy elements. It robs nitpickers of their favorite gripe to toss at every single sci-fi property ever: “Well that’s not scientifically accurate. How can they call this ‘science fiction’?” I mean, look at Star Trek. There’s virtually no “real” science in it. There’s little tidbits here and there, nods to things like Dyson spheres, but it’s largely speculative fantasy. Fans ask scientists “Is the warp speed possible? Will a transporter work?” Etc. And the scientists go “Weeeeeeeell, I mean, theoretically, it is POSSIBLE, but—“ and then the fans go “Yes! Star Trek confirmed as scientifically accurate!” And they lord it over fans of “space fantasy” Star Wars. It’s all “space fantasy,” though, and it all has merit. Both sci-fantasy and hard sci-fi can be done badly. I could read Arthur C Clarke as a kid, and I didn’t have to be a scientist to do so. I understood the concepts he was putting out there. That’s because he told good stories and the scientific aspects of it served the story. They weren’t the focus, as far as I’m concerned. He knew his stuff, and that’s awesome, but it doesn’t mean that it’s more valid than say, Dune. I mean, what’s the science in Dune? It’s like He-Man for adults, lol. But it’s great, and rightfully considered a classic. I’m rambling now. My point is that the story is king and everything else exists to serve it, and if reality has to be twisted into a certain shape to fit a story, and it sounds good, I say go for it. Let your imagination fly free.🚀
5. kutukamus says:
Really, pq ≠ qp sure sounds/looks like the [ever-changing] relationship between [the same] two people. Then again, even the mere name quantum scares me. 🙂
That hits the nail on the head, all right. Sometimes it’s a balance between serious science and good storytelling (usually using the good storytelling to offset the lacks in science), and the successful stories strike that balance well–the reason why over-the-top adventure dramas like the Star Wars and superhero movies can get away with so much.
A writer can be good with science, or not so good; but their primary job is to be a storyteller, and they have to make sure their knowledge of science (or any subject, for that matter) doesn’t throw the reader out of their story.
• J.S. Pailly says:
I’ve been told many times over that most readers won’t know the difference and don’t really care about scientific accuracy in science fiction. I guess that might be true for a general audience, but I’m pretty sure avid science fiction readers are a little more scientifically literate than most people.
So I think it’s a matter of knowing your audience. You don’t want your readers to lose their suspension of disbelief. With Sci-Fi readers, that means getting the science right, or at least not making a total mess of it.
|
ad946d828b242d98 |
12:08 AM
@HDE226868 I so wanted one with displays on the same surface you draw on, but those puppies start at US$1000.
That's too steep for me. Especially just to learn if the tool is right for me.
@dmckee I was thinking screen-less, and I was hoping for <$200. It ended up as a question on Hardware Recommendations beta.
man I forgot how dark it gets when the weather gets cooler here
Mine is an Intuos Medium, and I paid US$187 plus tax in a store.
@dmckee There's an upload image button. Or just use imgur.
either I should grow up and not be scared of the dark or I should not live alone :p
12:14 AM
Gain some weight
not sure how that will help
People don't fuck with big guys
Fancy that. It's been there all along and I just edit it out of my visual field because I never use it. I literally had to mask off sections of the screen before I saw it.
Humans are weird.
Palm tree physics 101
12:17 AM
I've always used a tree and a bike to indicate two frames in relativity. Those are the tablet versions.
I also did a surfer, but he's a little battered.
Right after I bought the tablet, the class I would have used it the most for got given to another teacher in a big rescheduling snafu we had this semester.
So I don't have as many examples as I would have expected.
I couldn't draw those with a mouse and whatever drawing program I would use.
@dmckee Impressive
I should get one so I can save all of my proofs and derivations
For proofs and stuff like that you don't actually need the pressure sensitivity of a art tablet, and there are some other choices.
Androids and windows tables with styluses.
Microsoft surface and Samsung Notes and things like that.
Not that I'm getting much out of the pressure sensitivity yet, but I convinced myself the brush tool lets me butcher Japanese calligraphy much like I do in real life.
12:43 AM
I'm not buying a whole tablet
I have an iPad already
1:37 AM
1 hour later…
2:41 AM
@FenderLesPaul GR talk tonight
2:52 AM
why D:
3:06 AM
@obe what do you want a GR talk
i can do a GR talk with you
@0celo7 That would be really cool.
I have a phone now also.
I'm actually in the humanities building now
maybe I can steal a whiteboard
do a skype video call
now that would be epic
You should sleep.
uh, I don't have class until 9, mom
and even then, that's just LA
I should be asleep. Though I have data now.
3:12 AM
wait don't you start class tomorrow
For some reason it begins on monday.
Even though other classes begin tomorrow.
Reminds me, I had to do volunteer work for 3 days.
I'm only on chapter 13.
What order should I finish the rest of the book?
@0celo7 Cellular data.
brb getting shooed away
Done being shooed away?
yes, in my room now
So are we discussing GR or not
3:39 AM
Dude I have no GR to discuss, carroll ch3 remember?
I think it would be cool to listen to you and FLP discuss.
well he's a bum
3:50 AM
I never understood how one measures a wave function.
How do you do it?
you don't
Ok. The only property I know it has is $\psi^*\psi$ is the probability of finding it at a point at a particular time.
What else characterizes a wave function?
@0celo7 like for instance, if I can't measure $\psi$, why is it more fundamental than the probability itself? Like why not just use the probability distribution?
4:12 AM
@StanShunpike well for one you can construct many different matter waves (1 particle schrodinger) with the same psi squared
or rather, the same $\langle x|\psi\rangle\langle \psi|x\rangle$
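A minimal numerical sketch of that point (assuming natural units; psi2 below differs from psi1 only by a position-dependent phase, so the position densities match but the momentum distributions don't):

import numpy as np

x = np.linspace(-10, 10, 1024)
psi1 = np.exp(-x**2 / 2)              # Gaussian wave packet at rest
psi2 = psi1 * np.exp(1j * 3 * x)      # same packet boosted: extra phase e^{i k x}, k = 3

# identical position densities...
print(np.allclose(np.abs(psi1)**2, np.abs(psi2)**2))   # True

# ...but different momentum distributions (peak shifted by k = 3)
phi1, phi2 = np.fft.fftshift(np.fft.fft(psi1)), np.fft.fftshift(np.fft.fft(psi2))
k = np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0])) * 2 * np.pi
print(k[np.argmax(np.abs(phi1))], k[np.argmax(np.abs(phi2))])  # ~0 vs ~3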
@0celo7 that would be pretty cool, actually!
@NeuroFuzzy Uh, I hadn't thought of that!
That's a good point
@StanShunpike On a related note I was actually looking for reviews of this answer physics.stackexchange.com/questions/206269/…
@StanShunpike I swear to Master D.J. Trump that we've discussed this before
@0celo7 We have.
I just started playing Splinter Cell: Double Agent
4:24 AM
What is that?
old school stealth games are crazy
no tutorial, I'm sneaking into some base or something
well night y'all
@NeuroFuzzy the issue is that I'm pretty sure using my front-facing laptop cam will show everything backwards
I wonder if I can stream footage from my phone's back cam
something to figure out tomorrow
4:44 AM
@0celo7 or to mirror it!
5:05 AM
@0celo7 oh I thought it was Friday night
can we do it Friday night if you're free?
5:39 AM
@dmckee Which one is Alice and which one is Bob
5:51 AM
Q: Why is @_________ deleting his answers citing 'in order to comply to the site policy'?
user36790While I was waiting for answer for my newly posted question, I noticed one question: A confusion regarding an example in The Feynman Lectures; there @user posted an excellent answer; I cherished his answer especially for the amazing pics he used. But there it was written: Answer deleted by __...
6:45 AM
there needs to be more fields using Alice and Bob
Currently it's just QM, relativity and computer science
7:29 AM
@StanShunpike The wavefunction is a mathematical convenience, much more than a real physical object. Let's say that due to the mathematical formulation of quantum theories, hilbert spaces emerge naturally (and therefore wavefunctions, that are associated to quantum states).
A state on the other hand encodes all the information of a quantum system about measurements. To operationally "measure"/identify a state is a quite difficult task in my opinion.
In fact, a first problem is the fact that a measurement modifies the state, hence you need to be able to prepare a lot of identical states to test
A second problem is that either you are lucky enough and your state is an eigenvector of some operator with multiplicity one (so in principle measuring many times such observable would help you identify unambiguously the state, for you obtain the same measurement over and over, and such value is associated to a single state)
or you need to test many observables many times to identify it; in principle, you would need to test all the observables (that are usually infinite) an infinite number of times each, so it goes without saying that it is not an easy task.
As far as I know there are people who dispute (in research work, not on forums) the structure of some a priori well-established types of states
for example that the state of light produced by lasing is not a coherent state; but a mixed states with certain properties
8:11 AM
@yuggib Plenty of QM formalisms don't have wavefunctions
Well, maybe not plenty
But at least 1 has no equivalent
Does stochastic QM have a wavefunction?
8:25 AM
I know; but every one of them has the Gel'fand-Neumark-Segal construction; for the observables are always assumed to form a $*$-algebra of (maybe) non-commutative objects. Therefore, even if the theory does not need wavefunctions (in the broad sense of Hilbert space and vectors), you can always construct such a representation
@Slereah not even QM in general, just quantum computation, and then because that's mostly computer science
Then again QM tends not to involve people very much :-P
Well, it is also used for mixed states in general
it may not be the better/most convenient one, but it is always there. The point is that to not admit wavfunctions you have to radically change the notion of observables
Well they are all equivalent to some degree in the end
@ChrisWhite I couldn't let this go: faculty get more interaction, but grad students are creepier per capita, so... I'd have to go with grad students
8:28 AM
It's not that incredible that you can recover one from the other
well, keep in mind that even states that are not pure can be represented as Hilbert space vectors in a suitable GNS construction
Well, in principle the meaning of formulating a different theory would be to predict something more than the old theory
otherwise there is no need for the new one, and it becomes just a matter of interpretation
Well yes but the old theory predicts everything that happens
So far not much need for a different thing
Can you recover wavefunctions from the quantum logic formalism?
I am not so familiar with quantum logic; but indeed you can recover them from quantum set theory
I suspect that quantum logic is different though
not an expert anyways
Neither am I
It just seems to be pretty different from most formalism
Basically it redefines basic propositional logic
but is the aim to define a new logic inspired by quantum theory, or the contrary?
8:33 AM
first one
if you change so radically the point of view, it becomes very tough to recover the usual mathematical results
@DavidZ haha I'll keep this in mind next time someone asks me to spend time tutoring undergrads
that are based on the ZFC theory of first-order logic
Apparently quantum logic couldn't do much and to expand it, you have to use quantum filtering
In quantum probability, the Belavkin equation, also known as Belavkin-Schrödinger equation, quantum filtering equation, stochastic master equation, is a quantum stochastic differential equation describing the dynamics of a quantum system undergoing observation in continuous time. It was derived and henceforth studied by Viacheslav Belavkin in 1988. Unlike the Schrödinger equation, which describes deterministic evolution of wavefunction of a closed system (without interaction), the Belavkin equation describes stochastic evolution of a random wavefunction of an open quantum system interacting...
Which looks pretty similar to wavefunctions :p
@ChrisWhite lol
honestly, the undergrads creep on each other way more than anything else
like any college
8:36 AM
in quantum set theory you have a ZFC transfer principle, so you can carry over (to some extent) ZFC assertions to quantum set theory
anyways, Occam's razor would suggest that such a radical change is a bit far-fetched given the success of usual quantum theory and ZFC in math
if it is just for computational convenience then it may be ok
but restricted to that domain
Well everything is for computational convenience, in the end
ahhaha that may be true
long time without JD...I am bored, and I need some divertissement :-D
Time has 4 corners
9:46 AM
Phew. This could go down as the yuck username for the ages:
10:08 AM
10:41 AM
I kinda don't like the whole explanation of Hawking radiation via split pairs of virtual pairs
It's not that helpful
11:18 AM
too accurate
"One day Shizuo Kakutani was teaching a class at Yale. He wrote down a lemma on the blackboard and announced that the proof was obvious. One student timidly raised his hand and said that it wasn't obvious to him. Could Kakutani explain? After several moments' thought, Kakutani realized that he could not himself prove the lemma. He apologized, and said that he
would report back at their next class meeting. After class, Kakutani went straight to his office. He labored for quite a time and found that he could not prove the pesky lemma. He skipped lunch and went to the library to track down the lemma. After much work, he finally found the original paper. The lemma was stated clearly and succinctly. For the proof, the author had written, 'Exercise for the reader.' The author of this 1941 paper was Kakutani."
11:34 AM
"You've earned the "Nice Question" badge (Question score of 10 or more) for "Highest symmetric non-maximally symmetric spacetime"."
@FenderLesPaul yeah
but I had time yesterday
12:02 PM
@0celo7 : no the most complicated and technical books on the market aren't popscience. But some of the stuff you believe is popscience.
@Slereah : the "given" explanation for Hawking radiation is pseudoscience nonsense. Virtual particles are field quanta, not real particles that pop into existence like magic. In addition, there are no negative-energy particles. What there is, is near-infinite gravitational time dilation, which Hawking radiation totally ignores.
hush duffield
We're talking real science.
12:23 PM
Q: Should there be a way to flag comments
@JohnDuffield you do know that's the popsci definition of Hawking radiation?
Indeed, Hawking radiation can be done within the framework of AQFT, which does not refer to particles at all.
It seems AQ has declared jihad on ISIS again.
Not very original.
So anyway
If photons are for seeing
And phonons are for hearing
Where are the smellons
And the tastons
@0celo7 I don't know about Planetscape, but Planescape is very good.
12:39 PM
>We're talking real science.
[...] 10 minutes later:
> If photons are for seeing
> And phonons are for hearing
> Where are the smellons
> And the tastons
strange definition of real science :-O
@Slereah You forgot about feelons.
@ACuriousMind like the cryon; not to be confused with the crayon
@ACuriousMind mm. Typo
@ACuriousMind Planescape is best
Particles are easy to remember
But then you have the laundry list of pseudo particles
Also fuck mesons and hadrons, way too many of them
Like half of the PDG book is mesons and hadrons
12:44 PM
@Slereah Well, uh, that's what it's there for, isn't it?
I suppose
But still
I want to know more about electrons
Not about the p48589c meson
That only appeared once in 1963
During a full moon
@Slereah The word order is off there :D
Shouldn't there be a small number of mesons, really
Aren't most of them just superpositions of basic mesons
Is the game about particles
Wrong conversation
Planescape is the best game
It is about
12:49 PM
What can change the nature of a man?
A hot enough woman
I didn't want an answer to the question, this is the question that appears over and over in the game's story
Basically you are an immortal dude
But every time you are killed, you lose some of your memories
@0celo7 : How about if I ask a question about how Hawking radiation really works, and you explain it? You can tell us all how quantum fluctuations are immune from gravitational time dilation, and how the black hole isn't really black. And isn't really a black hole. And all the other stuff you've got hard mathematical evidence for.
So you kinda have to reconstruct what your life was
12:51 PM
@ACuriousMind what are quantum fluctuations
@ACuriousMind : isn't there some gameboy website where you can ask questions like this?
Isn't that your area of expertise
@0celo7 I've never seen an explanation of that phrase that wasn't either nonsense or trivial.
@JohnDuffield are you ready to finally back up your electron Dirac belt idea
The "best" interpretation of the word "quantum fluctutation" I've found is that there is a standard deviation of observables that is not caused by classical (i.e. statistical/thermal) principles.
12:53 PM
"Quantum funkiness"
Like how quantum fluctuation of the stress energy tensor is <T²> - <T>²
But that's a trivial consequence of the non-commutativity of observables/the uncertainty principle, so it isn't really mysterious.
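A concrete instance of that reading (my own two-line example, nothing specific to the stress tensor): take the Z-eigenstate of a spin-1/2 and compute the "fluctuation" of X; the nonzero variance ⟨X²⟩ − ⟨X⟩² is exactly the non-classical spread in question.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
up = np.array([1, 0], dtype=complex)   # Z eigenstate |0>

exp_X = np.real(up.conj() @ X @ up)          # <X>   = 0
exp_X2 = np.real(up.conj() @ (X @ X) @ up)   # <X^2> = 1
print("variance <X^2> - <X>^2 =", exp_X2 - exp_X**2)  # 1.0: a purely quantum spread
```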
So you're telling me the vacuum is not a boiling sea of particles popping in and out of existence
@0celo7 I think I've also told you that before, so yes^^
@Slereah Well, oddly, not the times when you are killed during the game (this always irked me) :P
12:56 PM
@0celo7 : Ask the question. Meanwhile it's like I said. We make electrons in pair production out of electromagnetic waves. We can diffract electrons. We describe electrons with the Dirac equation, which is a wave equation. We know that in atomic orbitals electrons "exist as standing waves", and when we annihilate that electron we get an electromagnetic wave again. And I didn't make up Dirac's belt. A guy called Dirac did that.
I didn't make up the wave nature of matter either.
@ACuriousMind that was rhetorical
@JohnDuffield asking a question about your pet theory is by definition non mainstream
I'd probably VTC it myself
I know ACM would and he'd enjoy it too
@0celo7 Except for your pet's theory, right?
@0celo7 : I am. There are no particles popping in and out of existence. That's a lies-to-children non-explanation. Or as Slereah might say, it's popscience crap for kids.
@ACuriousMind Depends how badly you are killed, I think
Plus you can die for real during gameplay
Though it is pretty rare
1:01 PM
Should I play this game
You should
Is it not too old
I can only remember two occasions where you can die for real
Old games are too hard for me
And sometimes too boring
If you piss off the Lady of Pain and if you piss off the giant smith guy
I would say it is pretty good
1:02 PM
Spoiler alert
She is called the "Lady of Pain"
On principle avoid pissing her off
For some reason splinter cell double agent is locked at 720p...it's eye cancer
@0celo7 It's a different kind of old than Morrowind. The gameplay is not that fun, it's mostly about the story and the world, which is told through giant chunks of unvoiced text.
yeah, the fighting system is nothing special
Know what else is a great story but a poor game?
I have no mouth and I must scream
@ACuriousMind I think the morrowind gameplay is sleep inducing
1:04 PM
Great story, great atmosphere, reallly poor gameplay
How long have you played Morrowind
@0celo7 : re asking a question about your pet theory is by definition non mainstream. None of what I said above is my pet theory. It's all mainstream. No doubt you'll be dismissing Dirac like you dismissed Einstein, and generally trashing this website with your trollery.
I stopped playing morrowind because the walking speed is too slow
Yeah that's kind of a problem of morrowind
It starts off pretty slow
Early combat is boring
You miss most of your hits
@0celo7 For that, I would forgive you to just set your speed to 100 or something
@ACuriousMind how many mainstream authors think electrons are photons going around a loop? @Slereah
1:05 PM
It does get better after a while, though
@0celo7 Exactly 0.
I'm not even sure what that means
I seem to have missed all the books that mention that
I know plenty of weird theories about electrons, but none of those are that
@JohnDuffield sorry, it's not mainstream
1:06 PM
Very early atomic physics had the idea that electrons were rings around the atom
There was also the whole electron as spacetime defects
And I think my trashing is pretty beneficial
Electrons as black holes
Hm, what else was there
which one
Only one electron?
1:07 PM
Oh yes
@0celo7 A variant of Feynman-Wheeler where it is one electron going back and forth through time :D
It never got really made into a real theory, but some wondered if there was only one electron in the universe
@Slereah : electrons aren't black holes. You can diffract electrons. In atomic orbitals electrons exist as standing waves.
Sometimes, it emits a photon, and goes back in time as a positron
1:07 PM
Yeah @Slereah you know nothin about electron and black holes
Well I didn't say the theory panned out
@Slereah : that's Wheeler for you.
It was just an idea put forward
Well Einstein did put the idea forward of electrons as wormholes
@JohnDuffield No, their position probability distribution is the square of something that might be interpreted as a standing wave.
Wheeler didn't know the difference between curved spacetime and curved space. If he had, he wouldn't have called them geons. He would have called them...
1:10 PM
Can we change John Duffield
I think ours broke
@ACuriousMind : that's cargo-cult woo. It's quantum field theory. Not quantum point-particle theory.
He is on repeat
@JohnDuffield That is literally what you obtain from solving the Dirac or Schrödinger equations.
I think the notion of "it's a probability amplitude" is about as old as quantum theory itself
Older than Dirac certainly
And that's quantum mechanics. In quantum field theory, you can't have your standing waves or such, because you don't describe electrons or other things as solutions to the Dirac equation there.
1:13 PM
Well you can have waves still
But they are
@Slereah I wouldn't call things that depend on field configurations instead of space or spacetime "waves", really :P
Well I wouldn't call something that isn't made of water "waves", but here we are!
Bah, anything you don't know about you think is non-mainstream, and yet you believe hook line and sinker in woo peddled by popscience quacks which flatly contradicts not just Einstein/Maxwell/Dirac/etc, but the patent blatant evidence of electron diffraction etc. What planet are you guys on? Oh, and have you ever seen this movie?
@Slereah No waves in oil for you?
1:16 PM
@Slereah No, I meant the delicious stuff you get when smashing olives
Do not shake your olive oil please
But perhaps olive oil also has sinful connotations for you...
FFS. Talk about chatroom trolls. I volunteer to be a moderator.
@JohnDuffield What's up? People are allowed to chat here, and that's what this is.
1:26 PM
Q: Can jet fuel melt steel beams?
Max Ruuli: Common sense suggests that steel beams should not yield under burning jet fuel without presence of other substances that produce very high temperatures when burning, such as thermite. So can jet fuel melt steel beams?
How timely
1:39 PM
I think I was chat banned.
It said "room is read only"
were you
What did I do?
did you deny Einstein and the Evidence
@0celo7 This time around, I didn't see anything banworthy.
But there is one removed message from you, I just can't remember what it said
This time around?
Are you saying I've been rightfully banned in the past?
I have to seriously disagree with that...I think. Although there might have been that one time where some idiot starred something obscene or something.
1:42 PM
@0celo7 Well, in the other cases, I at least knew for what you were banned. This time, I have no clue
I blame the astronomer for that one.
@0celo7 Not in all cases.
This is my third ban.
@0celo7 Yeah, I'm thinking of that
I can't remember what it was.
I guess it shows how much time I've spent on 4chan. I wouldn't even think of flagging something "inappropriate"
1:44 PM
Did you talk about the Tits group perhaps
Oh ffs it was my lady of pain comment
Who the hell flagged that
lol what
what did you dooo
How the hell was that even flag worthy
I said "I like my ladies to give me a bit of you know what"
I swear to god if I get banned for that again
1:46 PM
You know what rhymes with train
How do chat bans work
I can't believe someone actually flagged that
1:48 PM
@0celo7 You're banned from chatting.
ban @0celo7 to demonstrate
@ACuriousMind thanks
Do multiple people have to flag?
Ah, how do you get them? I think they're either dealt out by hand by a moderator, or applied if flags on your posts are deemed valid (either by a mod or by enough 10k users (2, probably))
@ACuriousMind you seriously thought I wanted to know what "chat ban" means
I have plenty of experience
Nah, one person flags them, and then all 10k users get a blue thingy where they can look at the flagged post and deem the flag "valid", "invalid" or "not sure"
@0celo7 No, I was just messing with you :D
1:50 PM
Did you say valid
I didn't even see a flag
Who on earth said valid??
But I was away from my PC for a while vacuuming, so I probably missed it.
That was totally not ban worthy, was it??
I'd not have thought so, but we'll never find out what exactly the thought process here was.
1:56 PM
@ACuriousMind well as long as you believe in me
2:18 PM
Q: What is the Cleanup badge awarded for?
Aniket: It is written in the 'Badges' page that the 'Cleanup' badge is awarded for the "first rollback". What does this mean? I could not understand. Can anyone help?
@0celo7 tsk, tsk, tsk :P
2:36 PM
I need a new avatar.
Get one
What should it be?
I was considering that
2:44 PM
I need to figure out if I'm in the Orange or white section
I think we're checkering
a picture of Einstein dressed as Sherlock Holmes
Elementary my dear
Do it.
Who is the shoop master here
Maybe @dmckee could draw it on his fancy tablet
The redskin logo is cool.
Oh I'd probably get banned for something so offensive
2:48 PM
how about an ocelot
That's what it is right now.
The hell
I'm paranoid about bans now
This is sad
Should I flag this as not an answer perhaps : physics.stackexchange.com/questions/190222/…
@Slereah It has been flagged as NAA at least twice I think
It also already has two delete votes on it, only one more 20k user required.
2:59 PM
That is good.
|
299aa400a8e6c51b | What is light?
Light as waves
According to all physics books, light is an electromagnetic wave described by the four Maxwell equations, formulated in 1865 by James Clerk Maxwell. The solutions to these equations describe electromagnetic (EM) waves – oscillating electric and magnetic fields – which, without the need for a medium, advance at 300,000 km/s, the speed of light. Light's wave nature had already been confirmed by Thomas Young's double-slit experiment. Maxwell's laws are a result of classical physics and are commonly considered fundamental. EM waves will – if unhindered – expand spherically in three dimensions, so that the energy received on a surface of fixed size decreases with the square of the distance to the source.
Maxwell based his equations on the results of research into electrical and magnetic phenomena by, among others, Michael Faraday. The existence of electric and magnetic fields (states of empty space resulting from the electric and/or magnetic charges present therein) was assumed. How that state of empty space comes about was and still remains unknown. However, in quantum physics nowadays EM fields are considered to be the result of elementary particles, photons, interacting with other matter.
Light as photons
A black body is a hollow device whose inside walls can be heated to high temperatures and will then emit EM radiation. The hollow space is closed and cannot lose its energy to the outside, so the walls will eventually radiate and absorb in equal measure. A negligibly small hole provides access from outside to measure the intensity and frequency distribution.
However, in 1900 Max Planck saw that the EM radiation emitted by such black bodies had to be quantized, that is, EM radiation was emitted and absorbed in packets with an energy directly proportional to their frequency. Planck thus laid the foundation for quantum physics and at the same time placed a bomb under Maxwell's chair. Planck's hypothesis was confirmed countless times in the course of the following century. EM energy is emitted and absorbed in the form of quanta, later called photons. This was in direct contradiction with the Maxwell wave picture. The energy of a photon does not decrease with the distance to the source. A light source of 1 watt that emits all that energy as light of 500 THz (orange-yellow) emits about 3×10¹⁸ photons per second. Because of these unimaginably huge numbers, the behavior of light on our normal scale of observation will be so close to that spherical expansion of energy that it can no longer be distinguished from the Maxwell EM wave. At the atomic level, however, light exhibits a totally different image: energy packets, photons.
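The photon count follows from dividing the emitted power by the energy per photon, E = hν; a minimal sketch of the arithmetic:

```python
h = 6.626e-34   # Planck's constant, J*s
nu = 500e12     # frequency, Hz (500 THz)
P = 1.0         # emitted power, W

E_photon = h * nu               # energy per photon, about 3.3e-19 J
rate = P / E_photon             # photons emitted per second
print(f"{rate:.1e} photons/s")  # about 3.0e+18
```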
Wave-particle paradox?
According to one view, light is a wave, a fact that already seemed to be confirmed by Young's double-slit experiment. According to the other view, light consists of particles, quanta or photons. Those two ideas cannot be combined. For instance, the energy of a photon depends only on its frequency and therefore does not decrease with the distance to its source the way a wave's does. We thereby face a paradox. Now be aware that a paradox is the result of a hidden, false premise, unless we assume that nature is fundamentally paradoxical. But we don't give up that easily. Can we find that hidden, false premise?
A metaphysical state wave
In the Copenhagen interpretation of Bohr and Heisenberg, the quantum state wave that follows from the Schrödinger equation is a probability wave that is not physical. Only upon measurement does the quantum object physically manifest itself. The quantum state wave is therefore metaphysical: something that is not physical but does affect the physical. That interpretation has been confirmed by all Bell tests and various delayed-choice experiments. What exactly constitutes a measurement is extremely important but nevertheless poorly defined and even controversial. But that's not the point now.
Bosons and EM-fields
The photon is considered nowadays as an elementary particle, one of the bosons, an element of the Standard Model. Bosons are particles that represent the field forces. They make up one of the two classes of particles, the other being fermions. The mutual repulsion of two electrons is depicted in a Feynman diagram by the exchange of a photon. In the image below of that process, the location in space is displayed horizontally and the time is displayed vertically increasing from bottom to top.
Feynman diagram for the mutual repulsion of two electrons by a photon exchange.
An electron comes from the left and a second electron comes from the right. When they approach each other, up to a certain distance, they exchange a photon, which causes them to reverse direction; through that exchange they swap momentum. This actually replaces the field idea with mutual interactions between particles, here two fermions (electrons) and a boson (photon). In that way we have rid physics of that misunderstood and uncomfortable field as a mysterious state of empty space. Field forces can now be explained as just particles that collide with each other, exchanging momentum, something that we understand very well. In that view the billiard-ball model is back in physics and everything seems explained by direct particle interactions.
Now think about how those electrons sense each other's nearness so they can decide to exchange a photon. Physicists nowadays suppose that traveling electrons continuously eject and absorb virtual photons, by which their explanation has become virtually virtual.
The photon is not physical, ever
As stated in the Copenhagen interpretation of quantum physics and later confirmed by experiments, a quantum object, which a photon is, is not physical until it is detected in the measuring instrument. Not physical means here that a physical description has no meaning and that observation is not possible. There is only a non-physical, therefore metaphysical, quantum state wave, a probability wave, that, according to current interpretations, extends from the source to the measuring instrument. Upon detection of the photon, however, the energy of the photon is instantaneously converted into an excited electron and the photon is annihilated. Did it appear physically at the same moment as its annihilation? At the source, precisely the opposite happened: an excited electron released its energy in the form of a photon that exists only in the form of a state wave, which means that it does not physically exist.
The question then arises: at what moment does the photon actually exist physically? Well, never.
The speed of light
It must have become clear that the Maxwell equations are descriptions of the large-scale behavior of billions upon billions of photons, but that they cease to be valid at the atomic level. Just as fluid mechanics, still valid at the macro level, ceases to be valid at the level of the water molecules. And just as the ideal gas law, which is the result of huge numbers of gas molecules, loses its meaning and validity at the level of the individual molecules. Viewed in this way, the paradox of light wave and photon is no longer a real paradox. Maxwell's laws are simply no longer valid at the photon level.
But then the wave behavior of individual photons should not be interpreted as EM behavior, but as the behavior of a metaphysical quantum wave. It also becomes important to realize that the fact that the speed of light is always and everywhere the same is not a result of electric and magnetic fields and their properties, but must be a result of metaphysical quantum wave behavior. With that, the speed of light loses the physical foundation it had in the results of the Maxwell equations, which in turn provided Einstein with a foundation in setting up his special theory of relativity.
The metaphysical properties of light
Thus the experience of light is nothing more than the experience of an energy transfer from one electron to another. Regardless of their distance. As Planck stated. As Bohr had already realized with his atomic model in which he stated that the jump from the electron from one orbit to the other had to be instantaneous. This discrete package of energy, the so-called photon, is not physically on its way from source to destination. There exists only a metaphysical quantum state wave that can best be interpreted as a probability distribution in time and space. And what are probabilities? They are expectations, thoughts about the physical world. Metaphysics. No more. No less.
Maxwell’s laws are thus no longer fundamental in this view, but merely a model that only makes reasonably correct predictions on a scale that is many orders of magnitude higher than the atomic domain. With that insight, the particle-wave paradox of light has disappeared.
So, how do we explain single photon interference?
You may now ask: if light consists of particles, that is, photons, and the EM wave is no longer a good description of light's behavior at the atomic level, how can interference occur when we send single photons one by one through a double slit? The answer is now actually relatively simple. The interference should not be seen as an effect of overlapping physical EM waves but of overlapping metaphysical state waves. If metaphysical waves exist, they will also show interference, as this is inherent in wave behavior.
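Whatever one calls the wave, the operational content of single-photon interference is that detection positions are drawn from a |ψ|² distribution. A toy simulation (a sketch of mine; the slit parameters are invented for illustration) shows the fringes building up photon by photon:

```python
import numpy as np

# Far-field two-slit pattern: I(x) ~ cos^2(pi*d*x/(lam*L)) * sinc^2(a*x/(lam*L))
lam, d, a, L = 500e-9, 50e-6, 10e-6, 1.0   # wavelength, slit spacing, slit width, screen distance (m)
x = np.linspace(-0.02, 0.02, 2001)          # position on the screen (m)

envelope = np.sinc(a * x / (lam * L)) ** 2          # single-slit diffraction envelope
fringes = np.cos(np.pi * d * x / (lam * L)) ** 2    # two-slit interference term
prob = envelope * fringes
prob /= prob.sum()                                  # normalized detection probabilities

rng = np.random.default_rng(1)
hits = rng.choice(x, size=5000, p=prob)    # 5000 one-at-a-time photon detections
counts, edges = np.histogram(hits, bins=80)
print(counts)   # the histogram of single detections traces out the fringe pattern
```

Each photon lands at a single point; only the accumulated statistics reproduce the interference pattern, which is exactly the probability-wave reading advocated above.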
The challenge lies in accepting the idea that reality is not limited to the strictly material physical.
|
43feaa463b7f7da7 | Monday, April 30, 2012
Spring came late to Germany, but it seems it finally has arrived. The 2012 Riesling has the first leaves and the wheat is a foot high.
Lara and Gloria are now 16 months old, almost old enough that we should start counting their age in fractions of years. This month's news is Lara's first molar, and Gloria's first word:
I have been busy writing a proposal for the Swedish Research Council, which is luckily submitted now, and I also had a paper accepted for publication. Ironically, from all the papers that I wrote in the last years, it's the one that is the least original and cost me the least amount of time, yet it's the only one that smoothly went through peer review.
Besides this, I'm spending my time with the organization of a workshop, a conference, and a four-week long program. I'm also battling a recurring ant infestation in our apartment, which is complicated by my hesitation to distribute toxins where the children play.
Friday, April 27, 2012
The Nerdly Painter's Blog
Wednesday, April 25, 2012
The Cosmic Ray Composition Problem
A recent arXiv paper provides an update on the cosmic ray composition problem:
First the basics: We're talking about the ultra-high-energy end of the cosmic ray spectrum, with total energies of about 10⁶ TeV. That's the energy of the incident particles in the Earth rest frame, not the center-of-mass energy of their collision with air molecules (i.e. mostly nucleons), which is "only" of the order 10 TeV, and thus somewhat larger than what the LHC delivers.
After the primary collision, the incoming particles produce a cascade of secondary particles, known as a "cosmic ray shower," which can be detected on the ground. These showers are then reconstructed from the data with suitable software so that, ideally, the physics of the initial high-energy collision can be extracted. For some more details on cosmic ray showers, please read this earlier post.
Cosmic ray shower, artist's impression. Source: ASPERA
The Pierre Auger Cosmic Ray Observatory is a currently running experiment that measures cosmic ray showers on the ground. One relevant quantity about the cosmic rays is the "penetration depth," that is, the distance the primary particle travels through the atmosphere until it makes the first collision. The penetration depth can be reconstructed if the shower on the ground is measured sufficiently precisely, and such data is relatively new.
The penetration depth depends on the probability of the primary particle to interact, and with that on the nature of the particle. While we have never actually tested the collisions at the center-of-mass energies of the highest energetic cosmic rays, we think we have a pretty good understanding of what's going on by virtue of the standard model of particle physics. All the knowledge that we have, based on measurements at lower energies, is incorporated into the numerical models. Since the collisions involve nucleons rather than elementary particles, this goes together with an extrapolation of the parton distribution function by the DGLAP equation. This sounds complicated, but since QCD is asymptotically free, it should actually get easier to understand at high energies.
Shaham and Piran in their paper argue that this extrapolation isn't working as expected, which might be a signal for new physics.
The reason is that the penetration depth data shows that at high energies the probability of the incident particles to interact peaks at a shorter depth and is also more strongly peaked than one expects for protons. Now it might be that at higher energies the cosmic rays are dominated by other primary particles, heavier ones, that are more probable to interact, thus moving the peak of the distribution to a shorter depth. However, if one adds a contribution from other constituents (heavier ions: He, Fe...) this also smears out the distribution over the depth, and thus doesn't fit the width of the observed penetration depth distribution.
This can be seen very well from the figure below (Fig 2 from Shaham and Piran's paper) which shows the data from the Pierre Auger Collaboration, and the expectation for a composition of protons and Fe nuclei. You can see that adding a second component does have the desired effect of moving the average value to a shorter depth. But it also increases the width. (And, if the individual peaks can be resolved, produces a double-peak structure.)
Fig 2 from arXiv:1204.1488. Shown is the number of events in the energy bin 1 to 1.25 × 10⁶ TeV as a function of the penetration depth. The red dots are the data from the Pierre Auger Collaboration (arXiv:1107.4804), the solid blue line is the expectation for a combination of protons and Fe nuclei.
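The mean-versus-width tension can be reproduced with a two-Gaussian toy model; a minimal sketch in Python, with equal component widths for clarity (all numbers are invented for illustration, not Auger values):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000

# Toy penetration-depth components (g/cm^2): protons deeper, iron shallower.
protons = rng.normal(loc=800, scale=40, size=N)
iron    = rng.normal(loc=700, scale=40, size=N)

for f in (0.0, 0.25, 0.5):
    n_fe = int(f * N)
    mix = np.concatenate([protons[: N - n_fe], iron[:n_fe]])
    print(f"iron fraction {f:.2f}: mean = {mix.mean():6.1f}, std = {mix.std():5.1f}")
```

The mean moves to a shorter depth as the heavy fraction grows, but the variance picks up the separation term f(1−f)(μ_p − μ_Fe)², so no mixture can be as narrow as a pure beam while also matching the shorter mean.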
The authors thus argue that there is no composition of the ultra-high-energy primary cosmic ray particles that fits the data well. Shaham and Piran think that this mismatch should be taken seriously. While different simulations yield slightly different results, the results are comparable and none of the codes fits the data. If it's not the simulation, the mismatch comes about either from the data or the physics.
"There are three possible solutions to this puzzling situation. First, the observational data might be incorrect, or it is somehow dominated by poor statistics: these results are based on about 1500 events at the lowest energy bin and about 50 at the highest one. A mistake in the shower simulations is unlikely, as different simulations give comparable results. However, the simulations depend on the extrapolations of the proton cross sections from the measured energies to the TeV range of the UHECR collisions. It is possible that this extrapolation breaks down. In particular a larger cross section than the one extrapolated from low energies can explain the shorter penetration depth. This may indicates new physics that set in at energies of several dozen TeV."
The authors are very careful not to jump to conclusions, and I won't either. To be convinced there is new physics to find here, I would first like to see a quantification of how bad the best fit from the models actually is. Unfortunately, there's no chi-square/dof in the paper that would allow such a quantification, and as illustrative as the figure above is, it's only one energy bin and might be a misleading visualization. I am also not at all sure that the different simulations are actually independent from each other. Since scientific communities exchange information rapidly and efficiently, there exists a risk for systematic bias even if several models are considered. Possibly there's just some cross-section missing or wrong. Finally, there's nothing in the paper about how the penetration depth data is obtained to begin with. Since that's not a primary observable, there must be some modeling involved too, though I agree that this isn't a likely source of error.
With these words of caution ahead, it is possible that we are looking here at the first evidence for physics beyond the standard model.
Monday, April 23, 2012
Can we probe planck-scale physics with quantum optics?
You might have read about this some weeks ago on Chad Orzel's blog or at Ars Technica: Nature published a paper by Pikovski et al on the possibility to test Planck scale physics with quantum optics. The paper is on the arXiv under arXiv:1111.1979 [quant-ph]. I left a comment at Chad's blog explaining that it is implausible the proposed experiment will test any Planck scale effects. Since I am generally supportive of everybody who cares about quantum gravity phenomenology, I'd have left it at this, and be happy that Planck scale physics made it into Nature. But then I saw that Physics Today picked it up, and before this spreads further, here's an extended explanation of my skepticism.
Igor Pikovski et al have proposed a test for Planck scale physics using recent advances in quantum optics. The framework they use is a modification of quantum mechanics, expressed by a deformation of the canonical commutation relation, that takes into account that the Planck length plays the role of a minimal length. This is one of the most promising routes to quantum gravity phenomenology, and I was excited to read the article.
In their article, the authors claim that their proposed experiment is feasible to "probe the possible effects of quantum gravity in table-top quantum optics experiment" and that it reaches a "hitherto unprecedented sensitivity in measuring Planck-scale deformations." The reason for this increased sensitivity for Planck-scale effects is, according to the authors own words, that "the deformations are enhanced in massive quantum systems."
Unfortunately, this claim is not backed up by the literature the authors refer to.
The underlying reason is that the article fails to address the question of Lorentz-invariance. The deformation used is not invariant under normal Lorentz-transformations. There are two ways to deal with that, either breaking Lorentz-invariance or deforming it. If it is broken, there exists a multitude of very strong constraints that would have to be taken into account and are not mentioned in the article. Presumably then the authors implicitly assume that Lorentz-symmetry is suitably deformed in order to keep the commutation relations invariant - and in order to test something actually new. This can in fact be done, but comes at a price. Now the momenta transform non-linearly. Consequently, a linear sum of momenta is no longer Lorentz-invariant. In the appendix however, the authors have used the normal sum of momenta to define the center-of-mass momentum. This is inconsistent. To maintain Lorentz-invariance, the modified sum must be used.
This issue cannot be ignored for the following reason. If a suitably Lorentz-invariant sum is used, it contains higher-order terms. The relevance of these terms does indeed increase with the mass. This also means that the modification of the Lorentz-transformations becomes more relevant with the mass. Since this is a consequence of just summing up momenta, and has nothing in particular to do with the nature of the object that is being studied, the increasing relevance of corrections prevents one from reproducing a macroscopic limit that is in agreement with our knowledge of Special Relativity. This behavior of the sum, whose use, we recall, is necessary for Lorentz-invariance, is thus highly troublesome. This is known in the literature as the "soccer ball problem." It is not mentioned in the article.
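To make the growth with particle number concrete, here is a toy sketch with a first-order deformed addition p ⊕ q = p + q + pq/M_P (a common illustrative form; the specific deformation and all numbers are my assumptions, in 1D and units of the Planck momentum):

```python
# Toy "soccer ball problem": compose N single-particle momenta with the
# deformed sum p (+) q = p + q + p*q/M_P (illustrative form, 1D, M_P = 1).
M_P = 1.0
p_single = 1e-7          # one particle's momentum, tiny compared with M_P
N = 10**6                # number of constituents

def dsum(p, q, M=M_P):
    """First-order deformed momentum addition."""
    return p + q + p * q / M

total = 0.0
for _ in range(N):
    total = dsum(total, p_single)

linear = N * p_single
print(f"linear sum:    {linear:.6e}")
print(f"deformed sum:  {total:.6e}")
print(f"relative correction: {(total - linear) / linear:.2e}")
# The correction grows like N*p/M_P, so a macroscopic body would see O(1)
# departures from Special Relativity unless the scale is rescaled to
# N*M_P, as in the Magueijo-Smolin prescription mentioned below.
```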
If the soccer-ball problem persists, the theory is in conflict with observation already. While several suggestions have been made how this problem can be addressed in the theory, no agreement has been reached to date. A plausible and useful ad-hoc suggestion that has been made by Magueijo and Smolin is that the relevant mass scale, the Planck mass, for N particles is rescaled to N times the Planck mass. Ie, the scale where effects become large moves away when the number of particles increases.
Now, that this ad-hoc solution is correct is not clear. What is clear however is that, if the theory makes sense at all, the effect must become less relevant for systems with many constituents. A suppression with the number of constituents is a natural expectation.
If one takes into account that for sums of momenta the relevant scale is not the Planck mass, but N times the Planck mass, the effect the authors consider is suppressed by roughly a factor 10¹⁰. This means the existing bounds (for single particles) cannot be significantly improved in this way. This is the expectation that one can have from our best current understanding of the theory.
This is not to say that the experiment should not be done. It is always good to test new parameter regions. And, who knows, all I just said could turn out to be wrong. But it does mean that based on our current knowledge, it is extremely unlikely that anything new is to be found there. And vice versa, if nothing new is found, this cannot be used to rule out a minimal length modification of quantum mechanics.
(This is not the first time btw, that somebody tried to exploit the fact that the deviations get larger with mass by using composite systems, thereby promoting a bug to a feature. In my recent review, I have a subsection dedicated to this.)
Sunday, April 22, 2012
Experimental Search for Quantum Gravity 2012
It is my great pleasure to let you know that there will be a third conference on Experimental Search for Quantum Gravity, October 22 to 25, this year, at Perimeter Institute. (A summary of the ESQG 2007 is here, and a summary from 2010 is here.) Even better is that this time it wasn't my initiative but Astrid Eichhorn's, who is also to be credited for the theme "The hard facts." The third of the organizers is Lee Smolin, who has been of great help also with the last meeting. But most important, the website of the ESQG 2012 is here.
We have an open registration with a moderate fee of CAN$ 115, which is mostly to cover catering expenses. There is a limit to the number of people we can accommodate, so if you are interested in attending, I recommend you register early. If time comes, I'll tell you some more details about the meeting.
Thursday, April 19, 2012
Schrödinger meets Newton
In January, we discussed semi-classical gravity: Classical general relativity coupled to the expectation value of quantum fields. This theory is widely considered to be only an approximation to the still looked-for fundamental theory of quantum gravity, most importantly because the measurement process messes with energy conservation if one were to take it seriously, see earlier post for details.
However, one can take the point of view that whatever the theorists think is plausible or not should still be experimentally tested. Maybe the semi-classical theory does in fact correctly describe the way a quantum wave-function creates a gravitational field; maybe gravity really is classical and the semi-classical limit exact, we just don't understand the measurement process. So what effects would such a funny coupling between the classical and the quantum theory have?
Luckily, to find out, it isn't really necessary to work with full general relativity; one can instead work with Newtonian gravity. That simplifies the issue dramatically. In this limit, the equation of interest is known as the Schrödinger-Newton equation. It is the Schrödinger equation with a potential term, and the potential term is the gravitational field of a mass distributed according to the probability density of the wave-function. It looks like this:

iℏ ∂ψ(r,t)/∂t = [ −(ℏ²/2m) ∇² − G m² ∫ |ψ(r′,t)|² / |r − r′| d³r′ ] ψ(r,t)
Inserting a potential that depends on the expectation value of the wave-function makes the Schrödinger-equation non-linear and changes its properties. The gravitational interaction is always attractive and thus tends to contract pressureless matter distributions. One expects this effect to show up here by contracting the wave-packet. Now the usual non-relativistic Schrödinger equation results in a dispersion for massive particles, so that an initially focused wave-function spreads with time. The gravitational self-coupling in the Schrödinger-Newton equation acts against this spread. Which one wins, the spread from the dispersion or the gravitational attraction, depends on the initial values.
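A toy numerical experiment makes the competition between dispersion and self-gravity visible. The sketch below is my own 1D caricature (dimensionless units, a softened 1/|x−x′| kernel, ad hoc coupling), not the 3D setup of the studies discussed next:

```python
import numpy as np

# Toy 1D Schrödinger-Newton evolution in dimensionless units (hbar = m = 1).
# A softened 1/|x - x'| kernel stands in for the 3D Newtonian potential;
# the coupling g and all other numbers are ad hoc illustrative choices.
Nx, Lbox = 512, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, Nx, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)

g, eps, dt = 5.0, 0.5, 0.002

psi = ((1 / np.pi) ** 0.25 * np.exp(-x**2 / 2)).astype(complex)  # Gaussian packet
kernel = -1.0 / np.sqrt((x[:, None] - x[None, :]) ** 2 + eps**2)

def potential(psi):
    # Phi(x) = -g * integral of |psi(x')|^2 / sqrt((x - x')^2 + eps^2) dx'
    return g * kernel @ (np.abs(psi) ** 2 * dx)

for step in range(5001):
    if step % 1000 == 0:
        width = np.sqrt(np.sum(x**2 * np.abs(psi) ** 2) * dx)
        print(f"t = {step * dt:5.2f}   packet width = {width:.3f}")
    # Strang splitting: half kick, full drift in Fourier space, half kick
    psi = psi * np.exp(-0.5j * dt * potential(psi))
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5j * dt * potential(psi))
```

With the coupling set to zero the packet width grows steadily (free dispersion); increasing it slows, halts, or reverses the spread, which is the qualitative behavior described above.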
However, the gravitational interaction is very weak, and so is the effect. For typical systems in which we study quantum effects, either the mass is not large enough for a collapse, or the typical time for it to take place is too long. Or so you are led to think if you make some analytical estimates.
The details are left to a numerical study though because the non-linearity of the Schrödinger-Newton equation spoils the attempt to find analytical solutions. And so, in 2006 Carlip and Salzmann surprised the world by claiming that according to their numerical results, the contraction caused by the Schrödinger-Newton equation might be possible to observe in molecule interferometry, many orders of magnitude off the analytical estimate.
It took five years until a check of their numerical results came out, and then two papers were published almost simultaneously:
• Schrödinger-Newton "collapse" of the wave function
J. R. van Meter
arXiv:1105.1579 [quant-ph]
• Gravitationally induced inhibitions of dispersion according to the Schrödinger-Newton Equation
Domenico Giulini and André Großardt
arXiv:1105.1921 [gr-qc]
They showed independently that Carlip and Salzmann's earlier numerical study was flawed and that the accurate numerical result fits the analytical estimate very well. Thus, the good news is one understands what's going on. The bad news is, it's about 5 orders of magnitude off today's experimental possibilities. But that's in an area of physics where progress is presently rapid, so it's not hopeless!
It is interesting what this equation does, so let me summarize the findings from the new numerical investigation. These studies, I should add, have been done by looking at the spread of a spherical symmetric Gaussian wave-packet. The most interesting features are:
• For masses smaller than some critical value, m ≲ (ℏ²/(G σ))^(1/3), where σ is the width of the initial wave-packet, the entire wave-packet expands indefinitely. (A numerical check of this estimate follows after this list.)
• For masses larger than that critical value, the wave-packet fragments and a fraction of the probability propagates outwards to infinity, while the rest remains localized in a finite region.
• From the cases that eventually collapse, the lighter ones expand initially and then contract, the heavier ones contract immediately.
• The remnant wave function approaches a stationary state, about which it performs dampened oscillations.
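Plugging numbers into the critical-mass estimate from the first item above (a minimal sketch; the packet width σ = 0.5 μm is my illustrative choice):

```python
# Critical mass m ~ (hbar^2 / (G * sigma))^(1/3) for a given packet width.
hbar = 1.055e-34   # J*s
G = 6.674e-11      # m^3 kg^-1 s^-2
sigma = 0.5e-6     # initial wave-packet width, m (illustrative)

m_crit = (hbar**2 / (G * sigma)) ** (1 / 3)
print(f"critical mass ~ {m_crit:.1e} kg")           # ~ 7e-18 kg
print(f"             ~ {m_crit / 1.66e-27:.1e} u")  # ~ 4e+9 atomic mass units
```

A critical mass of order 10⁹ atomic mass units, against the roughly 10⁴ u reached in molecule interferometry at the time, is consistent with the five orders of magnitude quoted above.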
That the Schrödinger-Newton equation leads to a continuous collapse might lead one to think it could play a role for the collapse of the wave-function, an idea that has been suggested already in 1984 by Lajos Diosi. However, this interpretation is questionable because it became clear later that the gravitational collapse that one finds here isn't suitable to be interpreted as a wave-function collapse to an eigenstate. For example, in this 2002 paper, it was found that two bumps of probability density, separated by some distance, will fall towards each other and meet in the middle, rather than focus on one of the two initial positions as one would expect for a wave-function collapse.
Monday, April 16, 2012
The hunt for the first exoplanet
The little prince
Today, extrasolar planets, or exoplanets for short, are all over the news. Hundreds are known, and they are cataloged in The Extrasolar Planets Encyclopaedia, accessible for everyone who is interested. Some of these extrasolar planets orbit a star in what is believed to be a habitable zone, fertile ground for the evolution of life. Planetary systems, much like ours, have turned out to be much more common results of stellar formation than had been expected.
But the scientific road to this discovery has been bumpy.
Once one knows that stars on the night sky are suns like our own, it doesn't take a big leap of imagination to think that they might be accompanied by planets. Observational evidence for exoplanets was looked for already in the 19th century, but the field had a bad start.
Beginning in the 1950s, several candidates for exoplanets made it into the popular press, yet they turned out to be data flukes. At that time, the experimental method used relied on detecting minuscule changes in the motion of the star caused by a heavy planet of Jupiter type.
If you recall the two-body problem from 1st semester: It's not that one body orbits the other, but they both orbit around their common center-of-mass, just that, if one body is much heavier than the other, it might almost look like the lighter one is orbiting the heavier one. But if a sufficiently heavy planet orbits a star, one might in principle find out by watching the star very closely because it wobbles around the center-of-mass. In the 50s, watching the star closely meant watching its distance to other stellar objects. The precision which could be achieved this way simply wasn't sufficient to reliably tell the presence of a planet.
In the early 80s, Gordon Walker and his postdoc Bruce Campbell from British Columbia, Canada, pioneered a new technique that improved the possible precision by which the motion of the star could be tracked by two orders of magnitude. Their new technique relied on measuring the star's absorption lines, whose frequency depends on the motion of the star relative to us because of the Doppler effect.
To make that method work, Walker and Campbell had to find a way to precisely compare spectral images taken at different times, so they'd know how much the spectrum had shifted. They found an ingenious solution: they used the very regular and well-known molecular absorption lines of hydrogen fluoride gas. The comb-like absorption lines of hydrogen fluoride served as a ruler relative to which they could measure the star's spectrum, allowing them to detect even the smallest changes. Then, together with astronomer Stephenson Yang, they started looking at candidate stars which might be accompanied by Jupiter-like planets.
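The underlying measurement principle is just the nonrelativistic Doppler relation Δλ/λ = v_r/c. A quick sketch of the precision involved (the 10 m/s wobble below is my illustrative choice for what such surveys aimed to detect):

```python
c = 3.0e8       # speed of light, m/s
lam = 550e-9    # a visible absorption line, m
v_r = 10.0      # radial-velocity wobble to detect, m/s

dlam = lam * v_r / c
print(f"line shift: {dlam:.1e} m  ({dlam / lam:.1e} fractional)")
# ~1.8e-14 m, a few parts in 1e8 -- which is why a stable wavelength
# ruler such as the HF absorption spectrum was essential.
```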
To detect the motion of the star due to the planet, they would have to record the system for several completed orbits. Our planet Jupiter needs about 12 years to orbit the sun, so they were in for a long-term project. Unfortunately, they had a hard time finding support for their research.
In his recollection “The First High-Precision Radial Velocity Search for Extra-Solar Planets” (arXiv:0812.3169), Gordon Walker recounts that it was difficult to get time for their project at observatories: “Since extra-solar planets were expected to resemble Jupiter in both mass and orbit, we were awarded only three or four two-night observing runs each year.” And though it is difficult to understand today, back then many of Walker's astronomer colleagues thought the search for exoplanets a waste of time. Walker writes:
“It is quite hard nowadays to realise the atmosphere of skepticism and indifference in the 1980s to proposed searches for extra-solar planets. Some people felt that such an undertaking was not even a legitimate part of astronomy. It was against such a background that we began our precise radial velocity survey of certain bright solar-type stars in 1980 at the Canada France Hawaii 3.6-m Telescope.”
After years of data taking, they had identified several promising candidates, but were too cautious to claim a discovery. At the 1987 meeting of the American Astronomical Society in Vancouver, Campbell announced their preliminary results. The press reported happily yet another discovery of an exoplanet, but the astronomers regarded even Walker and Campbell's cautious interpretation of the data with large skepticism. In his article “Lost world: How Canada missed its moment of glory,” Jacob Berkowitz describes the reaction of Walker and Campbell's colleagues:
“[Campbell]'s professional colleagues weren't as impressed [as the press]. One astronomer told The New York Times he wouldn't call anything a planet until he could walk on it. No one even attempted to confirm the results.”
Walker's gifted postdoc Bruce Campbell suffered most from the slow-going project that lacked appreciation and had difficulties getting continuing funding. In 1991, after more than a decade of data taking, they still had no discovery to show up with. Campbell meanwhile had reached age 42, and was still sitting on a position that was untenured, was not even tenure-track. Campbell's frustration built up to the point where he quit his job. When he left, he erased all the analyzed data in his university account. Luckily, his (both tenured) collaborators Walker and Yang could recover the data. Campbell made a radical career change and became a personal tax consultant.
But in late 1991, Walker and Yang were finally almost certain to have found sufficient evidence of an exoplanet around the star gamma Cephei, whose spectrum showed a consistent 2.5-year wobble. In a fateful coincidence, when Walker just thought they had pinned it down, one of his colleagues, Jaymie Matthews, came by his office, looked at the data and pointed out that the wobble in the data coincided with what appeared to be periods of heightened activity on the star's surface. Walker looked at the data with new eyes and, mistakenly, came to believe that they had all along been watching an oscillating star rather than a periodic motion of the star's position.
Briefly after that, in early 1992, Nature reported the first confirmed discovery of an exoplanet by Wolszczan and Frail, based in the USA. Yet, the planet they found orbits a millisecond pulsar (probably a neutron star), so for many the discovery doesn't score highly because the star's collapse would have wiped out all life in that planetary system long ago.
In 1995 then, astronomers Mayor and Queloz of the University of Geneva announced the first definitive observational evidence for an exoplanet orbiting a normal star. The planet has an orbital period of a few days only, no decade long recording was necessary.
It wasn't until 2003 that the planet that Walker, Campbell and Yang had been after was finally confirmed.
There are three messages to take away from this story.
First, Berkowitz in his article points out that Canada failed to have faith in Walker and Campbell's research at the time when just a little more support would have made them first to discover an exoplanet. Funding for long-term projects is difficult to obtain and it's even more difficult if the project doesn't produce results before it's really done. That can be an unfortunate hurdle for discoveries.
Second, it is in hindsight difficult to understand why Walker and Campbell's colleagues were so unsupportive. Nobody ever really doubted that exoplanets exist, and with the precision of measurements in astronomy steadily increasing, sooner or later somebody would be able to find statistically significant evidence. It seems that a few initial false claims had a very unfortunate backlash that exceeded what was reasonable.
Third, in the forest of complaints about lacking funding for basic research, especially for long-term projects, every tree is a personal tragedy.
Saturday, April 14, 2012
How to Teach Relativity to Your Dog
By Chad Orzel
Basic Books (February 28, 2012)
Thursday, April 12, 2012
Some physics-themed ngram trends
I've been playing again with Google ngram, which shows the frequency by which words appear in books that are in the Google database, normalized to the number of books. Here are some keywords from physics that I tried which I found quite interesting.
In the first graph below you see "black hole" in blue which peaks around 2002, "big bang" in red which peaks around 2000, "quantization" in green which peaks to my puzzlement around 1995, and "dark matter" in yellow which might peak or plateau around 2000. Data is shown from 1920 to 2008. Click to enlarge.
In the second graph below you see the keywords "multiverse" in blue, which increases since about 1995 but interestingly seems to have been around much before that, "grand unification" in yellow which peaks in the mid 80s and is in decline since, "theory of everything" in green which plateaus around 2000, and "dark energy" in red which appears in the late 90s and is still sharply increasing. Data is shown from 1960 to 2008. Click to enlarge.
This third figure shows "supersymmetry" in blue, which peaks around 1985 and 2001, "quantum gravity" in red, which might or might not have plateaued, and "string theory" in green, which seems to have decoupled from supersymmetry in early 2002 and avoided a drop. Data is shown from 1970 to 2008.
A graph that got so many more hits it wasn't useful to plot it with the others: "emergence" which peaked in the late 90s. Data is shown from 1900 to 2008.
More topics of the past: "cosmic rays" in blue which was hot in the 1960s, "quarks" in green which peaks in the mid 90s, and "neutrinos" in red peak around 1990. Data is shown from 1920 to 2008.
Even quantum computing seems to have maxed out (data is shown from 1985 to 2008).
So, well, then what's hot these days? See below "cold atoms" in blue, "quantum criticality" in red and "qbit" in green. Data is shown from 1970 to 2008.
So, condensed matter and cosmology seem to be the wave of the future, while particle physics is in the decline and quantum gravity doesn't really know where to go. Feel free to leave your interpretation in the comments!
Tuesday, April 10, 2012
Be careful what you wish for
And despite that, not to forget the hopes and dreams.
Mars btw has to our best current knowledge indeed two moons.
Sunday, April 08, 2012
Happy Easter!
Stefan honors the Easter tradition by coloring eggs every year. The equipment for this procedure is stored in a cardboard shoe-box labeled "Ostern" (Easter). The shoe-box dates back to the 1950s and once contained a pair of shoes produced according to the newest orthopedic research.
I had never paid much attention to the shoe-box but as Stefan pointed out to me this year, back then the perfect fit was sought after by x-raying the foot inside the shoe. The lid of the box contains an advertisement for this procedure which was apparently quite common for a while.
Click to enlarge. Well, they don't x-ray your feet in the shoe stores anymore, but Easter still requires coloring the eggs. And here they are:
Happy Easter everybody!
Friday, April 06, 2012
Book Review: "The Quest for the Cure" by B.R. Stockwell
By Brent R. Stockwell
Columbia University Press (June 1, 2011)
As a particle physicist, I am always amazed when I read about recent advances in biochemistry. As far as I am concerned, the human body is made of ups and downs and electrons, kept together by photons and gluons - and that's pretty much it. But in biochemistry, they have all these educated-sounding words. They have enzymes and amino acids, they have proteases, peptides and kinases. They have a lot of proteins, and molecules with fancy names used to drug them. And these things do stuff. Like break up and fold and bind together. All these fancy-sounding things and their interactions are what makes your body work; they decide over your health and your demise.
With all that foreign terminology however, I've found it difficult to impossible to read any paper on the topic. In most cases, I don't even understand the title. If I make an effort, I have to look up every second word. I do just fine with the popular science accounts, but these always leave me wondering just how do they know this molecule does this and how do they know this protein breaks there, fits there, and that causes cancer and that blocks some cell-function? What are the techniques they use and how do they work?
When I came across Stockwell's book "The Quest for the Cure" I thought it would help me solve some of these mysteries. Stockwell himself is a professor of biology and chemistry at Columbia University. He's a guy with many well-cited papers. He knows words like oligonucleotides and is happy to tell you how to pronounce them: oh-lig-oh-NOOK-lee-oh-tide. Phosphodiesterase: FOS-foh-dai-ESS-ter-ays. Nicotinonitrile: NIH-koh-tin-oh-NIH-trayl. Erythropoietin: eh-REETH-roh-POIY-oh-ten. As a non-native speaker I want to complain that this pronunciation help isn't of much use for a non-phonetic language; I can think of at least three ways to pronounce the syllable "lig." But then that's not what I bought the book for anyway.
The starting point of "The Quest for the Cure" is a graph showing the drop in drug approvals since 1995. Stockwell sets out to explain first what the origin of this trend is, and then what can be done about it. In a nutshell, the issue is that many diseases are caused by proteins which are today considered "undruggable," which means they are folded in a way that small molecules, the kind suitable for creating drugs, can't bind to the proteins' surfaces. Unfortunately, it's only a small number of proteins that can be targeted by presently known drugs:
"Here is the surprising fact: All of the 20,000 or so drug products that ever have been approved by the U.S. Food and Drug Administration interact with just 2% of the proteins found in human cells."
And fewer than 15% are considered druggable at all.
Stockwell covers a lot of ground in his book, from the early days of genetics and chemistry to today's frontier of research. The first part of the book, in which he lays out the problem of the undruggable proteins, is very accessible and well-written. Evidently, a lot of thought went into it. It comes with stories of researchers and patients who were treated with new drugs, and how our understanding of diseases has improved. In the first chapters, every word is meticulously explained or technical terms are avoided to the level that "taken orally" has been replaced by "taken by mouth."
Unfortunately, the style deteriorates somewhat thereafter. To give you an impression, it starts reading more like this:
"Although sorafenib was discovered and developed as an inhibitor of RAF, because of the similarity of many kinases, it also inhibits several other kinases, including the patelet-derived growth factor, the vascular endothelia growth factor (VEGF) receptors 2 and 3, and the c-KIT receptor."
Now the book contains a glossary, but it's incomplete (e.g. it contains neither VEGF nor c-KIT). With the large amount of technical vocabulary, at some point it doesn't matter anymore whether a word was introduced, because if it's not something you deal with every day it's difficult to keep in mind the names of all sorts of drugs and molecules. It gets worse if you put down the book for a day or two. This doesn't contribute to the readability of the book, and it is somewhat annoying to realize that much of the terminology is never used again, so one doesn't really know why it was necessary to use it to begin with.
The second part of the book deals with possibilities for overcoming the problem of the undruggable molecules. In that part of the book, the stories of researchers curing patients are replaced with stories of the pharmaceutical industry, the start-up of companies, and the ups and downs of their stock prices.
Stockwell's explanations left me wanting on exactly the points I would have been most interested in. He writes, for example, a few pages about nuclear magnetic resonance and that it's routinely used to obtain high-resolution 3-d pictures of small proteins. One does not, however, learn how this is actually done, other than that it requires "complicated magnetic manipulations" and "extremely sophisticated NMR methods." He spends a paragraph and an image on light-directed synthesis of peptides that is vague at best, and one learns that peptides can be "stapled" together, which improves their stability, yet one has no clue how this is done.
Now the book is extremely well referenced, and I could probably go and read the respective papers in Science. But then I would have hoped that Stockwell's book would save me exactly this effort.
On the upside, Stockwell does an amazingly good job communicating the relevance of basic research and the scientific method, and in my opinion this makes up for the above shortcomings. He tells stories of unexpected breakthroughs that came about by little more than coincidence, he writes about the relevance of negative results and control experiments, and how scientific research works:
"There is a popular notion about new ideas in science springing forth from a great mind fully formed in a dazzling eureka moment. In my experience this is not accurate. There are certainly sudden insights and ideas that apear to you from time to time. Many times, of course, a little further thought makes you realize it is really an absolutely terrible idea... But even when you have an exciting new idea, it begins as a raw, unprocessed idea. Some digging around in the literature will allow you to see what has been done before, and whether this idea is novel and likely to work. If the idea survives this stage, it is still full of problems and flaws, in both the content and the style of presenting it. However, the real processing comes from discussing the idea, informally at first... Then, as it is presented in seminars, each audience gives a series of comments, suggestions, and questions that help mold the idea into a better, sharper, and more robust proposal. Finally, there is the ultimate process of submission for publication, review and revision, and finally acceptance... The scientific process is a social process, where you refine your ideas through repeated discussions and presentations."
He also writes, in moderate doses, about his own research and experience with the pharmaceutical industry.
The proposals Stockwell makes for dealing with the undruggable proteins have a solid basis in today's research. He isn't offering dreams or miracle cures, but points out hopeful recent developments, for example how it might be possible to use larger molecules. The problem with large molecules is that they tend to be less stable and don't enter cells readily, but he quotes research that shows possibilities to overcome this problem. He also explains the concept of a "privileged structure": structures that, with slight alterations, have been found to bind to several proteins. Using such privileged structures might allow one to sort through a vast parameter space of possible molecules with a higher success rate. He also talks about using naturally occurring structures and the difficulties with that. He ends his book by emphasizing the need for more research on this important problem of the undruggable proteins.
In summary: "The Quest for the Cure" is a well-written book, but it contains too many technical expressions, and in many places the scientific explanations are vague or lacking. It comes with some figures which are very helpful, but there could have been more. You don't need to read the blurb to figure out that the author isn't a science writer but a researcher. I guess he's done his best, but I also think his editor should have dramatically sorted out the vocabulary, or at least insisted on a more complete glossary. Stockwell makes up for this overdose of biochemistry lingo by communicating very well the relevance of basic research and the power of the scientific method.
I'd give this book four out of five stars because I appreciate Stockwell has taken the time to write it to begin with.
Wednesday, April 04, 2012
On the importance of being wrong
Monday, April 02, 2012
Sunday, April 01, 2012
Computer Scientists develop Software for Virtual Member of Congress
A group of computer scientists from Rutgers University has published software intended for crowd-sourcing the ideal candidate. "We were asking ourselves: Why do we waste so much time with candidates who disagree with themselves, aren't able to recall their party's program, and whose intellectual output is inferior even to Shit Siri Says?" recalls Arthur McTrevor, who led the project. "Today, we have software that can perform better."
McTrevor and his colleagues then started coding what they refer to as the "unopinionated artificial intelligence" of the virtual representative, the main information processing unit. The unopinionated intelligence is a virtual skeleton which comes alive by crowd-sourcing opinions from a selected group of people, for example party members. Members feed the software with opinions, which are then aggregated and reformulated to minimize objectionable statements. The result: the perfect candidate.
The virtual candidate also has a sophisticated speech assembly program, a pleasant-looking face, and a fabricated private life. Visual and auditory appearance can be customized. The virtual candidate has a complete and infallible command of the constitution and all published statistical data, and can reproduce quotations from memorable speeches and influential books in the blink of an eye. "80 microseconds, actually," said McTrevor. The software moreover automatically creates and feeds its own Facebook account and Twitter feed.
The group from Rutgers tested the virtual representative in a trial run whose success is reported in a recent issue of Nature. In their publication, the authors point out that the virtual representative is not a referendum that aggregates the opinions of the general electorate. Rather, it serves a selected group to find and focus their identity, which can then be presented for election.
In an email conversation, McTrevor was quick to point out that the virtual candidate is made in the USA, with its patent dated 2012. The candidate will thus be eligible to run for Congress at the "age" of 25, in 2037. |
a800987c46f91c0d | Condensed-phase Molecular Spectroscopy and Photophysics
• Format: Hardcover
• Copyright: 2012-12-26
• Publisher: Wiley
Highlighting the molecule-environment interactions that strongly influence spectra in condensed phases, Condensed-Phase Molecular Spectroscopy and Photophysics provides a comprehensive treatment of radiation-matter interactions for molecules in condensed phases as well as metallic and semiconductor nanostructures. Each chapter in this graduate-level molecular spectroscopy text contains problems ranging from simple through to complex. Topics unique to this text include the spectroscopy and photophysics of molecular aggregates and molecular solids, metals and semiconductors, and an emphasis on nanoscale size regimes.
Author Biography
ANNE MYERS KELLEY earned a BS in chemistry from the University of California, Riverside, in 1980 and a PhD in biophysical chemistry from the University of California, Berkeley, in 1984. Following postdoctoral work at the University of Pennsylvania, she held faculty positions at the University of Rochester (1987–1999) and Kansas State University (1999–2003) before becoming one of the founding faculty at the University of California, Merced, in 2003. Her primary research area has been resonance Raman spectroscopy, linear and nonlinear, but she has also worked in several other areas of spectroscopy including single-molecule and line-narrowed fluorescence, four-wave mixing, and time-resolved methods. She is a Fellow of the American Physical Society and the American Association for the Advancement of Science.
Table of Contents
1. Review of Time-Independent Quantum Mechanics
A. states, operators, and representations
B. eigenvalue problems and the Schrödinger equation
C. expectation values, uncertainty relations
D. particle in a box
E. harmonic oscillator
F. the hydrogen atom and angular momentum
G. approximation methods
H. electron spin
I. Born-Oppenheimer approximation
J. molecular orbitals
K. energies and time scales, separation of motions
2. Electromagnetic Radiation
A. classical description of light
B. quantum mechanical description of light
C. Fourier transform relationships between time and frequency
D. blackbody radiation
E. light sources for spectroscopy
3. Radiation-Matter Interactions
A. the time-dependent Schrödinger equation
B. time-dependent perturbation theory
C. interaction of matter with the classical radiation field
D. interaction of matter with the quantized radiation field
4. Absorption and Emission of Light by Matter
A. Einstein coefficients for absorption and emission
B. other measures of absorption strength (absorption cross-section, Beer-Lambert Law)
C. radiative lifetimes
D. oscillator strengths
E. local fields
5. System-Bath Interactions
A. phenomenological treatment of relaxation and lineshapes
B. the density matrix
C. density matrix methods in spectroscopy
D. exact density matrix solution for a 2-level system
6. Symmetry Considerations
A. qualitative aspects of molecular symmetry
B. introductory group theory
C. finding the symmetries of vibrational modes of a certain type
D. finding the symmetries of all vibrational modes
7. Molecular Vibrations and Infrared Spectroscopy
A. vibrational transitions
B. diatomic vibrations
C. anharmonicity
D. polyatomic molecular vibrations; normal modes
E. symmetry considerations
F. isotopic shifts
G. solvent effects on vibrational spectra
8. Electronic Spectroscopy
A. electronic transitions
B. spin and orbital selection rules
C. spin-orbit coupling
D. vibronic structure
E. vibronic coupling
F. the Jahn-Teller effect
G. considerations in large molecules
H. solvent effects on electronic spectra
9. Photophysical Processes
A. Jablonski diagrams
B. quantum yields and lifetimes
C. Fermi’s Golden Rule for radiationless transitions
D. internal conversion and intersystem crossing
E. intramolecular vibrational redistribution
F. energy transfer
G. polarization and molecular reorientation in solution
10. Light Scattering
A. Rayleigh scattering from particles
B. classical treatment of molecular Raman and Rayleigh scattering
C. quantum mechanical treatment of molecular Raman and Rayleigh scattering
D. nonresonant Raman scattering
E. symmetry considerations and depolarization ratios in Raman scattering
F. resonance Raman spectroscopy
11. Nonlinear and Pump-Probe Spectroscopies
A. linear and nonlinear susceptibilities
B. multiphoton absorption
C. pump-probe spectroscopy: transient absorption and stimulated emission
D. vibrational oscillations and impulsive stimulated scattering
E. harmonic and sum frequency generation
F. four-wave mixing
G. photon echoes
12. Electron Transfer Processes
A. charge-transfer transitions
B. Marcus theory
C. spectroscopy of anions and cations
13. Collections of Molecules
A. van der Waals molecules
B. dimers and aggregates
C. localized and delocalized excited states
D. conjugated polymers
14. Metals and Plasmons
A. dielectric function of a metal
B. plasmons
C. spectroscopy of metal nanoparticles
D. surface-enhanced Raman and fluorescence
15. Crystals
A. crystal lattices
B. phonons in crystals
C. infrared and Raman spectra
D. phonons in nanocrystals
16. Electronic Spectroscopy of Semiconductors
A. band structure
B. direct and indirect transitions
C. excitons
D. defects
E. semiconductor nanocrystals
Appendices
A. Physical constants, unit systems and conversion factors
B. Miscellaneous mathematics review
C. Matrices and determinants
D. Character tables for point groups
E. Fourier transforms
|
369d25ce503c9511 | Schrödinger Approach and Density Gradient Model for Quantum Effects Modeling
A.Ferron1, B.Cottle2, G.Curatola3, G.Fiori3, E.Guichard1
1 Silvaco Data Systems, 55 rue Blaise Pascal, 38330 Montbonnot Saint-Martin, France
2 Silvaco International, 4701 Patrick Henry Dr., Santa Clara, CA 95054, USA
3 University of Pisa, Via Diotisalvi 2, I-56122, Pisa, Italy
We describe here two approaches to model the quantum effects that can no longer be neglected in current and future devices. These models are the Schrödinger-Poisson and Density-Gradient methods, fully integrated in the device simulator ATLAS. Simulations based on these methods are compared to each other on electron concentration and C-V curves in a MOS-capacitor.
Advanced silicon technology tends towards ever thinner gate oxides and shorter gates, resulting in significant quantum effects. The most relevant effect is the confinement of the carriers. For instance, in a Metal-Oxide-Semiconductor capacitor C-V characteristic, the threshold voltage is shifted and the apparent oxide thickness is increased compared to the C-V characteristic expected with a semi-classical approach. To model this confinement accurately in a device simulator based on a drift-diffusion approach, two methods are treated in this paper. The first one, and the most accurate, is to include the Schrödinger equation in a self-consistent computation with the Poisson equation. Unfortunately this solution, due to its non-locality, has a significant numerical cost and cannot be efficiently coupled with the continuity equations giving the current flow in practical applications. All the same, this method is used in 1D as a reference: the C-V characteristic and the carrier density profiles are useful to validate simpler methods. Different simpler methods compatible with the drift-diffusion approach have been developed [1, 2]. In this paper we describe a density gradient model which introduces a quantum potential correction in the continuity equations. In the following, we present first the Schrödinger-Poisson model, then the density gradient model, and finally the comparison of the two.
Schrödinger-Poisson Model (S-P)
The confinement effect appears in very thin oxide devices, where the potential barrier at the SiO2/Si interface is higher and the well deeper than in a thick-oxide device. This quantum confinement is well described by solving the single-particle Schrödinger equation. Solved self-consistently with the Poisson equation, it provides the eigenvalues and eigenvectors along the three directions of k-space. Denoting by ml, mt1 and mt2 the electron longitudinal effective mass and the two electron transverse effective masses, respectively, the electron density is written as:
where x is the position along a vertical slice (normal to the gate oxide), ψli, Eli (resp. ψti, Eti) are the i-th longitudinal (resp. transverse) eigenvector and eigenvalue, kB is the Boltzmann constant, T is the temperature, h is the Planck constant and EF is the Fermi level. For the holes, a similar expression is obtained with the light- and heavy-hole effective masses. For a 2D device, the S-P equation is solved along a set of 1D parallel slices under the gate. At the ends of each slice an infinite potential is set as a boundary condition. As this assumption is unphysical at the SiO2/Si interface, the S-P model has been designed to include the gate oxide in the solver so that the eigenvectors, and thus the carriers, can penetrate into the oxide. In the silicon oxide, effective masses of 0.3 and 1.0 have been defined for electrons and holes, respectively. A full description of this S-P model is presented in [3] together with the works presented in [4, 5].
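For orientation, one standard effective-mass form of this density (a sketch assuming (100) silicon with two-fold degenerate longitudinal valleys and four-fold degenerate transverse valleys, and ħ = h/2π; the exact expression used in [3] may differ in its valley bookkeeping):

n(x) = (kB T)/(π ħ²) Σi [ 2 √(mt1 mt2) |ψli(x)|² ln(1 + exp((EF − Eli)/(kB T))) + 4 √(ml mt2) |ψti(x)|² ln(1 + exp((EF − Eti)/(kB T))) ]

Each subband i contributes its 2D density of states times the Fermi-Dirac occupancy of the in-plane motion, weighted by the probability density |ψ(x)|² of the corresponding eigenvector.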
To illustrate this model, one defines a MOS-capacitor with a p-type substrate doped at 10¹⁸ cm⁻³ and a 3 nm gate oxide thickness. In inversion mode (Vgate=1.0 V), Figure 1 shows the 5 first longitudinal and transverse eigenvectors (ml=0.98, mt1=mt2=0.19 have been set). The corresponding electron concentration is depicted in Figure 2 and compared with a semi-classical profile. It shows that the peak in the quantum simulation is no longer at the interface (x=0 coordinate), as it is in the semi-classical simulation. The quantum confinement is correctly modeled.
Figure 1a. 5 first longitudinal wave functions.
Figure 1b. 5 first transverse wave functions.
Figure 2. Semi-classical (dotted line) and quantum (solid line)
electron concentration in log scale.
Density Gradient Model (DG)
The density gradient method is an approach compatible with the drift-diffusion treatment used in device simulators. Different methods have been proposed [6-8]; we present here one of them. It applies a quantum potential correction Λ in the current density expression (if Boltzmann statistics is assumed), where
µn is the electron mobility,
ψ is the electrostatic potential,
nie is the intrinsic carrier concentration,
m is the electron effective mass,
γ is a fit factor.
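For orientation, a sketch of the corrected expressions in the notation of [8] (the electron quasi-Fermi potential φn and the sign conventions are my additions; the ATLAS implementation may differ in details):

Jn = −q µn n ∇φn,  with  n = nie exp( q(ψ + Λ − φn)/(kB T) )  and  Λ = −(γ ħ²)/(6 m) · (∇²√n)/√n .

Setting Λ = 0 recovers the classical drift-diffusion current; the fit factor γ scales the strength of the quantum correction.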
The γ factor has been introduced to adjust the quantum correction, which has been obtained after a few simplifications. Discussions about its introduction can be found in [7-9]. In this way it accounts for the fact that only one mass is used in the DG model whereas three are used in the S-P model. It can also be adjusted depending on the temperature of operation and the device type (bulk, SOI, double gate).
The boundary conditions are the same as in a semi-classical scheme; the only additional condition is that the quantum correction vanishes at the contacts.
This model is compared to the S-P model in Figure 3. The same device as described in section 2 has been used; the γ factor has been set to 3.6 (its default value, as indicated in [8]) and to 3.4, which fits the S-P electron profile better. The electron concentration is displayed on a linear scale and the x=0 coordinate corresponds to the interface. Figure 4 is a zoom around the peak; it shows that the difference between S-P and DG with γ=3.4 is less than 1% at the peak. This confirms that the DG model is suitable to capture quantum effects.
Figure 3. S-P (solid line) and DG (dashed and dotted lines) electron profiles.
Figure 4. Electron profiles, zoom of Figure 3 around the peak: S-P in solid line, DG/γ=3.4 in dashed line, and DG/γ=3.6 in dotted line.
Then for each approach, semi-classical, Schrödinger-Poisson and Density-Gradient, we display in Figure 5 the C-V characteristics. The device used is the same as described in section 2, and γ=3.4 has been set for the DG model.
Figure 5. C-V curves, semi-classical in
dashed line, S-P in dotted line and DG in solid line.
We clearly note the shift of the threshold voltage near 0.5 volt and the reduction of the quantum capacitance in inversion mode (Vg > 0.5 V). The difference observed between the S-P approach and the DG model in strong accumulation is explained by the fact that the charge is treated in a fully quantum scheme in the S-P solver whereas part of the charge should be treated semi-classically. However, this small error is not really important: the more strongly doped the substrate, the less the carriers are confined [9]; moreover, an actual MOSFET operates in inversion mode, and Figure 5 shows the very good agreement between the DG model and the S-P approach in this case.
We have presented two approaches to model quantum confinement in MOSFETs, implemented in the commercial device simulator ATLAS. The Schrödinger-Poisson model is suitable for any kind of 1D or 2D device (with planar or non-planar gate oxide) in which quantum effects are important, under bias conditions not too far from equilibrium (for instance, a small bias on the drain can be applied). This solver has been developed in collaboration with the University of Pisa and has shown excellent agreement with their in-house code. The density gradient model has then been described, and its results, based on carrier profiles and C-V curves, have proven its capability to model the quantum confinement correctly with an adjustment of the γ factor.
1. W.Hänsch et al., “Carrier transport near the Si/SiO2 interface of a MOSFET”, Solid-State Electron., vol.32, p.839, 1989.
2. M.J van Dort et al., “A simple model for quantization effects in heavily-doped silicon MOSFET’s at inversion conditions”, Solid-State Electron., vol.37, p.411, 1994.
3. Simulation Standard, Volume 12, Number 11, November 2002 on http://www.silvaco.com
4. S.Gennai, G.Iannaccone, “Detailed calculation of the vertical electric field in thin oxide MOSFETs”, Electronics Letters, 35, p.1881 , 1999.
5. G.Iannaccone, F.Crupi, B.Neri, S.Lombardo, “Suppressed shot noise in trap-assisted tunneling of metal-oxide capacitors”, Appl. Phys. Lett. 77, pp.2876-2878, 2000.
6. M.G.Ancona, H.F.Tiersten, “Macroscopic physics of the silicon inversion layer”, Physical Review B, vol.35, 15, pp.7959-7965, 1987.
7. M.G.Ancona, “Density-gradient theory analysis of electron distributions in heterostructures”, Superlattices and Microstructures, vol.7, No.2, 1990.
8. Andreas Wettstein et al., “Quantum Device-Simulation with the Density-Gradient Model on Unstructured Grids”, IEEE Transactions On Electron Devices, vol. 48, No.2, February 2001.
9. G.Chindalore et al., “An experimental study of the effect of quantization on the effective electrical oxide thickness in MOS electron and hole accumulation layers in heavily doped Si”, IEEE Transactions On Electron Devices, vol. 47, No.3, March 2000.
|
602445e437cf3b93 | A Black-Scholes-Schrödinger Equation
In Quantum Physics and Options I promised to discuss the essentials of quantum mechanics that are relevant for option pricing. In classical mechanics Newton's law of motion determines the position of a particle at a given time by a deterministic function. So classical mechanics can be seen as the (deterministic) evolution of a stock price with zero volatility. In contrast, in quantum mechanics the particle's evolution is random, as is the case for a stock price with non-zero volatility. In the one-dimensional case the particle's position is a random variable that can take any value on the real line. At some fixed time the probability of finding the quantum particle at position x is given by the product of the wave function times its complex conjugate.
The wave function, or more generally the state vector of the quantum system, is an element of a linear vector space. In quantum mechanics, physically measurable quantities such as energy, position and so on are represented by Hermitian operators that map the linear vector space onto itself. The Hamilton operator evolves the system in time. Whereas the Hamilton operator in quantum mechanics is Hermitian (Schrödinger equation), in finance it is in general not. Let us take a look at the Black-Scholes equation (used for example for option pricing with constant volatility), with C the option price, S the stock price, r the risk-free rate and σ the volatility:

∂C/∂t + (σ²/2) S² ∂²C/∂S² + r S ∂C/∂S − r C = 0.

Changing the variable S = eˣ

leads us to a "Black-Scholes-Schrödinger" equation

∂C/∂t = H_BS C

with the Black-Scholes Hamiltonian given by

H_BS = −(σ²/2) ∂²/∂x² + (σ²/2 − r) ∂/∂x + r.
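As a quick numerical check (a minimal sketch with arbitrarily chosen parameters r, σ, K, T; not code from the original post), one can evolve the call payoff backwards in maturity, ∂C/∂τ = −H_BS C with τ = T − t, on a log-price grid and compare with the closed-form Black-Scholes price:

import numpy as np
from scipy.stats import norm

r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
x = np.linspace(np.log(K) - 4.0, np.log(K) + 4.0, 801)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / sigma**2          # stability bound for explicit stepping
n = int(round(T / dt)); dt = T / n   # land exactly on tau = T

C = np.maximum(np.exp(x) - K, 0.0)   # call payoff at maturity (tau = 0)
for i in range(n):
    tau = (i + 1) * dt
    Cxx = (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2
    Cx = (C[2:] - C[:-2]) / (2*dx)
    # dC/dtau = (sigma^2/2) C_xx + (r - sigma^2/2) C_x - r C  =  -H_BS C
    C[1:-1] += dt * (0.5*sigma**2*Cxx + (r - 0.5*sigma**2)*Cx - r*C[1:-1])
    C[0] = 0.0                                   # boundary S -> 0
    C[-1] = np.exp(x[-1]) - K*np.exp(-r*tau)     # boundary S -> infinity

S0 = 100.0
d1 = (np.log(S0/K) + (r + 0.5*sigma**2)*T) / (sigma*np.sqrt(T))
bs = S0*norm.cdf(d1) - K*np.exp(-r*T)*norm.cdf(d1 - sigma*np.sqrt(T))
print(np.interp(np.log(S0), x, C), bs)           # both ~ 10.45

The diffusion term of H_BS plays the role of the kinetic energy, which is why the heat-kernel methods of quantum mechanics carry over.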
Seen as a quantum mechanical system, the Black-Scholes equation has one degree of freedom (x). Next week we will compare properties of the Schrödinger equation with properties of our Black-Scholes Schrödinger equation. |
7c2cc1f287ce2fe0 | Photosynthesis: from the antenna to the reaction center
How a wave packet travels through a quantum electronic interferometer
Together with Christoph Kreisbeck and Rafael A Molina I have contributed a blog entry to the News and Views section of the Journal of Physics describing our most recent work on an Aharonov-Bohm interferometer with an embedded quantum dot (article, arxiv). Can you spot Schrödinger’s cat in the result?
Transition between the resistivity of the nanoring with and without embedded quantum dot. The vertical axis denotes the Fermi energy (controlled by a gate), while the horizontal axis scans through the magnetic field to induce phase differences between the pathways.
Splitting the heat: the quantum limits of thermal energy flow
Device geometry. a) Scanning electron micrograph of the sample. The 1D waveguides with a lithographic width of 170 nm form a half-ring connected to reservoirs A-F. A global top-gate is present. Heating of reservoirs A, B is generated by applying a current Ih; thermal noise measurements are performed at contacts E, F. The reservoirs C and D are left floating. b) Device potential for the ballistic transport model with labels A∗ and E∗ denoting the joined reservoirs A+B and E+F. Harmonic waveguide network with Gaussian scatterer (indicated by arrow). Mode spacing is ħω = 5 meV. © 2016 Kramer et al. Citation: AIP Advances 6, 065306 (2016);
With ever shrinking sizes of electronic transistors, the quantum mechanical nature of electrons becomes more visible. For instance, two electrons with the same spin orientation and velocity cannot be at the same location (Pauli blocking). At low temperatures, electronic waves travel many micrometers completely coherently, only reflected by the geometry of the confinement. A tight confinement leads to a larger separation of quantized energy levels and restricts the lateral spread of the electrons to specific eigenmodes of a nanowire.
The distribution of the electronic current into the various channels is then given by the geometrical scattering properties of the device interior, which are conveniently computed using wave packets. The ballistic electrons entering a nanodevice carry along charge and thermal energy. The maximum amount of thermal energy Q per time which can be transported through a single channel between two reservoirs of different temperatures is limited to Q ≤ π² kB² (T2² − T1²)/(3h) [h denotes Planck’s and kB Boltzmann’s constant]. This has implications for computing devices, since it restricts the cooling rate (Pendry 1982).
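Taking the quoted bound at face value, a two-line evaluation (my illustrative numbers, not from the original post) shows the scale of the maximal cooling power per channel:

import numpy as np

# Single-channel bound as printed above, for a 1 K hot and a 0.1 K cold reservoir.
kB = 1.380649e-23    # J/K
h = 6.62607015e-34   # J s
T2, T1 = 1.0, 0.1
Q_max = np.pi**2 * kB**2 * (T2**2 - T1**2) / (3 * h)
print(Q_max)         # ~ 9.4e-13 W, i.e. about a picowatt per channel

A picowatt per channel at kelvin temperatures is why on-chip cooling of many hot electrons requires many parallel channels.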
In a collaboration with the novel materials group at Humboldt University (Prof. S.F. Fischer, Dr. C. Riha, Dr. O. Chiatti, S. Buchholz), and using wafers produced in the labs of A. Wieck and D. Reuter (Bochum, Paderborn), C. Kreisbeck and I have compared theoretical expectations with experimental data for the thermal energy and charge currents in multi-terminal nanorings (AIP Advances 2016, open access). Our findings highlight the influence of the device geometry on both charge and thermal energy transfer, and demonstrate the usefulness of the time-dependent wave-packet algorithm to find eigenstates over a whole range of temperatures.
Metadata analysis of 80,000 arxiv:physics/astro-ph articles reveals biased moderation
Have you ever thought of arXiv moderation in astro-ph being a problem? Did you experience a >5 month delay from submission of your pre-print to the arXiv to it being publicly visible? Did this happen without any explanation or reaction from the arXiv moderators, despite the same article being published after peer review in the Astrophysical Journal Letters?
Chances are high that your answer is no; to be precise, the odds are 81404/81440 = 99.9558 percent that this did not happen to you. Lucky you! Now let me tell you about the other 36/81440 = 0.0442043 percent. My computer-based analysis of the last 80,000 deposited arxiv:astro-ph articles shows interesting results about the moderation patterns in astrophysics. To repeat the analysis:
• get the arXiv metadata, which is available (good!) from the arxiv itself. I used the excellent metha tools from Martin Czygan to download all metadata from the astro-ph and quant-ph sections since 5/2014.
• parse the resulting 200 MB XML file, for instance with Mathematica. To get the delay from submission to arXiv publication, I took the time difference between the submission date stamp (oldest XMLElement[{, date}) and the arXiv identifier, which encodes the year and month of public visibility (see the sketch after this list).
• Example: the article arxiv:1604.00876 went public in April 2016, 5 months after submission to the arXiv (November 5, 2015) and after publication in the Astrophysical Journal Letters (there, the total processing time from submission to online publication, including peer review, was 1.5 months).
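Here is a minimal Python re-implementation of the delay computation (a sketch: the OAI namespace and the <id>/<created> field names follow my recollection of the arXiv metadata format and should be adapted to the actual layout of your dump; old-style identifiers are simply skipped):

import re
import xml.etree.ElementTree as ET

NS = "{http://arxiv.org/OAI/arXiv/}"  # assumed metadata namespace

def delays(xml_file):
    """Yield (arxiv_id, months from submission to public listing)."""
    for rec in ET.parse(xml_file).iter(NS + "arXiv"):
        aid = rec.findtext(NS + "id")           # e.g. "1604.00876"
        created = rec.findtext(NS + "created")  # e.g. "2015-11-05"
        m = re.match(r"(\d{2})(\d{2})\.", aid or "")
        if not (m and created):
            continue  # skip old-style identifiers or incomplete records
        # public year/month is encoded in the identifier (YYMM.nnnnn)
        pub = (2000 + int(m.group(1))) * 12 + int(m.group(2))
        sub = int(created[:4]) * 12 + int(created[5:7])
        yield aid, pub - sub

# articles delayed by more than 3 months:
# late = [(a, d) for a, d in delays("astro-ph.xml") if d > 3]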
The analysis shows different patterns of moderation for the two sections I considered, quant-ph and astro-ph. It reveals problematic moderation effects in the arXiv astro-ph section:
1. Completely suitable articles are blocked, mostly peer-reviewed and published, for instance in the Astrophysical Journal, Astrophysical Journal Letters, or Monthly Notices of the Royal Astronomical Society.
2. This might indicate a biased moderation toward specific persons and subjects. In contrast to scientific journals with their named editors, the arXiv moderation is opaque and anonymous. The metadata analysis shows that the physics:astro-ph and physics:quant-ph sections use very different moderation guidelines, with astro-ph having a strong bias to block valid contributions.
3. It makes the astro-ph arXiv less usable as a medium for rapid dissemination of cutting edge research via preprints.
4. This hurts careers and citation histories, and encourages plagiarism. New scientific findings are more easily plagiarized by other groups, since no arXiv time-stamped preprint establishes the precedence.
5. If we, the scientists, want a publicly funded arXiv we must ensure that it is operated according to scientific standards which serve the public. This excludes biased blocking of valid and publicly funded research.
6. Finally, the arXiv was not put in place to be a backup server for all journals, but rather to provide a space to share upcoming scientific publications without months of delay.
I will be happy to share comments I receive about similar cases. I am not talking about dubious articles or non-scientific theories, but about standard peer-reviewed contributions published in established physics journals, which should be on the astrophysical preprint arXiv.
Here follows the list of all articles which were delayed by more than 3 months on arxiv:physics/astro-ph (out of a total of 81,440 deposited articles) and, if known, where the peer-reviewed article got published. I cannot exclude other factors besides moderation for the delay, but I can definitely confirm incorrect moderation as the cause for the 2 cases I have experienced. Interestingly, the same analysis on arxiv:physics/quant-ph did not reveal such a moderation bias against peer-reviewed articles. This gives hope that the astrophysical section could recover and return to 100 percent flawless operation. Then the arXiv fulfils its own pledge on accountability and on good scientific practices (principles of the arXiv’s operation). The journals in which the delayed articles appeared include:
• The Astrophysical Journal
• Publications of the Astronomical Society of Japan
• Journal of Astrophysics and Astronomy
• EPJ Web of Conferences
• Astrophysics and Space Sciences
• The Astrophysical Journal
• Physical Review C
• Monthly Notices of the Royal Astronomical Society
• The Astrophysical Journal Letters
• Journal of Statistical Mechanics: Theory and Experiment
• The Astrophysical Journal Letters
Predicting comets: a matter of perspective
Contrast stretched NAVCAM image of the nucleus of comet 67P/Churyumov-Gerasimenko to highlight the “jets” of dust emitted from all over the surface. CC BY-SA IGO 3.0
In general, any cometary activity is difficult to predict and many comets are known for sudden changes in brightness, break-ups and simple disappearances. Fortunately, the Rosetta target comet 67P/Churyumov-Gerasimenko (67P/C-G) is much more amenable to theoretical predictions. The OSIRIS and NAVCAM images show light reflected from a highly structured dust coma within the space probe orbit (ca. 20-150 km).
Is it possible to predict the dust coma and tail of comets?
Starting in 2014 we have been working on a dust forecast for 67P/C-G, see the previous blog entries. We now had the chance to check how well our predictions hold by comparing the model outcome to an image sequence from the OSIRIS camera taken during one rotation period of 67P/C-G on April 12, 2015, published by Vincent et al in A&A 587, A14 (2016) (arxiv version, there Fig. 13).
Comparison of Rosetta observations by Vincent et al A&A 2016 (left panels) with the homogeneous model (right panels). Taken from Kramer&Noack (ApJL 2016) Credit for (a, c): ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA
Our results appeared in Kramer & Noack, Astrophysical Journal Letters, 823, L11 (preprint, images). We obtain a surprisingly high correlation coefficient (average 80%, max 90%) between theory and observation if we stick to the following minimal-assumption model:
1. dust is emitted from the entire sunlit nucleus, not only from localized active areas. We refer to this as the “homogeneous activity model”
2. dust is entering space with a finite velocity (on average) along the surface normal. This implies that close to the surface a rapid acceleration takes place.
3. photographed “jets” depend strongly on the observing geometry:
if multiple concave areas align along the line of sight, a high imaged intensity results, but it is not necessarily caused by a single main emission source. As an exemplary case, we analysed the brightest points in the Rosetta image taken on April 12, 2015, 12:12, and looked at all contributing factors along the line of sight (yellow line) from the camera to the comet. The observed jet actually results from multiple sources, with additional contributions from all sunlit surface areas.
What are the implications of the theoretical model?
If dust is emitted from all sunlit areas of 67P/C-G, this implies a more homogeneous surface erosion of the illuminated nucleus and leaves less room for compositional heterogeneities. And finally: it makes the dust coma much more predictable, while still allowing for additional (but unpredictable) spontaneous, 20-40 min outburst events. Interestingly, a re-analysis of the comet Halley flyby by Crifo et al (Earth, Moon, and Planets 90, 227-238 (2002)) also points to a more homogeneous emission pattern as compared to localized sources.
Weathering the dust around comet 67P/Churyumov–Gerasimenko
Bradford robotic telescope image of comet 67P/Churyumov–Gerasimenko (180s exposure time, 5:43 UTC, 30-10-2015). © 2015 University of Bradford
Comet 67P/Churyumov–Gerasimenko is past its perihelion and is currently visible in telescopes in the morning hours. The picture was taken from Tenerife by the Bradford robotic telescope, where I submitted the request. The tail extends hundreds of thousands of kilometers into space and consists of dust particles emitted from the cometary nucleus, which measures just a few kilometers. In a recent work just published in the Astrophysical Journal Letters (arxiv version), we have explored how dust which does not make it into space whirls around the cometary nucleus. The model assumes that dust particles are emitted from the porous mantle, hover over the cometary surface for some time (<6 h), and then fall back onto the surface, delayed by the drag of gas molecules moving away from the nucleus. As in the predictions for the cometary coma discussed previously, we stick to a minimal-assumption model with a homogeneous surface activity of gas and dust emission.
Dust trajectories reaching the Philae descent area computed from a homogeneous dust emission model. From Kramer/Noack “Prevailing dust-transport directions on comet 67P/Churyumov-Gerasimenko”, Astrophysical Journal Letters, 813, L33 (2015).
The movements of 40,000 dust particles are tracked and the average dust transport within a volumetric grid of 300 m sized boxes is computed (see the sketch below). Besides the gas-dust interaction, we also incorporate the rotation of the comet, which leads to a directional transport.
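A minimal numpy sketch of the binning step (the random arrays are placeholders for sampled trajectory data; "transport" here means the mean velocity vector per 300 m cell):

import numpy as np

# Placeholder samples standing in for the output of the trajectory integration.
rng = np.random.default_rng(1)
positions = rng.uniform(-3000.0, 3000.0, (40000, 3))   # metres
velocities = rng.normal(0.0, 2.0, (40000, 3))          # m/s

box = 300.0
idx = np.floor(positions / box).astype(int)            # cell index per sample
cells, inverse = np.unique(idx, axis=0, return_inverse=True)

flux = np.zeros((len(cells), 3))
np.add.at(flux, inverse, velocities)                   # sum velocities per cell
flux /= np.bincount(inverse)[:, None]                  # mean transport per cell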
The Rosetta mission dropped Philae over the small lobe of 67P/C-G, and Philae took a sequence of approach images which reveal structures resembling wind-tails behind boulders on the comet. This allowed Mottola et al (Science 349.6247 (2015): aab0232) to derive information about the direction of impinging particles, which hit the surface unless sheltered by a boulder. Our model predicts a dust transport in line with the observed directions in the descent region; it will be interesting to see how wind-tails at other locations match the prediction. We put an interactive 3d dust-stream model online to visualize the dust flux predicted from the homogeneous surface model.
Day and night at comet 67P/Churyumov–Gerasimenko
Dusting off cometary surfaces: collimated jets despite a homogeneous emission pattern.
Effective Gravitational potential of the comet (including the centrifugal contribution), the maximal value of the potential (red) is about 0.46 N/m, the minimal value (blue) 0.31 N/m computed with the methods described in this post. The rotation period is taken to be 12.4043 h. Image computed with the OpenCL cosim code. Image (C) Tobias Kramer (CC-BY SA 3.0 IGO).
Knowledge of GPGPU techniques is helpful for rapid model building and testing of scientific ideas. For example, the beautiful pictures taken by the ESA/Rosetta spacecraft of comet 67P/Churyumov–Gerasimenko reveal jets of dust particles emitted from the comet. Wouldn’t it be nice to have a fast method to simulate thousands of dust particles around the comet and to find out whether the peculiar shape of this space-potato already influences the dust trajectories through its gravitational potential? At the Zuse-Institut in Berlin we joined forces between the distributed algorithms and visual data analysis groups to test this idea. But first an accurate shape model of comet 67P/C-G is required. As published on his blog, Mattias Malmer has done amazing work to extract a shape model from the published navigation camera images.
1. Starting from the shape model by Mattias Malmer, we obtain a re-meshed model with fewer triangles on the surface (we use about 20,000 triangles). The key property of the new mesh is a homogeneous coverage of the cometary surface with almost equally sized triangles. We don’t want higher resolution and adaptive mesh sizes in areas with more complex features; rather, we are considering a homogeneous emission pattern without isolated activity regions, which is best modeled by mesh cells of equal area. Will this prescription nevertheless yield collimated dust jets? We’ll see…
2. To compute the gravitational potential of such a surface we follow this nice article by JT Conway. The calculation later on stays in the rotating frame anchored to the comet, thus in addition the centrifugal and Coriolis forces need to be included.
3. To accelerate the method, OpenCL comes to the rescue and lets one compute many trajectories in parallel (see the sketch after this list). What is required are physical conditions for the starting positions of the dust as it flies off the surface. We put one dust particle at the center of each triangle on the surface and set the initial velocity along the normal direction to typically 2 or 4 m/s. This ensures that most particles are able to escape and not fall back on the comet.
4. To visualize the resulting point clouds of dust particles we have programmed an OpenGL visualization tool. We compute the rotation and sunlight direction on the comet to cast shadows and add activity profiles to the comet surface to mask out dust originating from the dark side of the comet.
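A minimal single-particle version of step 3 in plain Python (the production code runs the same loop for every surface triangle in OpenCL; a point mass with GM ≈ 666 m³/s², the approximate value for 67P, stands in for the polyhedral potential of step 2):

import numpy as np

GM = 666.0  # m^3/s^2, approximate value for 67P
omega = np.array([0.0, 0.0, 2.0*np.pi/(12.4043*3600.0)])  # rotation vector

def acceleration(r, v):
    grav = -GM * r / np.linalg.norm(r)**3          # point-mass gravity
    centrifugal = -np.cross(omega, np.cross(omega, r))
    coriolis = -2.0 * np.cross(omega, v)           # rotating-frame terms
    return grav + centrifugal + coriolis

def step(r, v, dt):
    """One Heun (RK2) step for the state (r, v)."""
    a1 = acceleration(r, v)
    r1, v1 = r + dt*v, v + dt*a1
    return r + 0.5*dt*(v + v1), v + 0.5*dt*(a1 + acceleration(r1, v1))

# launch from 2 km along the local surface normal with 2 m/s, as in the text
r = np.array([2000.0, 0.0, 0.0])
v = 2.0 * r / np.linalg.norm(r)
for _ in range(20000):
    r, v = step(r, v, dt=1.0)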
This is what we get for May 3, 2015. The ESA/NAVCAM image is taken verbatim from the Rosetta blog.
Comparison of the homogeneous dust model (left panel) with ESA/NAVCAM Rosetta images (right panel). (C) Left panel: Tobias Kramer and Matthias Noack 2015. Right panel: (C) ESA/NAVCAM team CC BY-SA 3.0 IGO, link see text.
Read more about the physics and results in our arxiv article T. Kramer et al.: Homogeneous Dust Emission and Jet Structure near Active Cometary Nuclei: The Case of 67P/Churyumov-Gerasimenko (submitted for publication) and grab the code to compute your own dust trajectories with OpenCL at
The shape of the universe
The following post is contributed by Peter Kramer.
hyperbolic dodecahedron
Shown are two faces of a hyperbolic dodecahedron.
The red line from the family of shortest lines (geodesics) connects both faces. Adapted from CRM Proceedings and Lecture Notes (2004), vol 34, p. 113, by Peter Kramer.
The new Planck data on the cosmic microwave background (CMB) has come in. For cosmic topology, the data sets contain interesting information related to the size and shape of the universe. The curvature of the three-dimensional space leads to a classification into hyperbolic, flat, or spherical cases. Sometimes in popular literature, the three cases are said to imply an infinite (hyperbolic, flat) or finite (spherical) size of the universe. This statement is not correct. Topology supports a much wider zoo of possible universes. For instance, there are finite hyperbolic spaces, as depicted in the figure (taken from Group actions on compact hyperbolic manifolds and closed geodesics, arxiv version). The figure also shows a resulting geodesic, the path of light through such a hyperbolic finite-sized universe. The start and end points must be identified and lead to a smooth connection.
Recent observational data seem to suggest a spherical space. Still, this does not resolve the issue of the size of the universe.
Instead of a fully filled three-sphere, smaller parts of the sphere can already be closed topologically and thus lead to a smaller-sized universe. A systematic exploration of such smaller but still spherical universes is given in my recent article
Topology of Platonic Spherical Manifolds: From Homotopy to Harmonic Analysis.
In physics, it is important to give specific predictions for observations of the topology, for instance by predicting the ratio of the different angular modes of the cosmic microwave background. It is shown that this is indeed the case: for instance, in a cubic (still spherical!) universe, the squared multipole moments of 4th and 6th order are tied together in the proportion 7 : 4, see Table 5. On p. 35 of the Planck collaboration article the authors call for models yielding such predictions as possible explanations for the observed anisotropy and the ratio of high and low multipole moments.
When two electrons collide. Visualizing the Pauli blockade.
The upper panel shows two (non-interacting) electrons approaching with small relative momenta, the lower panel with larger relative momenta.
From time to time I get asked about the implications of the Pauli exclusion principle for quantum mechanical wave-packet simulations.
I start with the simplest antisymmetric case: a two-particle state given by the Slater determinant of two Gaussian wave packets with perpendicular momentum directions:
φa(x,y) = e^{−[(x−o)² + (y−o)²]/(2a²)} e^{i(−kx + ky)}  and  φb(x,y) = e^{−[(x+o)² + (y−o)²]/(2a²)} e^{i(kx + ky)}
This yields the two-electron wave function Ψ(x₁,y₁;x₂,y₂) = [φa(x₁,y₁) φb(x₂,y₂) − φb(x₁,y₁) φa(x₂,y₂)]/√2 (up to overall normalization).
The probability to find one of the two electrons at a specific point in space is given by integrating the absolute value squared of the wave function over one coordinate set.
The resulting single particle density (snapshots at specific values of the displacement o) is shown in the animation for two different values of the momentum k (we assume that both electrons are in the same spin state).
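Integrating out the second particle analytically reduces the 4D integral to two norms and one overlap integral, n(r) = [ |φa(r)|² Nb + |φb(r)|² Na − 2 Re(φa(r) φb*(r) S) ]/2 with N = ∫|φ|² and S = ∫ φb φa*, so the density is cheap to evaluate on a 2D grid. A minimal numpy sketch (parameters a, o, k chosen arbitrarily for display):

import numpy as np

a, o, k = 1.0, 2.0, 1.0
x = np.linspace(-8, 8, 400)
X, Y = np.meshgrid(x, x, indexing="ij")
dA = (x[1] - x[0])**2

phi_a = np.exp(-((X - o)**2 + (Y - o)**2)/(2*a**2)) * np.exp(1j*(-k*X + k*Y))
phi_b = np.exp(-((X + o)**2 + (Y - o)**2)/(2*a**2)) * np.exp(1j*( k*X + k*Y))

Na = (np.abs(phi_a)**2).sum() * dA           # norms of the (unnormalized) packets
Nb = (np.abs(phi_b)**2).sum() * dA
S = (phi_b * phi_a.conj()).sum() * dA        # overlap integral

# single-particle density; the last term is the Pauli exchange correction
n = 0.5*(np.abs(phi_a)**2 * Nb + np.abs(phi_b)**2 * Na
         - 2*np.real(phi_a * phi_b.conj() * S))

For large relative momentum k the overlap S tends to zero and n reduces to the sum of the two independent probabilities, exactly as described in the text.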
For small values of k the two electrons get close in phase space (that is in momentum and position). The animation shows how the density deviates from a simple addition of the probabilities of two independent electrons.
If the two electrons already differ by a large relative momentum, the distance in phase space is large even if they get close in position space. Then the resulting single particle density looks similar to the sum of two independent probabilities.
The probability to find the two electrons simultaneously at the same place is zero in both cases, but this is not directly visible by looking at the single particle density (which reflects the probability to find any of the electrons at a specific position).
For further reading, see this article [arxiv version].
The impact of scientific publications – some personal observations
I will resume posting about algorithm development for computational physics. To put these efforts in a more general context, I start with some observations about the current publication ranking model and explore alternatives and supplements in the next posts.
Solvay Congress 1970; many well-known nuclear physicists are present, including Werner Heisenberg.
Working in academic institutions involves being part of hiring committees as well as being assessed by colleagues to measure the impact of my own and others’ scientific contributions.
In the internet age it has become common practice to look at various performance indices, such as the h-index, number of “first author” and “senior author” articles. Often it is the responsibility of the applicant to submit this data in electronic spreadsheet format suitable for an easy ranking of all candidates. The indices are only one consideration for the final decision, albeit in my experience an important one due to their perceived unbiased and statistical nature. Funding of whole university departments and the careers of young scientists are tied to the performance indices.
I did reflect on the usefulness of impact factors while I collected them for various reports; here are some personal observations:
1. Looking at the (very likely rather incomplete) citation count of my father, I find it interesting that, for instance, a 49-year-old contribution by P. Kramer/M. Moshinsky on group-theoretical methods for few-body systems gains the most citations per year after almost 5 decades. This time scale is well beyond any short-term hiring or funding decisions based on performance indices. From colleagues I hear about similar cases.
2. A high h-index can be a sign of a narrow research field, since the h-index is best built up by sticking to the same specialized topic for a long time and this encourages serialised publications. I find it interesting that on the other hand important contributions have been made by people working outside the field to which they contributed. The discovery of three-dimensional quasicrystals discussed here provides a good example. The canonical condensed matter theory did not envision this paradigmatic change, rather the study of group theoretical methods in nuclear physics provided the seeds.
3. The full-text search provided by the search engines offers fascinating options to scan through previously forgotten chapters and books, but it also bypasses the systematic classification schemes previously developed and curated by colleagues in mathematics and theoretical physics. It is interesting to note that for instance the AMS short reviews are not done anonymously and most often are of excellent quality. The non-curated search on the other hand leads to a down-ranking of books and review articles, which contain a broader and deeper exposition of a scientific topic. Libraries with real books grouped by topics are deserted these days, and online services and expert reviews did in general not gain a larger audience or expert community to write reports. One exception might be the public discussion of possible scientific misconduct and retracted publications.
4. Another side effect: searching the internet for specific topics diminishes the opportunity to accidentally stumble upon an interesting article lacking these keywords, for instance by scanning through a paper volume of a journal while searching for a specific article. I recall that many faculty members went every Monday to the library and looked at all the incoming journals to stay up-to-date about the general developments in physics and chemistry. Today we get email alerts about citation counts or specific subfields, but no alert contains a suggestion of what other article might pique our intellectual curiosity – and looking at the rather stupid shopping recommendations generated by online warehouses I don’t expect this to happen anytime soon.
5. On a positive note: since all text sources are treated equally, no “high-impact journals” are preferred. In my experience as a referee for journals of all sorts of impact numbers, the interesting contributions are not necessarily published or submitted to highly ranked journals.
To sum up, the assessment of manuscripts, of contributions of colleagues, and of my own articles requires humans to read them and to process them carefully – all of this takes a lot of time and consideration. It can take decades before publications become alive and well cited. Citation counts of the last 10 years can be poor indicators of the long-term importance of a contribution. Counting statistics provide some gratification by showing immediate interest and are the (less personal) substitute for the old-fashioned postcards requesting reprints. People working in theoretical physics are often closely related by collaboration distance, which provides yet another (much more fun!) factor. You can check your Erdős number (mine is 4) or Einstein number (3, thanks to working with Marcos Moshinsky) at the AMS website.
How to improve the current situation and maintain a well-curated and relevant library of scientific contributions – in particular involving numerical results and methods? One possibility is to make a larger portion of the materials surrounding a publication available. In computational physics it is of interest to test and recalculate published results shown in journals. The nanoHUB platform is in my view a best-practice case for providing supplemental information on demand and for ensuring the long-term availability and usefulness of scientific results by keeping the computational tools running and updated. It is for me a pleasure and excellent experience to work with the team around nanoHUB to maintain our open quantum dynamics tool. Another way is to provide and test background materials in research blogs. I will try out different approaches with the next posts.
Better than Slater-determinants: center-of-mass free basis sets for few-electron quantum dots
Error analysis of eigenenergies of the standard configuration interaction (CI) method (right black lines). The left colored lines are obtained by explicitly handling all spurious states. The arrows point out the increasing error of the CI approach with increasing center-of-mass admixing.
Solving the interacting many-body Schrödinger equation is a hard problem. Even restricting the spatial domain to a two-dimensional plane does not lead to analytic solutions; the trouble-makers are the mutual particle-particle interactions. In the following we consider electrons in a quasi two-dimensional electron gas (2DEG), which are further confined either by a magnetic field or by a harmonic-oscillator external confinement potential. For two electrons, this problem is solvable for specific values of the Coulomb interaction due to a hidden symmetry in the Hamiltonian; see the review by A. Turbiner and our application to two interacting electrons in a magnetic field.
For three and more electrons (to my knowledge) no analytical solutions are known. One standard computational approach is the configuration interaction (CI) method: diagonalize the Hamiltonian in a variational trial space of Slater-determinantal states. Each Slater determinant consists of products of single-particle orbitals. Due to computer resource constraints, only a certain number of Slater determinants can be included in the basis set. One possibility is to include only trial states up to a certain excitation level of the non-interacting problem.
The usage of Slater determinants as the CI basis set introduces severe distortions in the eigenenergy spectrum due to the intrusion of spurious states, as we will discuss next. Spurious states have been extensively analyzed in the few-body problems arising in nuclear physics but have rarely been mentioned in solid-state physics, where they do arise in quantum-dot systems. The basic defect of the Slater-determinantal CI method is that it brings along center-of-mass excitations. During the diagonalization, the center-of-mass excitations mix in along with the Coulomb interaction and lead to an inflated basis size as well as a loss of precision for the eigenenergies of the excited states. Increasing the basis set does not uniformly reduce the error across the spectrum, since the enlarged CI basis set brings along states of high center-of-mass excitation. The cut-off energy then restricts the remaining basis size for the relative part.
The cleaner and leaner way is to separate the center-of-mass excitations from the relative-coordinate excitations, since the Coulomb interaction acts only along the relative coordinates. In fact, the center-of-mass part can be split off and solved analytically in many cases. The construction of the relative-coordinate basis states requires group-theoretical methods and is carried out for four electrons in Interacting electrons in a magnetic field in a center-of-mass free basis (arxiv:1410.4768). For three electrons, the importance of a spurious-state free basis set was emphasized by R. Laughlin and is a design principle behind the Laughlin wave function.
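To make the center-of-mass separation concrete, here is a minimal numerical sketch (my own illustration, not from the paper; it assumes ħ = m = ω = 1 and replaces the Coulomb term, which is singular in 1D, by a Gaussian repulsion) showing that for two interacting particles in a harmonic trap every low-lying eigenvalue of the full Hamiltonian is exactly a sum of one center-of-mass oscillator energy and one relative-coordinate energy:

# Two interacting particles in a 1D harmonic trap: check that the spectrum
# separates as E = E_cm + E_rel. Assumptions: hbar = m = omega = 1 and a
# Gaussian repulsion g*exp(-(x1-x2)^2) instead of the 1D-singular Coulomb term.
import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import eigsh

n, L, g = 64, 8.0, 1.0
x = np.linspace(-L/2, L/2, n)
dx = x[1] - x[0]

T = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / (-2 * dx**2)  # -(1/2) d^2/dx^2
h1 = T + diags(0.5 * x**2)                                      # single-particle H

X1, X2 = np.meshgrid(x, x, indexing='ij')
Vint = diags(g * np.exp(-((X1 - X2).ravel())**2))
H2 = kron(h1, identity(n)) + kron(identity(n), h1) + Vint
E2 = eigsh(H2, k=6, which='SA')[0]

# Center of mass: exact oscillator, E_cm = j + 1/2. Relative coordinate
# r = (x1 - x2)/sqrt(2): oscillator plus interaction g*exp(-2 r^2).
hrel = T + diags(0.5 * x**2 + g * np.exp(-2 * x**2))
Erel = eigsh(hrel, k=6, which='SA')[0]
Esum = np.sort([j + 0.5 + er for j in range(4) for er in Erel])[:6]

print(np.round(E2, 2))    # full two-particle diagonalization
print(np.round(Esum, 2))  # sums E_cm + E_rel, equal up to grid error

A center-of-mass free basis exploits exactly this structure: the trivial oscillator ladder is split off analytically, and the numerical effort is spent only on the relative part, where the interaction acts.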
Slow or fast transfer: bottleneck states in light-harvesting complexes
Exciton dynamics in LHCII.
High-performance OpenCL code for modeling energy transfer in spinach
Flashback to the 80ies: filling space with the first quasicrystals
This post provides a historical and conceptual perspective on the theoretical discovery of non-periodic 3D space-fillings by Peter Kramer, which were later found experimentally and are now called quasicrystals. See also these previous blog entries for more quasicrystal references, and more background material here.
The following post is written by Peter Kramer.
Star extension of the pentagon. Fig. 1 from Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells by Peter Kramer, Acta Cryst. (1982). A38, 257-264.
When sorting out old texts and figures of mine from 1981, published in Non-periodic central space filling with icosahedral symmetry using copies of seven elementary cells, Acta Cryst. (1982). A38, 257-264, I came across the figure of a regular pentagon of edge length L, which I denoted as p(L). In the left figure its red-colored edges are star-extended up to their intersections. Straight connection of these intersection points creates a larger blue pentagon. Its edges are scaled up by τ², with τ the golden section number, so the larger pentagon we call p(τ²L). This blue pentagon is composed of the old red one plus ten isosceles triangles with golden proportion of their edge lengths. Five of them have edges t1(L): (L, τL, τL), five have edges t2(L): (τL, τL, τ²L). We find from Fig. 1 that these golden triangles may be composed face-to-face into their τ-extended copies as t1(τL) = t1(L) + t2(L) and t2(τL) = t1(L) + 2 t2(L).
Moreover we realize from the figure that also the pentagon p(τ²L) can be composed from golden triangles as p(τ²L) = t1(τL) + 3 t2(τL) = 4 t1(L) + 7 t2(L).
This suggests that the golden triangles t1, t2 can serve as elementary cells of a triangle tiling to cover any range of the plane and provide the building blocks of a quasicrystal. Indeed we did prove this long-range property of the triangle tiling (see Planar patterns with fivefold symmetry as sections of periodic structures in 4-space).
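The two composition rules define a substitution with matrix ((1,1),(1,2)), and iterating them makes the aperiodic character quantitative. A small sketch of my own (not part of the 1982 paper) that counts the tiles under repeated τ-inflation:

# Tile counts of the golden triangles t1, t2 under repeated tau-inflation,
# following t1(tau L) = t1(L) + t2(L) and t2(tau L) = t1(L) + 2 t2(L).
import numpy as np

M = np.array([[1, 1],   # a t1 at scale tau L contains 1 t1 + 1 t2
              [1, 2]])  # a t2 at scale tau L contains 1 t1 + 2 t2

n = np.array([1, 0])    # start from a single t1
for step in range(9):
    n = M @ n
    print(step + 1, n, n[1] / n[0])

print(np.linalg.eigvalsh(M))  # eigenvalues 1/tau^2 and tau^2

The ratio of t2 to t1 tiles converges to the irrational golden section τ, which is incompatible with any periodic repetition of a finite unit cell, and the total tile number grows by the leading eigenvalue τ² per inflation step.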
An icosahedral tiling from star extension of the dodecahedron.
Star extension of the dodecahedron d(L) to the icosahedron i(τ²L) and further to d(τ³L) and i(τ⁵L), shown in Fig. 3 of the 1982 paper. The vertices of these polyhedra are marked by filled circles; extensions of edges are shown except for d(L).
In the same paper, I generalized the star extension from the 2D pentagon to the 3D dodecahedron d(L) of edge length L (see next figure) by the following prescription:
• star extend the edges of this dodecahedron to their intersections
• connect these intersections to form an icosahedron
The next star extension produces a larger dodecahedron d(τ³L), with edges scaled by τ³. In the composition of the larger dodecahedron I found four elementary polyhedral shapes, shown below. Even more amusingly, I also resurrected the paper models I constructed in 1981 to actually demonstrate the complete space filling!
These four polyhedra compose their τ³-scaled copies. As in the 2D case, arbitrary regions of 3D space can be covered by the four tiles.
Elementary cells
The four elementary cells shown in the 1982 paper, Fig. 4. The four shapes are named dodecahedron (d), skene (s), aetos (a) and tristomos (t). The paper models I built in 1981 are still around in 2014 and complete enough to fill 3D space without gaps. You can spot all shapes (d, s, a, t) in various scalings, and together they systematically and gaplessly fill the large dodecahedron shell at the back of the table.
The only feature missing for quasicrystals is aperiodic long-range order, which eventually leads to sharp diffraction patterns of 5- or 10-fold point symmetry, forbidden for old-style crystals. In my construction shown here I strictly preserved central icosahedral symmetry. Non-periodicity then followed because full icosahedral symmetry and periodicity in 3D are incompatible.
In 1983 we found a powerful alternative construction of icosahedral tilings, independent of the assumption of central symmetry: the projection method from 6D hyperspace (On periodic and non-periodic space fillings of E^m obtained by projection). This projection establishes the quasiperiodicity of the tilings, analyzed in line with Harald Bohr's 1925 work Zur Theorie der fast periodischen Funktionen (I-III), as a variant of aperiodicity (more background material here).
GPU-HEOM 2d spectra computed at nanohub
1. Log in at nanohub.org (it’s free!)
2. switch to the gpuheompop tool
3. click the Launch Tool button (Java required)
You can select this preset from the Example selector.
10. Voila: your first FMO spectrum appears.
GPU and cloud computing conferences in 2014
Two conferences related to GPU and cloud computing are currently open for registration. I will be attending and presenting at both; please email me if you want to get in touch at the meetings.
Oscillations in two-dimensional spectroscopy
Transition from electronic coherence to a vibrational mode.
Computational physics on GPUs: writing portable code
Runtime in seconds for our GPU-HEOM code on various hardware and software platforms.
I am preparing my presentation for the simGPU meeting next week in Freudenstadt, Germany, and performed some benchmarks.
In the previous post I described how to get an OpenCL program running on a smartphone with GPU. By now Christoph Kreisbeck and I are getting ready to release our first smartphone GPU app for exciton dynamics in photosynthetic complexes, more about that in a future entry.
Getting the same OpenCL kernel running on laptop GPUs, workstation GPUs and CPUs, and on smartphones/tablets is a bit tricky, due to different initialisation procedures and differences in the optimal block sizes for the thread grid. In addition, on a smartphone the local memory is even smaller than on a desktop GPU, and double-precision floating-point support is missing. The situation reminds me a bit of the “earlier days” of GPU programming in 2008.
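To give a flavour of the housekeeping involved, here is a minimal device probe (my own sketch using the pyopencl bindings, not the C++ host code of the actual application) that queries exactly the properties mentioned above before choosing workgroup sizes and floating-point precision:

# Enumerate OpenCL platforms/devices and query the capabilities that matter
# for portable kernels: local memory, double precision, workgroup limits.
# Requires the pyopencl package.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        has_fp64 = 'cl_khr_fp64' in dev.extensions
        print(platform.name, '/', dev.name)
        print('  type:          ', cl.device_type.to_string(dev.type))
        print('  local memory:  ', dev.local_mem_size // 1024, 'KiB')
        print('  double support:', has_fp64)
        print('  max workgroup: ', dev.max_work_group_size)
        # a portable host code would select its block size and floating-point
        # type here instead of hard-coding GPU-specific values

On a smartphone such a probe reports the missing cl_khr_fp64 extension and the small local memory, and the host code can then fall back to single precision and smaller thread blocks.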
Besides being a proof of concept, I see writing portable code as a sort of insurance with respect to further changes of hardware (however always with the goal to stick with the massively parallel programming paradigm). I am also amazed how fast smartphones are gaining computational power through GPUs!
Same comparison for smaller memory consumption. Note the drop in OpenCL performance for the NVIDIA K20c GPU.
Here are some considerations and observations:
1. Standard CUDA code can be ported to OpenCL within a reasonable time-frame. I found the following resources helpful:
AMD's porting remarks
Matt Scarpino's OpenCL blog
2. The comparison of OpenCL vs. CUDA performance for the same algorithm can reveal some surprises on NVIDIA GPUs. While on our C2050 GPU the OpenCL version runs a bit faster than the CUDA version for the same problem, on a K20c system the OpenCL program can take several times longer than the CUDA code for certain problem sizes (with no changes in the basic algorithm or workgroup sizes).
3. The comparison with a CPU version running on 8 cores of the Intel Xeon machine is possible and shows clearly that the GPU code is always faster, but it requires a certain minimal system size to reach its full performance.
4. I am looking forward to running the same code on the Intel Xeon Phi systems now available with OpenCL drivers, see also this blog.
[Update June 22, 2013: I updated the graphs to show the 8-core results using Intel's latest OpenCL SDK. This brings the CPU runtimes down by a factor of 2! Meanwhile I am eagerly awaiting the possibility to run the same code on the Xeon Phis…]
Computational physics on the smartphone GPU
Screenshot of the interacting many-body simulation on the Nexus 4 GPU.
[Update August 2013: Google has removed the OpenCL library with Android 4.3. You can find an interesting discussion here. Google seems to push for its own renderscript protocol. I will not work with renderscript, since my priorities are platform independence and sticking with widely adopted standards to avoid fragmentation of my code base.]
I recently got hold of a Nexus 4 smartphone, which features a GPU (Qualcomm Adreno 320) and conveniently ships with an already-installed OpenCL library. With minimal changes I got the previously discussed many-body program code related to the fractional quantum Hall effect up and running. No rooting of the phone is required to run the code example. Please use the following recipe at your own risk; I don't accept any liability. Here is what I did:
1. Download and unpack the Android SDK from Google for cross-compilation (my host computer runs Mac OS X).
2. Download and unpack the Android NDK from Google to build minimal C/C++ programs without Java (no real app).
3. Install the standalone toolchain from the Android NDK. I used the following command for my installation:
/home/tkramer/android-ndk-r8d/build/tools/ \
4. Put the OpenCL programs and source code in an extra directory, as described in my previous post
5. Change one line in the cl.hpp header: instead of including <GL/gl.h>, include <GLES/gl.h>. Note: I am using the "old" cl.hpp bindings 1.1; further changes might be required for the newer bindings, see for instance this helpful blog.
6. Transfer the OpenCL library from the phone to a subdirectory lib/ inside your source directory. To do so, append the path to your SDK tools and use the adb command:
export PATH=/home/tkramer/adt-bundle-mac-x86_64-20130219/sdk/platform-tools:$PATH
adb pull /system/lib/
7. Cross-compile your program. I used the following script; please feel free to provide shorter versions. Adjust the include and library directories for your installation.
rm -f plasma_disk_gpu
# cross-compile with the NDK standalone toolchain; the OpenCL library pulled
# from the phone in step 6 sits in lib/ and is linked after the source file,
# which keeps the GNU linker's left-to-right symbol resolution happy
/home/tkramer/android-ndk-standalone/bin/arm-linux-androideabi-g++ -v -g \
-I. \
-I/home/tkramer/android-ndk-standalone/include/c++/4.6 \
-I/home/tkramer/android-ndk-r8d/platforms/android-5/arch-arm/usr/include \
-Llib \
-march=armv7-a -mfloat-abi=softfp -mfpu=neon \
-fpic -fsigned-char -fdata-sections -funwind-tables -fstack-protector \
-ffunction-sections -fdiagnostics-show-option -fPIC \
-fno-strict-aliasing -fno-omit-frame-pointer -fno-rtti \
-o plasma_disk_gpu plasma_disk.cpp \
-lOpenCL
8. Copy the executable to the data directory of your phone to be able to run it. This can be done without rooting the phone with the nice SSHDroid app, which by default transfers to /data. Don't forget to copy the kernel .cl files:
scp -P 2222 root@192.168.0.NNN:
scp -P 2222 plasma_disk_gpu root@192.168.0.NNN:
9. ssh into your phone and run the GPU program:
ssh -p 2222 root@192.168.0.NNN
./plasma_disk_gpu 64 16
10. Check the resulting data files. You can copy them, for example, to the Download path of the storage and use gnuplot (droidplot app) to plot them.
A short note about runtimes: on the Nexus 4 device the program runs for about 12 seconds, while on a MacBook Pro with an NVIDIA GT650M it completes in 2 seconds (in the example above the equations of motion for 16*64=1024 interacting particles are integrated). For larger particle numbers the phone often locks up.
An alternative way to transfer files to the device is to connect via USB cable and to install the Android Terminal Emulator app. Next, create a working directory on the device:
cd /data/data/jackpal.androidterm
mkdir gpu
chmod 777 gpu
On the host computer use adb to transfer the compiled program and the .cl kernel, and start a shell to run the kernel:
adb push /data/data/jackpal.androidterm/gpu/
adb push plasma_disk_gpu /data/data/jackpal.androidterm/gpu/
You can either run the program within the terminal emulator or use the adb shell:
adb shell
cd /data/data/jackpal.androidterm/gpu/
./plasma_disk_gpu 64 16
Let’s see in how many years today's desktop GPUs can be found in smartphones, and which computational physics codes can be run!
The GPU-HEOM tool supports:
• calculating population dynamics
• tracking coherences between two eigenstates
• obtaining absorption spectra
• two-dimensional echo spectra (including excited state absorption)
Further references can be found in the supporting documentation.
Wendling spectral density for FMO complex
2d spectra are smart objects
FMO spectrum calculated with GPU-HEOM
Computational physics & GPU programming: interacting many-body simulation with OpenCL
Trajectories in a two-dimensional interacting plasma simulation, reproducing the density and pair-distribution function of a Laughlin state relevant for the quantum Hall effect. Figure taken from Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system.
In the second example of my series on GPU programming for scientists, I discuss a short OpenCL program which you can compile and run on the CPU and on GPUs of various vendors. This gives me the opportunity to perform some cross-platform benchmarks for a classical plasma simulation. You can expect dramatic (several-hundred-fold) speed-ups on GPUs for this type of system. This is one of the reasons why molecular dynamics codes can gain quite a lot by incorporating the massively parallel programming paradigm in their algorithmic foundations.
The Open Computing Language (OpenCL) is relatively similar to its CUDA counterpart; in practice the setup of an OpenCL kernel requires some housekeeping work, which might make the code look a bit more involved. I have based my interacting-electrons calculation of transport in the Hall effect on an OpenCL code. Another example is An OpenCL implementation for the solution of the time-dependent Schrödinger equation on GPUs and CPUs (arxiv version) by C. Ó Broin and L.A.A. Nikolopoulos.
Now to the coding of a two-dimensional plasma simulation, which is inspired by Laughlin's mapping of a many-body wave function to an interacting classical ersatz dynamics (for some context see my short review Interacting electrons in a magnetic field: mapping quantum mechanics to a classical ersatz-system on the arxiv).
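The production code integrates the equations of motion in an OpenCL kernel; as a plain illustration of the underlying ersatz dynamics, here is a minimal NumPy sketch of my own (with assumed, illustrative parameters: overdamped relaxation steps, logarithmic plasma repulsion, harmonic confinement of strength k):

# 2D one-component plasma: N particles with logarithmic repulsion
# (force ~ (r_i - r_j)/|r_i - r_j|^2) in a harmonic confinement,
# relaxed by overdamped steps. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt, k = 256, 3000, 0.02, 1.0
pos = rng.normal(scale=5.0, size=(N, 2))

for _ in range(steps):
    diff = pos[:, None, :] - pos[None, :, :]    # r_i - r_j for all pairs
    r2 = (diff**2).sum(-1) + np.eye(N)          # +1 on the diagonal: no self-force
    force = (diff / r2[..., None]).sum(axis=1)  # logarithmic Coulomb repulsion
    force -= k * pos                            # harmonic confinement
    pos += dt * force                           # overdamped relaxation step

r = np.hypot(pos[:, 0], pos[:, 1])
hist, edges = np.histogram(r, bins=15, range=(0, 20))
area = np.pi * (edges[1:]**2 - edges[:-1]**2)
print(np.round(hist / area, 2))  # roughly constant density ~ k/pi inside the disk

The O(N²) pairwise force sum in the inner loop is exactly the part that maps naturally onto the GPU, with one work-item accumulating the force on one particle.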
Continue reading “Computational physics & GPU programming: interacting many-body simulation with OpenCL”
Computational physics & GPU programming: Solving the time-dependent Schrödinger equation
I start my series on the physics of GPU programming with a relatively simple example, which makes use of a mix of library calls and well-documented GPU kernels. The run-time of the split-step algorithm described here is about 280 seconds for the CPU version (Intel(R) Xeon(R) CPU E5420 @ 2.50GHz) vs. 10 seconds for the GPU version (NVIDIA(R) Tesla C1060 GPU), resulting in a 28-fold speed-up! On a C2070 the run time is less than 5 seconds, yielding an 80-fold speed-up.
Autocorrelation function C(t) of a Gaussian wavepacket in a uniform force field. I compare the GPU and CPU results using the wavepacket code.
The description of coherent electron transport in quasi two-dimensional electron gases requires solving the Schrödinger equation in the presence of a potential landscape. As discussed in my post Time to find eigenvalues without diagonalization, our approach using wavepackets allows one to obtain the scattering matrix over a wide range of energies from a single wavepacket run, without the need to diagonalize a matrix. In the following I discuss the basic example of propagating a wavepacket and obtaining the autocorrelation function, which in turn determines the spectrum. I programmed the GPU code in 2008 as a first test to evaluate the potential of GPGPU programming for my research. At that time double-precision floating-point support was lacking and the fast Fourier transform (FFT) implementations were little developed. Starting with CUDA 3.0, the program runs fine in double precision, and my group used the algorithm for calculating electron flow through nanodevices. The CPU version was used for our articles in Physica Scripta (Wave packet approach to transport in mesoscopic systems) and Physical Review B (Phase shifts and phase π-jumps in four-terminal waveguide Aharonov-Bohm interferometers), among others.
Here, I consider a very simple example: the propagation of a Gaussian wavepacket in a uniform potential V(x,y) = −Fx, for which the autocorrelation function of the initial state

⟨x,y|ψ(t=0)⟩ = 1/(a√π) exp(−(x²+y²)/(2a²))

is known in analytic form:

⟨ψ(t=0)|ψ(t)⟩ = 2a²m/(2a²m + iℏt) exp(−a²F²t²/(4ℏ²) − iF²t³/(24ℏm)).
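To sketch the split-step (split-operator) algorithm behind these benchmarks, here is a minimal NumPy version of my own (with illustrative grid parameters and ℏ = m = a = F = 1) that propagates the wavepacket and compares the numerical autocorrelation with the analytic expression above:

# Split-step propagation of a 2D Gaussian wavepacket in V(x,y) = -F x,
# comparing C(t) = <psi(0)|psi(t)> with the analytic result quoted above.
# Assumptions: hbar = m = a = F = 1; illustrative grid and time step.
import numpy as np

n, L, dt, nsteps = 256, 40.0, 0.05, 40
x = (np.arange(n) - n//2) * (L/n)
X, Y = np.meshgrid(x, x, indexing='ij')
kx = 2*np.pi*np.fft.fftfreq(n, L/n)
KX, KY = np.meshgrid(kx, kx, indexing='ij')

psi0 = np.exp(-(X**2 + Y**2)/2) / np.sqrt(np.pi)
expV = np.exp(-0.5j*dt*(-X))              # half-step in the potential V = -x
expT = np.exp(-0.5j*dt*(KX**2 + KY**2))   # full kinetic step, T = k^2/2
dA = (L/n)**2

psi = psi0.copy()
for step in range(1, nsteps + 1):
    psi = expV * np.fft.ifft2(expT * np.fft.fft2(expV * psi))
    t = step * dt
    C_num = (psi0.conj() * psi).sum() * dA
    C_ana = 2/(2 + 1j*t) * np.exp(-t**2/4 - 1j*t**3/24)
    if step % 10 == 0:
        print(f"t={t:4.1f}  |C_num - C_ana| = {abs(C_num - C_ana):.1e}")

On the GPU the two FFTs per time step are library calls (e.g. CUFFT), while the pointwise multiplications with expV and expT are trivially parallel kernels; this is the mix of library calls and hand-written kernels mentioned above.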
Continue reading “Computational physics & GPU programming: Solving the time-dependent Schrödinger equation”
The physics of GPU programming
Me pointing at the GPU Resonance cluster at SEAS Harvard with 32x448=14336 processing cores. Just imagine how tightly integrated this setup is compared to 3584 quad-core computers. Picture courtesy of Academic Computing, SEAS Harvard.
From discussions I learn that while many physicists have heard of Graphics Processing Units as fast computers, resistance to using them is widespread. One of the reasons is that physics has been relying on computers for a long time, and tons of old, well-trusted codes are lying around which are not easily ported to the GPU. Interestingly, the adoption of GPUs is happening much faster in biology, medical imaging, and engineering.
I view GPU computing as a great opportunity to investigate new physics, and my feeling is that today's methods optimized for serial processors may need to be replaced by a different set of standard methods which scale better on massively parallel processors. In 2008 I dived into GPU programming for a couple of reasons:
1. As a “model-builder” the GPU allows me to reconsider previous limitations and simplifications of models and use the GPU power to solve the extended models.
2. The turn-around time is incredibly fast. Compared to queues in conventional clusters, where I wait for days or weeks, I get back results with 10,000 CPU hours of compute time the very same day. This in turn further facilitates the model-building process.
3. Some people complain about the strict synchronization requirements when running GPU codes. In my view this is an advantage, since essentially no messaging overhead exists.
4. If you want to develop high-performance algorithms, it is not good enough to convert library calls to GPU library calls. You might get speed-ups of about 2-4x. However, if you invest the time and develop your own know-how, you can expect much higher speed-ups of around 100 times or more, as seen in the applications I discussed in this blog before.
This summer I will lecture about GPU programming at several places, and thus I plan to write a series of GPU-related posts. I do have a complementary background in mathematical physics and special functions, which I find very useful in relation to GPU programming, since new physical models require a stringent mathematical foundation and numerical studies.
Peak oscillations in the FMO complex calculated using GPU-HEOM
The Nobel Prize 2011 in Chemistry: press releases, false balance, and lack of research in scientific writing
To get this clear from the beginning: with this posting I am not questioning the great achievement of Prof. Dan Shechtman, who discovered what is now known as quasicrystal in the lab. Shechtman clearly deserves the prize for such an important experiment demonstrating that five-fold symmetry exists in real materials.
My concern is the poor quality of research and reporting on the subject of quasicrystals, starting with the press release of the Swedish Academy of Science, and the lessons to be learned about trusting these press releases and the reporting in scientific magazines. To provide some background: with the announcement of the Nobel prize, a press release is put online by the Swedish academy which not only announces the prize winner but also contains two PDFs with background information: one for the "popular press" and another one for people with a more "scientific background". Even more dangerously, the Swedish Academy has started a multimedia endeavor of pushing its views around the world in youtube channels and numerous multimedia interviews with its own members (what about asking an external expert for an interview?).
Before the internet age, journalists got the names of the prize winners but did not have immediate access to a "ready to print" explanation of the subject at hand. I remember that local journalists would call the universities and ask a professor who was familiar with the topic for advice, or at least get the phone number of somebody familiar with it. Not any more. This year showed that the background information prepared in advance by the committee is taken over by the media outlets basically unchanged. So far it looks like business as usual. But what if the story as told by the press release is not correct? Does anybody still have the time and resources for some basic fact checking, for example by calling people familiar with the topic, or by consulting the archives of their newspaper/magazine to dig out what was written when the discovery was made many years ago? Should we rely on the professor who writes the press releases and trust that this person adheres to the scientific and ethical standards of writing?
For me, the unfiltered and unchecked usage of press releases by the media and even by scientific magazines shows a decay in the quality of scientific reporting. It also generates a uniform and self-referencing universe, which enters as "sources" in online encyclopedias and in the end becomes a "self-generated" truth. However, it is not that difficult to break this circle, for example by:
1. digging out review articles on the topic and looking up encyclopedia entries on quasicrystals, see for example: Pentagonal and Icosahedral Order in Rapidly Cooled Metals by David R. Nelson and Bertrand I. Halperin, Science 19 July 1985:233-238, where the authors write: "Independent of these experimental developments, mathematicians and some physicists had been exploring the consequences of the discovery by Penrose in 1974 of some remarkable, aperiodic, two-dimensional tilings with fivefold symmetry (7). Several authors suggested that these unusual tesselations of space might have some relevance to real materials (8, 9). MacKay (8) optically Fourier-transformed a two-dimensional Penrose pattern and found a tenfold symmetric diffraction pattern not unlike that shown for Al-Mn in Fig. 2. Three-dimensional generalizations of the Penrose patterns, based on the icosahedron, have been proposed (8-10). The generalization that appears to be most closely related to the experiments on Al-Mn was discovered by Kramer and Neri (11) and, independently, by Levine and Steinhardt (12)."
2. identifying from step 1 experts and asking for their opinion
3. checking the newspaper and magazine archives. Maybe there exists already a well researched article?
4. correcting mistakes. After all, mistakes do happen, also in "press releases" by the Nobel committee, but there is always the option to send out a correction or to amend the published materials. See for example the letter in Science by David R. Nelson,
Icosahedral Crystals in Perspective, Science 13 July 1990:111 again on the history of quasicrystals:
“[…] The threedimensional generalization of the Penrose tiling most closely related to the experiments was discovered by Peter Kramer and R. Neri (3) independently of Steinhardt and Levine (4). The paper by Kramer and Neri was submitted for publication almost a year before the paper of Shechtman et al. These are not obscure references: […]
Since I am working in theoretical physics, I find it important to point out that, in contrast to the story invented by the Nobel committee, the theoretical structure of quasicrystals was actually published and available in the relevant journal of crystallography at the time the experimental paper was published. This sequence of events is well documented, as shown above and in other review articles and books.
I am just amazed how the press release of the Nobel committee creates an alternate universe with a false history of the theoretical and experimental publication records. It gives false credit for the first theoretical work on three-dimensional quasicrystals and, at least in my view, does not adhere to the scientific and ethical standards of scientific writing.
Prof. Sven Lidin, the author of the two press releases of the Swedish Academy, was contacted as early as October 7 about his inaccurate and unbalanced account of the history of quasicrystals. In my view, a huge responsibility rests on the originator of the "story" which was put into the wild by Prof. Lidin, and I believe he and the committee members are aware of their power, since they actively use all available electronic media channels to push their complete "press package" out. Until today, no corrections or updates have been distributed. Rather, you can watch on youtube the (false) story getting repeated over and over again. In my view this example shows science reporting in its worst incarnation and undermines the credibility and integrity of science.
Quasicrystals: anticipating the unexpected
The following guest entry is contributed by Peter Kramer
Dan Shechtman received the Nobel prize in Chemistry 2011 for the experimental discovery of quasicrystals. Congratulations! The press release stresses the unexpected nature of the discovery and the struggles of Dan Shechtman to convince his fellow experimentalists. To this end I want to contribute a personal perspective:
From the viewpoint of theoretical physics, the existence of icosahedral quasicrystals as later discovered by Shechtman was not quite so unexpected. Beginning in 1981 with Acta Cryst A 38 (1982), pp. 257-264, and continuing with Roberto Neri in Acta Cryst A 40 (1984), pp. 580-587, we worked out and published the building plan for icosahedral quasicrystals. Looking back, it is a strange and lucky coincidence that, unknown to me, during the same time Dan Shechtman and coworkers discovered icosahedral quasicrystals in their seminal experiments and brought the theoretical concept of three-dimensional non-periodic space-fillings to life.
More about the fascinating history of quasicrystals can be found in a short review: gateways towards quasicrystals and on my homepage.
Time to find eigenvalues without diagonalization
Aharonov-Bohm ring conductance oscillations
Open Access
Spatial non-adiabatic passage using geometric phases
EPJ Quantum Technology 2017, 4:3
Received: 8 November 2016
Accepted: 15 March 2017
Published: 28 March 2017
Quantum technologies based on adiabatic techniques can be highly effective, but often at the cost of being very slow. Here we introduce a set of experimentally realistic, non-adiabatic protocols for spatial state preparation, which yield the same fidelity as their adiabatic counterparts, but on fast timescales. In particular, we consider a charged particle in a system of three tunnel-coupled quantum wells, where the presence of a magnetic field can induce a geometric phase during the tunnelling processes. We show that this leads to the appearance of complex tunnelling amplitudes and allows for the implementation of spatial non-adiabatic passage. We demonstrate the ability of such a system to transport a particle between two different wells and to generate a delocalised superposition between the three traps with high fidelity in short times.
Keywords: shortcuts to adiabaticity; geometric phases; complex tunnelling
1 Introduction
Adiabatic techniques are widely used for the manipulation of quantum states. They typically yield high fidelities and possess a high degree of robustness. One paradigmatic example is stimulated Raman adiabatic passage (STIRAP) in three-level atomic systems [1–3]. STIRAP-like techniques have been successfully applied to a wide range of problems, and in particular, to the control of the centre-of-mass states of atoms in microtraps. This spatial analogue of STIRAP is called spatial adiabatic passage (SAP) and it relies on coupling different spatial eigenstates via a controllable tunnelling interaction [4]. It has been examined for cold atoms in optical traps [5–12] and for electrons trapped in quantum dots [13, 14]. The ability to control the spatial degrees of freedom of trapped particles is an important goal for using these systems in future quantum technologies such as atomtronics [9, 15, 16] and quantum information processing [17]. SAP has also been suggested for a variety of tasks such as interferometry [11], creating angular momentum [12], and velocity filtering [18]. It is also applicable to the classical optics of coupled waveguides [19, 20].
However, the high fidelity and robustness of adiabatic techniques come at the expense of requiring long operation times. This is problematic as the system will therefore also have a long time to interact with an environment, leading to losses or decoherence. To avoid this problem, we will show how one can speed up processes that control the centre-of-mass state of quantum particles, and introduce a new class of techniques which we refer to as spatial non-adiabatic passage. The underlying foundation for these are shortcuts to adiabaticity (STA) techniques, which have been developed to achieve high fidelities in much shorter total times; for a review see [21, 22]. Moreover, shortcuts are known to provide the freedom to optimise against undesirable effects such as noise, systematic errors or transitions to unwanted levels [22–31].
Implementing the STA techniques for spatial control requires complex tunnelling amplitudes. However, tunnelling frequencies are typically real. To solve this, we show that the application of a magnetic field to a triple well system containing a single charged particle (which could correspond to a quantum dot system [32–37]) can achieve complex tunnelling frequencies through the addition of a geometric phase. This then allows one to implement a counter-diabatic driving term [21, 22, 38–40] or, more generally, to design dynamics using Lewis-Riesenfeld invariants [41].
The paper is structured as follows. In the next section, we present the model we examine, namely a charged particle in a triple well ring system with a magnetic field in the centre. In Section 3, we introduce the spatial adiabatic passage technique in a three-level system and show that making one of the couplings imaginary allows the implementation of transitionless quantum driving. We then show, in Section 3.3, how to create inverse-engineering protocols in this system using Lewis-Riesenfeld invariants. Results for two such protocols, namely transport and generation of a three-trap superposition, are given in Section 4. Section 5 presents a more realistic one-dimensional continuum model for the system, where the same schemes are implemented. Finally, in Section 6, we review and summarise the results.
2 System model
We consider a charged particle trapped in a system of three localised potentials, between which the tunnel coupling can be changed in a time-dependent manner. In order to have coupling between all traps, they are assumed to be arranged along a ring, and a magnetic field exists perpendicular to the plane containing the traps, see Figure 1. The particle will initially be located in one of the traps and we will show how to design spatial non-adiabatic passage protocols where a specific final state can be reached within a finite time and with high fidelity. Such a model could, for example, correspond to an electron trapped in an arrangement of quantum dots, where gate electrodes can be used to change the tunnelling between different traps [42]. Another option would be to use ion trapping systems [43], where ring configurations have been recently demonstrated [44–46]. In these systems, tunnelling of an ion has already been observed (and controlled by manipulating the radial confinement), as well as the Aharonov-Bohm phase [47] acquired due to the presence of an external magnetic field [44].
Figure 1
Diagram of the system consisting of three coupled quantum wells and a localised magnetic field in the centre. The basis states and the couplings strengths used in the three-level approximation are indicated. The coordinate system for the continuous model in Section 5 is also shown. The distance between two traps along the ring is defined as l, so that the total circumference of the ring is 3l.
Let us start by considering the single-particle Schrödinger equation
$$ i\hbar\frac{\partial\psi}{\partial t} = \frac{1}{2m} (- i \hbar \nabla- q \vec{A} )^{2} \psi+ V \psi, $$
where m and q are the mass and charge of the particle, respectively, and V corresponds to the potential describing the trapping geometry. We assume that the vector potential is originating from an idealised point-like and infinitely long solenoid at the origin (creating a magnetic flux \(\Phi_{B}\)) and it is therefore given by \(\vec {A} = \frac{\Phi_{B}}{2 \pi r} \hat{e}_{\varphi}\) (for \(\vec{r} \neq0\)). Here r, φ, z are cylindrical coordinates and \(\hat{e}_{\varphi}\) is a unit vector in the φ direction.
At low energies such a system can be approximated by a three-level (3L) model, where each basis state, \(|j\rangle\), corresponds to the localised ground state in one of the trapping potentials (see Figure 1). These states are isolated when a high barrier between them exists, but when the barrier is lowered the tunnelling amplitude \(\Omega_{jk}\) between states \(|j\rangle\) and \(|k\rangle\) becomes significant.
The presence of the magnetic field leads to the particle acquiring an Aharonov-Bohm phase [47] whenever it moves (tunnels) between two different positions (traps). This phase is given by \(\phi _{j,k} = \frac{q}{\hbar}\int_{\vec{r}_{j}}^{\vec{r}_{k}}\vec{A}(\vec {r})\cdot d\vec{r}\), where \(\vec{r}_{j}\) is the position of the jth trap, and for consistency, we always choose the direction of the path of integration to be anti-clockwise around the pole of the vector potential (at \(\vec{r} = 0\)). The effect of this phase on the tunnelling amplitudes is given through the Peierls phase factors [48–50], \(\exp (i \phi_{j,k} )\), and the Hamiltonian for the 3L system can be written as
$$ H = -\frac{\hbar}{2} \begin{pmatrix} 0 & \Omega_{12}e^{i\phi_{1,2}} & \Omega_{31}e^{-i\phi_{3,1}} \\ \Omega_{12}e^{-i\phi_{1,2}} & 0 & \Omega_{23}e^{i\phi_{2,3}} \\ \Omega_{31}e^{i\phi_{3,1}} & \Omega_{23}e^{-i\phi_{2,3}} & 0 \end{pmatrix} . $$
Here the \(\Omega_{jk}\) are the coupling coefficients in the absence of any vector potential. The total phase around a closed path containing the three traps is then given by
$$ \Phi\equiv\phi_{1,2}+\phi_{2,3}+\phi_{3,1} = \frac{q}{\hbar} \oint\vec {A}(\vec{r})\cdot d\vec{l} = \frac{q}{\hbar} \Phi_{B}, $$
and is non-zero due to the pole of the vector potential A⃗ at the origin.
To simplify the Hamiltonian (2) one can use the following unitary transformation, which only employs local phases,
$$ U= \begin{pmatrix} 1 & 0 & 0\\ 0 & e^{-i\phi_{1,2}} & 0\\ 0 & 0 & e^{-i (\phi_{1,2}+\phi_{2,3} )} \end{pmatrix} , $$
and transforms the Hamiltonian as
$$ H \rightarrow U^{\dagger} H U= - \frac{\hbar}{2} \begin{pmatrix} 0 & \Omega_{12} & \Omega_{31}e^{- i \Phi}\\ \Omega_{12} & 0 & \Omega_{23}\\ \Omega_{31}e^{i \Phi} & \Omega_{23} & 0 \end{pmatrix} , $$
so that two of the tunnelling amplitudes become real-valued.
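As a quick numerical illustration of this gauge freedom (my own check, not part of the paper): the spectrum of the Hamiltonian in Eq. (2) depends only on the total phase Φ and not on how the phases are distributed over the three links.

# Check that the spectrum of the 3L Hamiltonian with Peierls phases depends
# only on the total phase Phi = phi12 + phi23 + phi31. Couplings are
# illustrative; hbar = 1.
import numpy as np

def H(omega, phases):
    o12, o23, o31 = omega
    p12, p23, p31 = phases
    h = np.array([[0,                   o12*np.exp(1j*p12),  o31*np.exp(-1j*p31)],
                  [o12*np.exp(-1j*p12), 0,                   o23*np.exp(1j*p23)],
                  [o31*np.exp(1j*p31),  o23*np.exp(-1j*p23), 0]])
    return -0.5 * h

omega = (0.3, 0.7, 0.5)
# two different distributions of the same total phase Phi = pi/2
e1 = np.linalg.eigvalsh(H(omega, (np.pi/2, 0.0, 0.0)))
e2 = np.linalg.eigvalsh(H(omega, (np.pi/6, np.pi/6, np.pi/6)))
print(np.allclose(e1, e2))  # True: only Phi matters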
A case of particular interest is when \(\Phi=\pi/2\), i.e., when the magnetic flux is \(\Phi_{B} = \pi\hbar/ 2 q\). In this case the Hamiltonian becomes
$$ H = -\frac{\hbar}{2} (\Omega_{12} K_{1}+ \Omega_{23} K_{2}+\Omega_{31} K_{3} ) , $$
where each \(K_{j}\) is a spin 1 angular momentum operator defined as
$$ K_{1}= \begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix} ,\qquad K_{2}= \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix} ,\qquad K_{3}= \begin{pmatrix} 0 & 0 & -i\\ 0 & 0 & 0\\ i & 0 & 0 \end{pmatrix} , $$
satisfying \([K_{j}, K_{k}] = i \epsilon_{jkl} K_{l}\) and \(\epsilon_{jkl}\) is the Levi-Civita symbol [51]. This means that the tunnel coupling between \(|3\rangle\) and \(|1\rangle\) becomes purely imaginary. We will show in the next section that this allows for the implementation of spatial non-adiabatic passage processes by either applying a transitionless quantum driving protocol or by using Lewis-Riesenfeld invariants.
3 Processes in the three-level approximation
3.1 Adiabatic methods
A series of spatial adiabatic passage (SAP) techniques has been developed in recent years, which allow one to manipulate and control the external degrees of freedom of quantum particles in localised potentials with high fidelity [4]. The standard SAP protocol for the transport of a single particle in a triple well system [5, 13] is the spatial analogue of the quantum-optical STIRAP technique [1–3]. It involves three linearly arranged, degenerate trapping states, \(|j\rangle\) with \(j = 1, 2\mbox{ and }3\), that can be coupled through tunnelling by either changing the distance between the traps or lowering the potential barrier between them. The system in the 3L approximation is described by the Hamiltonian
$$ H_{0} = -\frac{\hbar}{2} (\Omega_{12} K_{1}+\Omega_{23} K_{2} ), $$
which has a zero-energy eigenstate of the form
$$ |\lambda_{0}\rangle = \cos\theta|1\rangle - \sin \theta|3\rangle\quad \text{with } \tan\theta= \Omega_{12}/ \Omega_{23} . $$
This state is often called the dark state and SAP consists of adiabatically following \(|\lambda _{0}\rangle\) from \(|1\rangle\) (at \(t=0\)) to \(-|3\rangle\) (at a final time \(t=T\)), effectively transporting the particle between the outer traps one and three. This corresponds to changing θ from 0 (\(\Omega_{23} \gg \Omega_{12}\)) to \(\pi/2\) (\(\Omega_{23} \ll\Omega_{12}\)). Hence in the case of ideal adiabatic following, trap two (located in the middle) is never populated.
3.2 Transitionless quantum driving
The main drawback of SAP is that it requires the process to be carried out adiabatically and therefore slowly compared to the energy gap [4]. If this requirement is not met, unwanted excitations will lead to imperfect transport. One way to specifically cancel possible diabatic transitions in STIRAP was discussed in [52], and a general approach for recovering adiabatic dynamics in a non-adiabatic regime is to use shortcuts to adiabaticity, such as transitionless quantum driving [38–40]. This technique consists of adding a counter-diabatic term to the original Hamiltonian, whose particular form is given as
$$ H_{\mathrm{CD}} = i \hbar\sum_{n} \bigl(| \partial_{t} \lambda _{n}\rangle \langle\lambda _{n}| - \langle\lambda_{n}|\partial_{t} \lambda_{n}\rangle |\lambda _{n}\rangle \langle\lambda _{n}| \bigr), $$
where the \(|\lambda_{n}\rangle\) are the eigenstates of \(H_{0}\). For the reference Hamiltonian in Eq. (8) this gives [40]
$$ H_{\mathrm{CD}} =- \frac{\hbar\Omega_{31}(t)}{2} K_{3}, \quad \text{with } \Omega_{31}(t) = 2 \dot{\theta}(t) =2 \biggl( \frac{\Omega_{23} \dot {\Omega}_{12} - \Omega_{12} \dot{\Omega}_{23}}{\Omega_{12}^{2} + \Omega _{23}^{2}} \biggr). $$
We will see in Section 4.1 how this exact same scheme can also be obtained using Lewis-Riesenfeld invariants.
Shortcuts to adiabaticity have been studied in the context of STIRAP [40, 53], i.e., population transfer between internal levels. Its spatial analogue is more challenging as it requires that the additional tunnelling coupling between sites one and three is imaginary (see the definition of \(K_{3}\) in Eq. (7)). However, the system we have presented here is ideal for this, as the system Hamiltonian Eq. (6) is already equal to the total Hamiltonian \(H_{0} + H_{\mathrm{CD}}\). Other methods to implement the imaginary coupling could be, for example, the use of artificial magnetic fields [54] or angular momentum states [55].
A heuristic but not rigorous explanation of why the coupling needs to be imaginary can be obtained by examining the two ‘paths’ the particle can take to move from trap one to trap three. The first is via SAP and leads to \(|1\rangle \to- |3\rangle\) whereas the second is via the direct coupling the shortcut introduces, which leads to \(|1\rangle \to i e^{i \Phi} |3\rangle\). One can then immediately see that for constructive interference of these two terms the phase needs to have the value \(\Phi= \pi/2\), which corresponds to the required imaginary coupling between states \(|1\rangle\) and \(|3\rangle\). It is also interesting to note that the coupling between traps one and three in the shortcut has the form of a π-pulse
$$ \int_{0}^{T} \Omega_{31}(t) \, dt = 2 \int_{0}^{T} \dot{\theta}(t) \, dt = 2 \bigl[ \theta(T) - \theta(0) \bigr] = \pi. $$
3.3 Invariant-based inverse engineering
Another method of designing shortcuts to adiabaticity is by means of inverse-engineering using Lewis-Riesenfeld (LR) invariants [41, 56]. In this section we will briefly review these methods and then apply them to our particular system to both transport the particle and create a superposition between the three wells.
A LR invariant for a Hamiltonian \(H(t)\) is a Hermitian operator \(I(t)\) satisfying [41]
$$ \frac{\partial I}{\partial t}+\frac{i}{\hbar} [H,I ]=0. $$
Since \(I(t)\) is a constant of motion it can be shown that it has time-independent eigenvalues. It can be further shown that a particular solution of the Schrödinger equation,
$$ i\hbar\partial_{t} \bigl\vert \psi(t) \bigr\rangle = H(t) \bigl\vert \psi(t) \bigr\rangle , $$
can be written as
$$ \bigl\vert \psi_{k}(t) \bigr\rangle =e^{i \alpha_{k}(t)} \bigl\vert \phi_{k}(t) \bigr\rangle , $$
where the \(|\phi_{k}(t)\rangle\) are the instantaneous eigenstates of \(H(t)\) and
$$ \alpha_{k}(t)=\frac{1}{\hbar} \int_{0}^{t} \bigl\langle \phi_{k}(s) \bigr\vert \bigl[i\hbar \partial_{s}-H(s) \bigr] \bigl\vert \phi_{k}(s) \bigr\rangle \, ds $$
are the LR phases. Hence a general solution to the Schrödinger equation can be written as
$$ \bigl\vert \psi(t) \bigr\rangle =\sum_{k} c_{k} \bigl\vert \psi _{k}(t) \bigr\rangle , $$
where the \(c_{k}\) are independent of time.
The idea behind inverse engineering using LR invariants is not to follow an instantaneous eigenstate of the \(H(t)\) as one would in the adiabatic case, but rather follow an eigenstate of \(I(t)\) (up to the LR phase). To guarantee that the eigenstates coincide at the beginning and the end of the process, it is necessary that the invariant and the Hamiltonian commute at these times, i.e.,
$$ \bigl[I(0),H(0) \bigr]= \bigl[I(T),H(T) \bigr]=0. $$
One is then free to choose how the state evolves in the intermediate time and once this is fixed, Eq. (13) determines how the Hamiltonian should vary with time to achieve those dynamics.
A LR invariant for a three-level system described by Eq. (6) can be written as
$$ I = -\sin\beta\sin\alpha K_{1}-\sin\beta\cos\alpha K_{2}+ \cos\beta K_{3} , $$
where α and β are time dependent functions which must fulfil the following relations (imposed by Eq. (13))
$$\begin{aligned}& \dot{\alpha} = \frac{\Omega_{12} \sin\alpha+ \Omega_{23} \cos \alpha }{2 \tan\beta} + \frac{\Omega_{31}}{2}, \end{aligned}$$
$$\begin{aligned}& \dot{\beta} = \frac{1}{2} (\Omega_{23} \sin \alpha- \Omega_{12} \cos \alpha). \end{aligned}$$
The eigenstates of this invariant are
$$\begin{aligned}& \bigl\vert \phi_{0}(t) \bigr\rangle = \left ( \begin{matrix} -\sin\beta\cos\alpha\\ -i\cos\beta\\ \sin\beta\sin\alpha \end{matrix} \right ), \end{aligned}$$
$$\begin{aligned}& \bigl\vert \phi_{\pm}(t) \bigr\rangle = \frac{1}{\sqrt{2}}\left ( \begin{matrix} \cos\beta\cos\alpha\pm i\sin\alpha\\ -i\sin\beta\\ -\cos\beta\sin\alpha\pm i\cos\alpha \end{matrix} \right ), \end{aligned}$$
with respective eigenvalues \(\mu_{0}=0\) and \(\mu_{\pm}=\pm1\). One solution of the time-dependent Schrödinger equation is then given by \(|\Psi(t)\rangle = |\phi_{0} (t)\rangle\) as the corresponding LR phase is zero in this case. Note that this invariant is a generalisation of the invariant considered in [57] where a third coupling \(\Omega _{31}\) was not taken into account.
After fixing the boundary conditions using Eq. (18), one is free to choose the functions \(\alpha(t)\) and \(\beta(t)\). Moreover, in this case, one is also free to directly choose the function \(\Omega_{31}\). By inverting Eqs. (20) and (21), the other coupling coefficients are then given by
$$\begin{aligned}& \Omega_{12} = 2 \dot{\alpha}\sin\alpha\tan\beta- 2 \dot{\beta}\cos \alpha- \Omega_{31} \sin\alpha\tan\beta, \end{aligned}$$
$$\begin{aligned}& \Omega_{23} = 2 \dot{\alpha}\cos\alpha\tan\beta+ 2 \dot{\beta}\sin \alpha- \Omega_{31} \cos\alpha\tan\beta. \end{aligned}$$
4 Examples of spatial non-adiabatic passage schemes
In the following we will discuss two examples of spatial non-adiabatic passage derived from LR invariant based inverse engineering in the 3L approximation. The first one is the transport between two different traps, which is shown to be equivalent to the transitionless quantum driving method from Section 3 in some cases. The second scheme will create an equal superposition of the particle in all three traps.
4.1 Transport
The first example of control we examine is the population transfer determined by
$$ \bigl\vert \Psi(0) \bigr\rangle =\vert 1 \rangle \rightarrow \vert \Psi_{\mathrm{target}} \rangle = \bigl\vert \Psi(T) \bigr\rangle =- \vert 3 \rangle, $$
which was considered in the optical regime in [40]. This can be achieved by choosing auxiliary functions that fulfil the boundary conditions
$$ \beta(0)= \beta(T)= - \frac{\pi}{2},\qquad \alpha(0)=0, \quad \text{and} \quad \alpha(T)=\frac{\pi}{2}. $$
The experimentally required tunnelling frequencies are then explicitly given by Eqs. (24) and (25).
For the special choice of \(\beta(t) = -\pi/2\), one can show that \(\langle2|\Psi(t)\rangle = 0\) for all times, i.e. trap two is never occupied during the process. This choice then results in
$$ \tan\alpha= \frac{\Omega_{12}}{\Omega_{23}} \quad \text{and}\quad \Omega_{31} = 2\dot{\alpha}. $$
By identifying α with θ (see Eq. (9)) one can immediately see that this is the same pulse as in the STA scheme derived in Section 3.2.
The transport scheme can be implemented by choosing the counterintuitive SAP pulses \(\Omega_{12}\) and \(\Omega_{23}\) to have a Gaussian profile [4]
$$\begin{aligned}& \Omega_{12}(t) = \Omega_{0} \exp \bigl[-100 (t/T - 1/2 )^{2} \bigr], \end{aligned}$$
$$\begin{aligned}& \Omega_{23}(t) = \Omega_{0} \exp \bigl[-100 (t/T - 1/3 )^{2} \bigr], \end{aligned}$$
and then calculating \(\Omega_{31}\) from Eq. (28). The resulting pulses and associated dynamical populations are shown in Figure 2. As expected the system follows exactly the dark state, transferring the population between states \(|1\rangle\) and \(|3\rangle\) without populating state \(|2\rangle\).
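A minimal integration of the three-level dynamics (my own sketch, not the authors' code; ħ = 1, time in units of τ, and the parameters of Figure 2, Ω0τ = 0.25 and T = 100τ) confirms the perfect transfer:

# Integrate i d|psi>/dt = H(t)|psi> for the transport scheme: Gaussian SAP
# pulses plus the counter-diabatic coupling Omega31 = 2*dtheta/dt.
# Assumptions: hbar = 1, time in units of tau, parameters as in Figure 2.
import numpy as np

T, Omega0, nsteps = 100.0, 0.25, 20000
dt = T / nsteps

def pulses(t):
    s = t / T
    o12 = Omega0 * np.exp(-100*(s - 1/2)**2)
    o23 = Omega0 * np.exp(-100*(s - 1/3)**2)
    do12 = o12 * (-200*(s - 1/2)) / T       # analytic time derivatives
    do23 = o23 * (-200*(s - 1/3)) / T
    o31 = 2 * (o23*do12 - o12*do23) / (o12**2 + o23**2)  # 2*dtheta/dt
    return o12, o23, o31

def rhs(t, y):                              # Hamiltonian of Eq. (6), Phi = pi/2
    o12, o23, o31 = pulses(t)
    H = -0.5 * np.array([[0, o12, -1j*o31],
                         [o12, 0, o23],
                         [1j*o31, o23, 0]])
    return -1j * (H @ y)

psi = np.array([1, 0, 0], dtype=complex)
for j in range(nsteps):                     # classical fourth-order Runge-Kutta
    t = j * dt
    k1 = rhs(t, psi)
    k2 = rhs(t + dt/2, psi + dt/2*k1)
    k3 = rhs(t + dt/2, psi + dt/2*k2)
    k4 = rhs(t + dt, psi + dt*k3)
    psi += dt/6 * (k1 + 2*k2 + 2*k3 + k4)

print(np.round(np.abs(psi)**2, 4))  # populations ~ [0, 0, 1]: particle in trap 3

Setting o31 = 0 and shortening T in this sketch shows the breakdown of plain SAP, whereas with the counter-diabatic coupling the transfer stays exact for any total time, as discussed below.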
Figure 2
Spatial non-adiabatic passage transport in the 3L approximation. \(T/\tau =100\) for \(\Omega _{0} \tau = 0.25\). (a) Modulus of the tunnelling amplitudes. (b) Evolution of the populations \(P_{i}=|\langle i|\Psi (t)\rangle|^{2}\). The time unit τ is defined as \(\tau = m l^{2}/\hbar \).
The fidelity of the transport process as a function of the total time and the phase Φ generated by the magnetic field is shown in Figure 3(a). Transport can be seen to occur with perfect fidelity for any value of the total time if the phase takes the appropriate value \(\Phi= \pi/2\). It can also be seen that the shortcut is successful for any value of the phase in the limit of very short or very long times. The latter one is not surprising, as \(\Omega_{31}\) can be neglected in the adiabatic limit, and hence its phase becomes irrelevant. A similar effect occurs for short total times, where the roles are reversed. In this limit \(\Omega_{31}\) is the largest of all three couplings, and hence the phase relation between it and the other couplings becomes inconsequential. As \(\Omega_{31}\) is a π pulse, perfect population transfer in this regime can be achieved regardless of the phase.
Figure 3
Transport process \(\pmb{|1\rangle \to -|3\rangle}\) in the 3L approximation. (a) Fidelity as a function of the total time and the total magnetic phase traversing the system. The green contour line is defined by \(P_{3}=99\%\). (b) Probabilities of population in each of the traps for \(T/\tau =48\) (indicated by a dashed white line in (a)) as a function of the total magnetic phase traversing the system. The dashed black line indicates the optimal value of the phase \(\Phi =\pi /2\).
However, in order to maintain this pulse area, a strong coupling is required for very short processes, as the strength of \(\Omega_{31}\) is inversely proportional to T. This sets a bound on how fast the scheme can be implemented, as any physical implementation will have a maximum tunnelling amplitude. Setting the maximum value of \(\Omega_{31}\) to \(0.25/\tau\), the minimum process times T to achieve fidelities above 99% are approximately 880τ for SAP and 100τ for the shortcut scheme. These times are similar to the ones achievable in a spin-dependent transport scheme recently presented by Masuda et al. [58]; however, the setup in their work requires four traps as well as a constant and an AC magnetic field.
It is worth noting that this system also allows for the possibility of measuring the magnetic flux \(\Phi_{B}\), as the amount of transferred population oscillates as a function of the total phase Φ, which is directly related to the magnetic flux as \(\Phi=\frac{q}{\hbar}\Phi_{B}\). As an example we show the occupation probabilities for \(T/\tau= 48\) in each trap at the end of the process as a function of the phase in Figure 3(b). One can see that the populations strongly depend on the phase and over a large range of values one can therefore determine the magnetic flux. The exact relationship between the probabilities and the magnetic flux differs for different total times T.
4.2 Creation of a three-trap superposition
The second scheme we discuss highlights the generality of the LR-invariant-based method. In this scheme we create an equal superposition of the particle being in all three traps, which means that the initial and target states are
$$ \bigl\vert \Psi(0) \bigr\rangle = \vert 1 \rangle \rightarrow \vert \Psi_{\mathrm{target}} \rangle = \bigl\vert \Psi (T) \bigr\rangle = \frac{1}{\sqrt{3}} \bigl( \vert 1 \rangle- i \vert 2 \rangle - \vert 3 \rangle\bigr). $$
This can be realised by imposing the boundary conditions
$$\begin{aligned}& \beta(0)=-\frac{\pi}{2},\qquad \beta(T)=-\arctan\sqrt{2}, \end{aligned}$$
$$\begin{aligned}& \alpha(0)=0,\qquad \alpha(T)=\frac{\pi}{4}, \end{aligned}$$
on the auxiliary functions. A simple ansatz which fulfils these boundary conditions is a fourth order polynomial for \(\beta(t)\) and third order polynomials for \(\alpha (t)\) and \(\Omega_{31}(t)\). The pulses are then obtained from Eqs. (24) and (25) and their form is shown in Figure 4(a). From Figure 4(b) it can be seen that this choice creates the target state at the final time with perfect fidelity.
Figure 4
Spatial non-adiabatic superposition scheme \(\pmb{\vert 1 \rangle \to\frac{1}{\sqrt{3}} ( \vert 1 \rangle - i \vert 2 \rangle - \vert 3 \rangle)}\) in the 3L approximation. \(T/\tau =400\). Sub-figures are the same as in Figure 2 and the fidelity shown in (b) is defined as \(F=\vert \langle \Psi _{\mathrm{target}}|\Psi (t)\rangle \vert ^{2}\).
5 Spatial non-adiabatic passage in the continuum model
While the 3L approximation discussed above gives a clear picture of the physics of the system, it does not include effects such as excitations to higher energy states that can occur during the process. We will therefore in the following test the approximation by numerically integrating the full Schrödinger equation in real space. For this, we will consider traps that are narrow enough to limit the system dynamics to an effectively one-dimensional setting along the azimuthal coordinate, \(x = \varphi R\), i.e., around a circle of radius R, see Figure 1. Moreover, we will assume that the magnetic field is characterised by a vector potential in the azimuthal direction, \(\vec{A} = A \hat {e}_{\varphi}\).
We are therefore dealing with a one-dimensional system of length \(2 \pi R\) with periodic boundary conditions, whose dynamics are described by the following Schrödinger equation
$$ i \hbar\frac{\partial\psi}{\partial t} = \frac{1}{2m} \biggl(- i \hbar \frac{\partial}{\partial x} - q A \biggr)^{2} \psi+ V(x) \psi. $$
We assume a constant vector potential throughout the dynamical part of the protocols, as any time-varying vector potential would produce an unwanted force due to the electric field \(\vec{E} = - \partial_{t} \vec{A}\).
In order to be able to apply a well-defined phase we model the trapping sites as highly localised point-like potentials of depth \(\epsilon_{j}\) at the positions \(x_{j} = j l - l/2\) (see Figure 5). They are separated by square barriers of heights \(V_{jk}(t)\) (and length l), giving a total potential
$$ V(x,t) = - \sum_{j=1}^{3} \epsilon_{j}(t) \delta(x - x_{j})+ \textstyle\begin{cases} V_{31}(t) & \text{if } 0 < x < x_{1}, \\ V_{12}(t) & \text{if } x_{1} < x < x_{2}, \\ V_{23}(t) & \text{if } x_{2} < x < x_{3}, \\ V_{31}(t) & \text{if } x_{3} < x < 3l . \end{cases} $$
Since point-like potentials are difficult to implement numerically, in the simulations below they are approximated by narrow Gaussians. It is important to note that this model is not designed to give realistic estimates of the fidelities or to exactly reproduce the dynamics of the 3L approximation. It is a toy model to validate the basic underlying processes and to show that our schemes also make sense in the continuum.
Figure 5
Schematic of the potential used in the numerical simulations (black line) with the localised states in each trap (coloured areas). The Gaussian shape of the traps is exaggerated here for clarity.
As mentioned above, the tunnelling amplitudes \(\Omega_{jk}(t)\) in the 3L approximation are related to the barrier heights \(V_{jk}(t)\) of the continuum model, see the Appendix. However, changing the barrier heights in order to achieve tunnelling will also affect the energies of the localised states in the neighbouring traps. Therefore, in order to reproduce the resonance of the 3L approximation (where the diagonal elements of the Hamiltonian are always zero) in the continuum model, the depths of the delta potentials \(\epsilon_{j}\) have to be adjusted as the barrier heights change, see Figure 5. Finally, to map the barrier heights \(V_{jk}\) and trap depths \(\epsilon_{j}\) of the continuum model to the tunnelling amplitudes \(\Omega_{jk}\) of the 3L approximation, we numerically calculate the overlaps of neighbouring delta-trap eigenstates.
Results for transport of a particle using the shortcut scheme described in Section 4.1 are shown in Figure 6 and the barrier heights and trap depths used to match the pulses given in Figure 2 are shown in Figure 6(a), (b). The probability density during the process can be seen in Figure 6(c) and the populations in each trap are given in Figure 6(d). While the process is not perfect, one can see that the particle is transported to the final trap with a fidelity of 87%. The effect of the magnetic field can be seen in Figure 6(e), (f), where we show results for the same process but with an inverted magnetic field (using a total phase of \(\Phi= -\pi/2\)). In this case the interference between the adiabatic and shortcut paths is destructive, and almost no population ends up in the final trap.
Figure 6
Spatial non-adiabatic transport process in the continuum model. \(T/\tau =100\). (a), (b) Barrier heights and trap depths obtained by mapping the couplings in Figure 2(a). (c) Evolution of the particle density \(|\psi (x,t)|^{2}\). (d) Corresponding populations \(P_{i}=\vert \langle i|\Psi (t)\rangle \vert ^{2}\) in each trap and of the target state. (e), (f) are the same as (c), (d) but with the magnetic flux flowing in the opposite direction. The width of the Gaussian traps is \(10^{-4}l\).
The results for the creation of the superposition state discussed in Section 4.2 are shown in Figure 7. The observed dynamics are very similar to the one in the 3L approximation and the process reaches a final fidelity of the target state of 91%.
Figure 7
Same as Figure 6(a)-(d) but for the spatial non-adiabatic superposition scheme given in Eq. (31) in the continuum model. \(T/\tau =400\). \(F=\vert \langle \psi _{\mathrm{target}}|\psi (t)\rangle \vert ^{2}\) is the fidelity of the process.
Since the continuum model has many more degrees of freedom than the 3L model, it is not surprising that the fidelities obtained are lower. Nevertheless, the basic functioning of our spatial non-adiabatic techniques is clearly established from the calculations shown above. Optimising the fidelity in the continuum is an interesting task which, however, goes beyond the scope of the current work.
6 Conclusions and outlook
We have shown how complex tunnel frequencies in single-particle systems allow one to develop spatial non-adiabatic passage techniques that can lead to fast and robust processes for quantum technologies. In particular, we have discussed the case of a single, charged particle in a microtrap environment. The complex tunnelling couplings are obtained from the addition of a constant magnetic field, and have allowed us to generalise adiabatic state preparation protocols beyond the usual spatial adiabatic passage techniques [4]. This demonstrates that non-adiabatic techniques can be as efficient as their adiabatic counterparts, without requiring long operation times.
In particular, we have discussed the implementation of the counter-diabatic term for spatial adiabatic passage transport via a direct coupling of all the traps. This was, in a second step, generalised to a flexible and robust method for preparing any state of the single-particle system by using Lewis-Riesenfeld invariants. As an example, we have shown that an equal spatial superposition state between the three wells can be created on a short time scale. Finally, we have presented numerical evidence that spatial non-adiabatic processes work also in a one-dimensional toy model by introducing a mapping between the discrete three-level approximation and a continuum model.
While in this work we have focused on a three-trap system, an interesting extension would be to investigate similar schemes in larger systems, or in different physical settings (for example, superconducting qubits [59]). Often, if the transitionless quantum driving technique is directly applied to complex quantum systems, the additional counter-adiabatic terms become very complicated, hard to implement or even unphysical. Nevertheless, the steps outlined in our work (using a few-level approximation, applying the shortcut technique, and then mapping everything back to a continuous model) can in principle be applied to any trap configuration. These steps might lead to schemes which are much easier to implement experimentally than the direct application of transitionless quantum driving. However, each of these generalised configurations would need to be studied on an individual basis.
It would also be very interesting to see the effect of interactions in this system. For very strong interactions such that double occupancy of a site is suppressed and a single empty site is present, one might expect to observe similar dynamics but for the empty site [9]. In this case, spatial non-adiabatic ideas can be straightforwardly transferred. For intermediate interaction strengths (but stronger than the tunnelling couplings), repulsively-bound pair processes have been shown to dominate the dynamics and single-particle-like dynamics can be recovered for the pair [10, 60, 61]. In this case the presented techniques might be extended for a particle pair.
Finally, it is also worth noting that the complex tunnelling couplings we introduce can be used to implement techniques based on composite pulses [62].
This work has received financial support from Science Foundation Ireland under the International Strategic Cooperation Award Grant No. SFI/13/ISCA/2845 and the Okinawa Institute of Science and Technology Graduate University. We are grateful to David Rea for useful discussions and for commenting on the manuscript.
Authors’ Affiliations
Quantum Systems Unit, Okinawa Institute of Science and Technology Graduate University
Department of Physics, University College Cork
Department of Physics, Shanghai University
1. Bergmann K, Theuer H, Shore BW. Rev Mod Phys. 1998;70:1003.
2. Bergmann K, Vitanov NV, Shore BW. J Chem Phys. 2015;142:170901.
3. Vitanov NV, Rangelov AA, Shore BW, Bergmann K. Rev Mod Phys. 2017;89:015006.
4. Menchon-Enrich R, Benseny A, Ahufinger V, Greentree AD, Busch T, Mompart J. Rep Prog Phys. 2016;79:074401.
5. Eckert K, Lewenstein M, Corbalán R, Birkl G, Ertmer W, Mompart J. Phys Rev A. 2004;70:023606.
6. Eckert K, Mompart J, Corbalán R, Lewenstein M, Birkl G. Opt Commun. 2006;264:264.
7. McEndoo S, Croke S, Brophy J, Busch T. Phys Rev A. 2010;81:043640.
8. Gajdacz M, Opatrný T, Das KK. Phys Rev A. 2011;83:033623.
9. Benseny A, Fernández-Vidal S, Bagudà J, Corbalán R, Picón A, Roso L, Birkl G, Mompart J. Phys Rev A. 2010;82:013604.
10. Benseny A, Gillet J, Busch T. Phys Rev A. 2016;93:033629.
11. Menchon-Enrich R, McEndoo S, Busch T, Ahufinger V, Mompart J. Phys Rev A. 2014;89:053611.
12. Menchon-Enrich R, McEndoo S, Mompart J, Ahufinger V, Busch T. Phys Rev A. 2014;89:013626.
13. Greentree AD, Cole JH, Hamilton AR, Hollenberg LCL. Phys Rev B. 2004;70:235317.
14. Fountoulakis A, Paspalakis E. J Appl Phys. 2013;113:174301.
15. Seaman BT, Krämer M, Anderson DZ, Holland MJ. Phys Rev A. 2007;75:023615.
16. Pepino RA, Cooper J, Anderson DZ, Holland MJ. Phys Rev Lett. 2009;103:140405.
17. Jaksch D, Briegel H-J, Cirac JI, Gardiner CW, Zoller P. Phys Rev Lett. 1999;82:1975.
18. Loiko Y, Ahufinger V, Menchon-Enrich R, Birkl G, Mompart J. Eur Phys J D. 2014;68:147.
19. Longhi S. Phys Rev E. 2006;73:026607.
20. Longhi S, Della Valle G, Ornigotti M, Laporta P. Phys Rev B. 2007;76:201101.
21. Torrontegui E, Ibáñez S, Martínez-Garaot S, Modugno M, del Campo A, Guéry-Odelin D, Ruschhaupt A, Chen X, Muga JG. Adv At Mol Opt Phys. 2013;62:117.
22. Ruschhaupt A, Muga JG. J Mod Opt. 2013;61:828.
23. Ruschhaupt A, Chen X, Alonso D, Muga JG. New J Phys. 2012;14:093040.
24. Daems D, Ruschhaupt A, Sugny D, Guérin S. Phys Rev Lett. 2013;111:050404.
25. Lu XJ, Chen X, Ruschhaupt A, Alonso D, Guérin S, Muga JG. Phys Rev A. 2013;88:033406.
26. Kiely A, Ruschhaupt A. J Phys B, At Mol Opt Phys. 2014;47:115501.
27. Guéry-Odelin D, Muga JG. Phys Rev A. 2014;90:063425.
28. Lu XJ, Muga JG, Poschinger UG, Schmidt-Kaler F, Ruschhaupt A. Phys Rev A. 2014;89:063414.
29. Zhang Q, Chen X, Guéry-Odelin D. Phys Rev A. 2015;92:043410.
30. Kiely A, Benseny A, Busch T, Ruschhaupt A. J Phys B, At Mol Opt Phys. 2016;49:215003.
31. Zhang Q, Muga JG, Guéry-Odelin D, Chen X. J Phys B, At Mol Opt Phys. 2016;49:125503.
32. Hsieh C-Y, Shim Y-P, Korkusinski M, Hawrylak P. Rep Prog Phys. 2012;75:114501.
33. Domínguez F, Platero G, Kohler S. Chem Phys. 2010;375:284.
34. Huneke J, Platero G, Kohler S. Phys Rev Lett. 2013;110:036802.
35. Jong LM, Greentree AD. Phys Rev B. 2010;81:035311.
36. Mousolou VA. Europhys Lett. 2017;117:10006.
37. Zeng Q-B, Chen S, Lü R. arXiv:1608.00065 [quant-ph].
38. Demirplak M, Rice SA. J Phys Chem A. 2003;107:9937.
39. Berry MV. J Phys A. 2009;42:365303.
40. Chen X, Lizuain I, Ruschhaupt A, Guéry-Odelin D, Muga JG. Phys Rev Lett. 2010;105:123003.
41. Lewis HR, Riesenfeld WB. J Math Phys. 1969;10:1458.
42. Braakman FR, Barthelemy P, Reichl C, Wegscheider W, Vandersypen LMK. Nat Nanotechnol. 2013;8:432.
43. Seidelin S, Chiaverini J, Reichle R, Bollinger JJ, Leibfried D, Britton J, Wesenberg JH, Blakestad RB, Epstein RJ, Hume DB, Itano WM, Jost JD, Langer C, Ozeri R, Shiga N, Wineland DJ. Phys Rev Lett. 2006;96:253003.
44. Noguchi A, Shikano Y, Toyoda K, Urabe S. Nat Commun. 2014;5:3868.
45. Tabakov B, Benito F, Blain M, Clark CR, Clark S, Haltli RA, Maunz P, Sterk JD, Tigges C, Stick D. Phys Rev Appl. 2015;4:031001.
46. Yoshimura B, Stork M, Dadic D, Campbell WC, Freericks JK. EPJ Quantum Technol. 2015;2:2.
47. Aharonov Y, Bohm D. Phys Rev. 1959;115:485.
48. Graf M, Vogl P. Phys Rev B. 1995;51:4940.
49. Ismail-Beigi S, Chang EK, Louie SG. Phys Rev Lett. 2001;87:087402.
50. Cehovin A, Canali CM, MacDonald AH. Phys Rev B. 2004;69:045411.
51. Carroll CE, Hioe FT. J Opt Soc Am B. 1988;5:1335.
52. Unanyan RG, Yatsenko LP, Bergmann K, Shore BW. Opt Commun. 1997;139:48.
53. Du YX, Liang ZT, Li YC, Yue XX, Lv QX, Huang W, Chen X, Yan H, Zhu SL. Nat Commun. 2016;7:12479.
54. Dalibard J, Gerbier F, Juzeliūnas G, Öhberg P. Rev Mod Phys. 2011;83:1523.
55. Polo J, Mompart J, Ahufinger V. Phys Rev A. 2016;93:033613.
56. Chen X, Ruschhaupt A, Schmidt S, del Campo A, Guéry-Odelin D, Muga JG. Phys Rev Lett. 2010;104:063002.
57. Chen X, Muga JG. Phys Rev A. 2012;86:033405.
58. Masuda S, Tan KY, Nakahara M. arXiv:1612.08389 [cond-mat.mes-hall].
59. Roushan P, Neill C, Megrant A, Chen Y, Babbush R, Barends R, Campbell B, Chen Z, Chiaro B, Dunsworth A, Fowler A, Jeffrey E, Kelly J, Lucero E, Mutus J, O’Malley PJJ, Neeley M, Quintana C, Sank D, Vainsencher A, Wenner J, White T, Kapit E, Neven H, Martinis J. Nat Phys. 2017;13:146.
60. Bello M, Creffield CE, Platero G. Sci Rep. 2016;6:22562.
61. Bello M, Creffield CE, Platero G. Phys Rev B. 2017;95:094303.
62. Torosov BT, Vitanov NV. Phys Rev A. 2011;83:053420.
© The Author(s) 2017 |
92c1458fedeee11d | Determinism necessarily entails that humanity or individual humans may not change the course of the future and its events (a position known as fatalism); however, some determinists believe that the level to which human beings have influence over their future is itself merely dependent on present and past. Causal determinism is associated with, and relies upon, the ideas of materialism and causality. Some of the main philosophers who have dealt with this issue are Marcus Aurelius, Omar Khayyám, Thomas Hobbes, Baruch Spinoza, Gottfried Leibniz, David Hume, Baron d’Holbach (Paul Heinrich Dietrich), Pierre-Simon Laplace, Arthur Schopenhauer, William James, Friedrich Nietzsche, Albert Einstein, Niels Bohr, and, more recently, John Searle, Ted Honderich, and Daniel Dennett.
Mecca Chiesa notes that the probabilistic or selectionistic determinism of B.F. Skinner comprised a wholly separate conception of determinism that was not mechanistic at all. A mechanistic determinism would assume that every event has an unbroken chain of prior occurrences, but a selectionistic or probabilistic model does not.
The nature of determinism
The exact meaning of the term determinism has historically been subject to rigorous scrutiny and several interpretations. Some people, called Incompatibilists, view determinism and free will as mutually exclusive. The belief that free will is an illusion is known as Hard Determinism. Others, labeled Compatibilists (or Soft Determinists), believe that the two ideas can be coherently reconciled. Incompatibilists who accept free will but reject determinism are called Philosophical Libertarians (not to be confused with Political Libertarians). Some feel free will refers to the metaphysical truth of independent agency, whereas others simply define it as the feeling of agency that humans experience when they act. Many will agree that determinism is the theory that human choices and actions can be determined from external causes, while free will is the theory that human choices and actions are determined by internal causes: that an individual is the prime mover of his life.
Ted Honderich, in his book How Free Are You? – The Determinism Problem gives the following summary of the theory of determinism:
In its central part, determinism is the theory that our choices and decisions and what gives rise to them are effects. What the theory comes to therefore depends on what effects are taken to be… [I]t is effects that seem fundamental to the subject of determinism and how it affects our lives.
Varieties of determinism
Causal (or nomological) determinism is the thesis that future events are necessitated by past and present events combined with the laws of nature. Such determinism is sometimes illustrated by the thought experiment of Laplace’s demon. Imagine an entity that knows all facts about the past and the present, and knows all natural laws that govern the universe. Such an entity might be able to use this knowledge to foresee the future, down to the smallest detail. Pierre-Simon Laplace’s determinist “dogma” (as described by Stephen Hawking) is generally referred to as “scientific determinism” and predicated on the supposition that all events have a cause and effect and the precise combination of events at a particular time engender a particular outcome. This causal determinism has a direct relationship with predictability. Perfect predictability implies strict determinism, but lack of predictability does not necessarily imply lack of determinism. Limitations on predictability could alternatively be caused by factors such as a lack of information or excessive complexity. An example of this could be found by looking at a bomb dropping from the air. Through mathematics, we can predict the time the bomb will take to reach the ground, and we also know what will happen once the bomb explodes. Any small errors in prediction might arise from our not measuring some factors, such as puffs of wind or variations in air temperature along the bomb’s path.
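For instance, neglecting air resistance, the fall time follows from elementary kinematics (a toy calculation; the 1000 m drop height is an arbitrary assumed value):

```python
import math

h, g = 1000.0, 9.81          # assumed drop height (m) and gravity (m/s^2); drag ignored
t = math.sqrt(2 * h / g)     # constant-acceleration kinematics
print(f"fall time: {t:.1f} s")   # ~14.3 s, the same on every run: fully determined
```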
Logical determinism is the notion that all propositions, whether about the past, present or future, are either true or false. The problem of free will, in this context, is the problem of how choices can be free, given that what one does in the future is already determined as true or false in the present. This is referred to as the problem of future contingents.
Additionally, there is environmental determinism, also known as climatic or geographical determinism, which holds the view that the physical environment, rather than social conditions, determines culture. Those who believe this view say that humans are strictly defined by stimulus-response (environment-behavior) and cannot deviate. Key proponents of this notion have included Ellen Churchill Semple, Ellsworth Huntington, Thomas Griffith Taylor and possibly Jared Diamond, although his status as an environmental determinist is debated.
Biological determinism is the idea that all behavior, belief, and desire are fixed by our genetic endowment. There are other theses on determinism, including cultural determinism and the narrower concept of psychological determinism. Combinations and syntheses of determinist theses, e.g. bio-environmental determinism, are even more common. Addiction specialist Dr. Drew Pinsky relates addiction to biological determinism:
“Absolutely. It’s a complex disorder, but it clearly has a genetic basis. In fact, in the definition of the disease, we consider genetics absolutely a crucial piece of the definition. So the definition as stated in a consensus conference that was published in the early ’90s, it’s a genetic disorder with a biological basis. The hallmark is the progressive use in the face of adverse consequence, and then finally denial.”
Theological determinism is the thesis that there is a God who determines all that humans will do, either by knowing their actions in advance, via some form of omniscience [9], or by decreeing their actions in advance. The problem of free will, in this context, is the problem of how our actions can be free, if there is a being who has determined them for us ahead of time.
Determinism with regard to ethics
Often determinism is connected with ethics as an excuse for unethical actions. Hard determinists hold that morality is caused through hereditary and environmental means. Opponents of determinism argue that without belief in uncaused free will, humans would have no reason to behave ethically. Determinism, however, does not negate a person's emotions and reason; it simply proposes a source for what causes us to fall back on moral behavior. Anyone moved to immoral actions by the idea of determinism was susceptible to them before encountering the idea, and did not hold strong moral judgment prior to it.
Determinism implies that the moral differences between two people are caused by hereditary predispositions and environmental effects and events. This does not mean determinists are against punishing people who commit crimes, because (depending on the branch of determinism) the cause of a person’s morality does not necessarily lie within that person.
Determinism in Eastern tradition
The idea that the entire universe is a deterministic system has been articulated in both Eastern and non-Eastern religion, philosophy, and literature.
A shifting flow of probabilities for futures lies at the heart of theories associated with the Yi Jing (or I Ching, the Book of Changes). Probabilities take center stage away from things and people. A kind of “divine” volition sets the fundamental rules for the working out of probabilities in the universe, and human volitions are always a factor in the ways that humans can deal with the real-world situations they encounter. If one’s situation in life is surfing on a tsunami, one still has some range of choices even in that situation: one person might give up, while another might choose to struggle and perhaps survive. The Yi Jing mentality is much closer to the mentality of quantum physics than to that of classical physics, and also finds parallels in voluntarist or Existentialist ideas of taking one’s life as one’s project.
This theory has also seen use in popular culture in Japan. In the anime xxxHolic, the term hitsuzen is used to describe the theory of determinism, although with a more magical feel to its explanation.
The followers of the philosopher Mozi made some early discoveries in optics and other areas of physics, ideas that were consonant with deterministic ideas.
In the philosophical schools of India, the concept of the precise and continual effect of the laws of Karma on the existence of all sentient beings is analogous to the Western concept of determinism. Karma is the concept of “action” or “deed” in Indian religions. It is understood as that which causes the entire cycle of cause and effect (i.e., the cycle called saṃsāra) originating in ancient India and treated in Hindu, Jain, Sikh and Buddhist philosophies. Karma is considered predetermined and deterministic in the universe, with the exception of a human, who through free will can (somewhat) influence the future. See Karma in Hinduism.
Determinism in Western tradition
In the West, the Ancient Greek atomists Leucippus and Democritus were the first to anticipate determinism when they theorized that all processes in the world were due to the mechanical interplay of atoms, but this theory did not gain much support at the time. Determinism in the West is often associated with Newtonian physics, which depicts the physical matter of the universe as operating according to a set of fixed, knowable laws. The “billiard ball” hypothesis, a product of Newtonian physics, argues that once the initial conditions of the universe have been established, the rest of the history of the universe follows inevitably. If it were actually possible to have complete knowledge of physical matter and all of the laws governing that matter at any one time, then it would be theoretically possible to compute the time and place of every event that will ever occur (Laplace’s demon). In this sense, the basic particles of the universe operate in the same fashion as the rolling balls on a billiard table, moving and striking each other in predictable ways to produce predictable results.
Whether or not it is all-encompassing in so doing, Newtonian mechanics deals only with caused events, e.g.: If an object begins in a known position and is hit dead on by an object with some known velocity, then it will be pushed straight toward another predictable point. If it goes somewhere else, the Newtonians argue, one must question one’s measurements of the original position of the object, the exact direction of the striking object, gravitational or other fields that were inadvertently ignored, etc. Then, they maintain, repeated experiments and improvements in accuracy will always bring one’s observations closer to the theoretically predicted results. When dealing with situations on an ordinary human scale, Newtonian physics has been so enormously successful that it has no competition. But it fails spectacularly as velocities become some substantial fraction of the speed of light and when interactions at the atomic scale are studied. Before the discovery of quantum effects and other challenges to Newtonian physics, “uncertainty” was always a term that applied to the accuracy of human knowledge about causes and effects, and not to the causes and effects themselves.
Newtonian mechanics, like the physical theories that followed it, is the result of observation and experiment, and so it describes “how it all works” within some tolerance. However, earlier Western scientists believed that if any logical connections could be found between an observed cause and effect, there must also be some absolute natural laws behind them. Belief in perfect natural laws driving everything, rather than laws that merely describe what we should expect, led to the search for a set of simple universal laws that rule the world. This movement significantly encouraged deterministic views in Western philosophy.
Minds and bodies
Some determinists argue that materialism does not present a complete understanding of the universe, because while it can describe determinate interactions among material things, it ignores the minds or souls of conscious beings.
A number of positions can be delineated:
1. Immaterial souls exist and exert a non-deterministic causal influence on bodies (traditional free will, interactionist dualism).
2. Immaterial souls exist, but are part of a deterministic framework.
3. Immaterial souls exist, but exert no causal influence, free or determined (epiphenomenalism, occasionalism).
4. Immaterial souls do not exist; the mind-body problem has some other solution.
5. Immaterial souls are all that exist (idealism).
Modern perspectives on determinism
Determinism and a first cause
Since the early twentieth century, when astronomer Edwin Hubble first hypothesized that redshift shows the universe is expanding, prevailing scientific opinion has been that the current state of the universe is the result of a process described by the Big Bang. Many theists and deists claim that it therefore has a finite age, pointing out that something cannot come from nothing. The big bang does not describe where the compressed universe came from; instead it leaves the question open. Different astrophysicists hold different views about precisely how the universe originated (cosmogony). The philosophical argument here would be that the big bang triggered every single action, and possibly every mental thought, through the system of cause and effect.
Determinism and generative processes
Some proponents of emergentist or generative philosophy, cognitive science and evolutionary psychology argue that free will does not exist. They suggest instead that an illusion of free will is experienced due to the generation of infinite behaviour from the interaction of a finite, deterministic set of rules and parameters. Thus the unpredictability of the behaviour emerging from deterministic processes leads to a perception of free will, even though free will as an ontological entity does not exist.[14][15]
As an illustration, the strategy board games chess and Go have rigorous rules in which no information (such as cards’ face values) is hidden from either player and no random events (such as dice rolling) happen within the game. Yet chess, and especially Go with its extremely simple deterministic rules, can still produce an extremely large number of unpredictable moves. By this analogy, it is suggested, the experience of free will emerges from the interaction of finite rules and deterministic parameters that generate infinite and unpredictable behaviour. Yet, if all these events were accounted for, and there were a known way to evaluate them, the seemingly unpredictable behaviour would become predictable.
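The same point can be made with a system far simpler than chess or Go. The sketch below (purely illustrative) iterates elementary cellular automaton rule 30, whose one-line deterministic update nevertheless produces a pattern whose centre column passes many statistical tests for randomness:

```python
import numpy as np

def rule30(width=31, steps=15):
    """Deterministic one-line update; output looks statistically random."""
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1                          # single seed cell
    history = [row.copy()]
    for _ in range(steps):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = left ^ (row | right)               # rule 30: new = left XOR (centre OR right)
        history.append(row.copy())
    return history

for r in rule30():
    print(''.join('#' if c else '.' for c in r))
```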
Determinism in mathematical models
Many mathematical models are deterministic. This is true of most models involving differential equations (notably, those measuring rate of change over time). Mathematical models that are not deterministic because they involve randomness are called stochastic. Because of sensitive dependence on initial conditions, some deterministic models may appear to behave non-deterministically; in such cases, a deterministic interpretation of the model may not be useful due to numerical instability and a finite amount of precision in measurement. Such considerations can motivate the use of a stochastic model even when the underlying system is accurately modeled in the abstract by deterministic equations. A truly non-deterministic event, by contrast, is independent of time and observer, and is therefore called an intrinsically random event.
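A standard minimal example of such sensitive dependence is the logistic map. In the following few lines (with arbitrarily chosen starting values), two trajectories obeying the identical deterministic rule, started 10^-10 apart, diverge to order-one differences within a few dozen steps:

```python
# Two runs of the deterministic logistic map x -> r*x*(1-x), r = 4, started
# a distance 1e-10 apart; the separation grows to order one.
r, x, y = 4.0, 0.3, 0.3 + 1e-10
for n in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 10 == 0:
        print(n, abs(x - y))
```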
Determinism, quantum mechanics, and classical physics
Since the beginning of the 20th century, quantum mechanics has revealed previously concealed aspects of events. Newtonian physics, taken in isolation rather than as an approximation to quantum mechanics, depicts a universe in which objects move in perfectly determinative ways. At human-scale levels of interaction, Newtonian mechanics makes predictions that agree with observation within the accuracy of measurement. Poorly designed and fabricated guns and ammunition scatter their shots rather widely around the center of a target, while better guns produce tighter patterns. Absolute knowledge of the forces accelerating a bullet should produce absolutely reliable predictions of its path, or so it was thought. However, knowledge is never absolute in practice, and the equations of Newtonian mechanics can exhibit sensitive dependence on initial conditions, meaning that small errors in knowledge of initial conditions can result in arbitrarily large deviations from predicted behavior.
At atomic scales the paths of objects can only be predicted in a probabilistic way. The paths may not be exactly specified in a full quantum description of the particles; “path” is a classical concept which quantum particles do not exactly possess. The probability arises from the measurement of the perceived path of the particle. In some cases, a quantum particle may trace an exact path, and the probability of finding the particle in that path is one. The quantum development is at least as predictable as the classical motion, but it describes wave functions that cannot be easily expressed in ordinary language. In double-slit experiments, photons are fired singly through a double-slit apparatus at a distant screen and do not arrive at a single point, nor do the photons arrive in a scattered pattern analogous to bullets fired by a fixed gun at a distant target. Instead, the light arrives in varying concentrations at widely separated points, and the distribution of its collisions with the target can be calculated reliably. In that sense the behavior of light in this apparatus is deterministic, but there is no way to predict where in the resulting interference pattern an individual photon will make its contribution (see Heisenberg Uncertainty Principle).
Some have argued that, in addition to the conditions humans can observe and the laws we can deduce, there are hidden factors or “hidden variables” that determine absolutely in which order photons reach the detector screen. They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a merely probabilistically-determinative way. In actuality, they proceed in an absolutely deterministic way. Although matters are still subject to some measure of dispute, quantum mechanics makes statistical predictions which would be violated if some local hidden variables existed. There have been a number of experiments to verify those predictions, and so far they do not appear to be violated, though many physicists believe better experiments are needed to conclusively settle the question. (See Bell test experiments.) It is possible, however, to augment quantum mechanics with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. An example is the Bohm interpretation of quantum mechanics.
On the macro scale it can matter very much whether a bullet arrives at a certain point at a certain time, as snipers are well aware; there are analogous quantum events that have macro- as well as quantum-level consequences. It is easy to contrive situations in which the arrival of an electron at a screen at a certain point and time would trigger one event and its arrival at another point would trigger an entirely different event. (See Schrödinger’s cat.)
Even before the laws of quantum mechanics were fully developed, the phenomenon of radioactivity posed a challenge to determinism. A gram of uranium-238, a commonly occurring radioactive substance, contains some 2.5 × 10^21 atoms. By all tests known to science these atoms are identical and indistinguishable. Yet about 12600 times a second one of the atoms in that gram will decay, giving off an alpha particle. This decay does not depend on external stimulus, and no extant theory of physics predicts, with realistically obtainable knowledge, when any given atom will decay. The uranium found on earth is thought to have been synthesized during a supernova explosion that occurred roughly 5 billion years ago. For determinism to hold, every uranium atom must contain some internal “clock” that specifies the exact time it will decay. And somehow the laws of physics must specify exactly how those clocks were set as each uranium atom was formed during the supernova collapse.
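The quoted figures can be checked with a few lines of arithmetic; the small difference from the 12600 in the text comes from the half-life and year-length values assumed here:

```python
import math

N_A, molar_mass = 6.022e23, 238.0                 # atoms per mole, g per mole
half_life_s = 4.468e9 * 3.156e7                   # U-238 half-life, seconds
N = N_A / molar_mass                              # atoms in one gram: ~2.5e21
activity = math.log(2) / half_life_s * N          # lambda * N decays per second
print(f"{N:.2e} atoms, about {activity:.0f} decays per second")
```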
Exposure to alpha radiation can cause cancer. For this to happen, at some point a specific alpha particle must alter some chemical reaction in a cell in a way that results in a mutation. Since molecules are in constant thermal motion, the exact timing of the radioactive decay that produced the fatal alpha particle matters. If probabilistically determined events do have an impact on the macro events—such as when a person who could have been historically important dies in youth of a cancer caused by a random mutation—then the course of history is not predictable from the dawn of time.
The time-dependent Schrödinger equation gives the first time derivative of the quantum state. That is, it explicitly and uniquely predicts the development of the wave function with time.
i\hbar\frac{\partial\psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2} + V(x)\,\psi(x,t)
So quantum mechanics is deterministic, provided that one accepts the wave function itself as reality (rather than as probability of classical coordinates). Since we have no practical way of knowing the exact magnitudes, and especially the phases, in a full quantum mechanical description of the causes of an observable event, this turns out to be philosophically similar to the “hidden variable” doctrine.
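This deterministic evolution is easy to exhibit numerically. The sketch below (a standard split-step Fourier integrator with an arbitrarily chosen harmonic potential and units ħ = m = 1; none of these choices come from the text) propagates ψ(x,t) forward in time, and rerunning it from the same initial data reproduces ψ(x,T) exactly:

```python
import numpy as np

L, N, dt, steps = 40.0, 512, 0.01, 500
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # momentum grid
V = 0.5 * x**2                                    # assumed harmonic potential
psi = np.exp(-(x + 3.0)**2).astype(complex)       # displaced Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * L / N)    # normalise

expV = np.exp(-0.5j * dt * V)                     # half-step potential propagator
expK = np.exp(-0.5j * dt * k**2)                  # full kinetic step in Fourier space
for _ in range(steps):                            # Strang splitting: V/2, K, V/2
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))
# restarting from the same initial psi reproduces this final psi bit for bit
```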
According to some, quantum mechanics is more strongly ordered than Classical Mechanics, because while Classical Mechanics is chaotic, quantum mechanics is not. For example, the classical problem of three bodies under a force such as gravity is not integrable, while the quantum mechanical three body problem is tractable and integrable, using the Faddeev Equations. That is, the quantum mechanical problem can always be solved to a given accuracy with a large enough computer of predetermined precision, while the classical problem may require arbitrarily high precision, depending on the details of the motion. This does not mean that quantum mechanics describes the world as more deterministic, unless one already considers the wave function to be the true reality. Even so, this does not get rid of the probabilities, because we can’t do anything without using classical descriptions, but it assigns the probabilities to the classical approximation, rather than to the quantum reality.
Asserting that quantum mechanics is deterministic by treating the wave function itself as reality implies a single wave function for the entire universe, starting at the origin of the universe. Such a “wave function of everything” would carry the probabilities of not just the world we know, but every other possible world that could have evolved. For example, large voids in the distributions of galaxies are believed by many cosmologists to have originated in quantum fluctuations during the big bang. (See cosmic inflation and primordial fluctuations.) If so, the “wave function of everything” would carry the possibility that the region where our Milky Way galaxy is located could have been a void and the Earth never existed at all. (See large-scale structure of the cosmos.)
|
c22de74f8d1fe536 | Saturday 16 April 2011
One Mind vs Many Minds in Physics
Elimination of the One Mind of Louis XVI witnessed by Many Minds: Birth of modernity.
In Dr Faustus of Modern Physics I describe how modern physics was born in the early 20th century from a deal with the Devil replacing the fundamental principles of classical physics of
• objective reality of space and time
• cause-effect: determinism: causality
• logical consistency
by the new fundamental principles of modern physics of
• relativity: subjective reality of space and time under universal invariance
• statistics: atomistic games of roulette
• duality: both wave and particle at the same time.
The deal was motivated by the following problems, which appeared impossible to solve using classical continuum physics and demanded solutions if scientific credibility was to be maintained:
• 2nd law of thermodynamics (irreversibility in formally reversible systems)
• observer independent speed of light (Michelson-Morley experiment)
• blackbody radiation (ultraviolet catastrophe)
• photoelectric effect (inexplicable frequency dependence).
In Dr Faustus of Modern Physics I open a door to different resolutions of these pressing problems with less severe side effects than the relativity-statistics-duality of modern physics. I describe this new approach as
• many-minds physics: many actors/observers: many gods: no master,
as opposed to
• one-mind physics: one universal actor/observer: one God: one master,
which is the current paradigm of modern physics as a combination of
• Einstein's relativity theory based on universal invariance
• quantum mechanics based on Schrödinger's multidimensional wave equation.
I present many-minds physics in the books
The basic difference between many-minds and one-mind physics can be understood as the difference between bottom-up and top-down control of a system, in political terms as the difference between democracy and autocracy/dictatorship, or between a market economy and a socialist economy.
In many-minds relativity each observer is tied to his coordinate system and the pertinent question concerns what agreement of observations by different observers is possible, without asking for universal agreement.
In many-minds quantum mechanics each electron/particle solves its own version of the Schrödinger equation and the multidimensional wave function asking for universal agreement does not appear.
In Computational Thermodynamics and Mathematical Physics of Blackbody Radiation I show that finite precision computation can replace atomistic games of roulette as an explanation of irreversibility in formally reversible systems, and of the 2nd law with its direction of time.
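A minimal illustration of the idea (a toy example in Python, not taken from the books above): the Chirikov standard map is exactly invertible on paper, yet iterating it forward and then applying the exact algebraic inverse in double-precision arithmetic fails to return the initial state, because the chaotic dynamics amplifies the finite-precision rounding errors:

```python
import math

K = 2.0   # kick strength in the chaotic regime

def forward(x, p):
    p = p + K * math.sin(x)
    x = (x + p) % (2 * math.pi)
    return x, p

def inverse(x, p):          # the exact algebraic inverse of forward()
    x = (x - p) % (2 * math.pi)
    p = p - K * math.sin(x)
    return x, p

x, p, n = 1.0, 0.5, 60
for _ in range(n):
    x, p = forward(x, p)
for _ in range(n):
    x, p = inverse(x, p)
print(abs(x - 1.0), abs(p - 0.5))   # O(1): amplified rounding broke reversibility
```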
Altogether I propose different resolutions of the problems which once troubled physics, resolutions which do not require basic principles of rationality and enlightenment to be abandoned. In many-minds physics, each actor/observer uses an individual classical perspective without any need of universality, like individual actors in a market economy.
PS Any similarity of the above picture with KTH-gate is purely coincidental.
3 comments:
1. No comments so far... Does this mean that nobody understands or has anyone been censored?
2. Well, maybe Anonyms will be censored in an open society just like hiding the face under a burka may be forbidden, in an open democratic society, or?
Also visit my blog ... plan |
a9a0d2ef4cb7818d | Publication Abstracts
Canuto and Kelly 1972
Canuto, V., and D.C. Kelly, 1972: Hydrogen atom in intense magnetic field. Astrophys. Space Sci., 17, 277-291, doi:10.1007/BF00642901.
The structure of a hydrogen atom situated in an intense magnetic field is investigated. Three approaches are employed. An elementary Bohr picture establishes a crucial magnetic field strength, Ha ≃ 5 × 10^9 G. Fields in excess of Ha are intense in that they are able to modify the characteristic atomic scales of length and binding energy. A second approach solves the Schrödinger equation by a combination of variational methods and perturbation theory. It yields analytic expressions for the wave functions and energy eigenvalues. A third approach determines the energy eigenvalues by reducing the Schrödinger equation to a one-dimensional wave equation, which is then solved numerically. Energy eigenvalues are tabulated for field strengths of 2 × 10^10 G and 2 × 10^12 G. It is found that at 2 × 10^12 G the lowest energy eigenvalue is changed from -13.6 eV to about -180 eV, in agreement with previous variational computations.
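The order of magnitude of Ha can be recovered from the condition that the magnetic length equals the Bohr radius (a rough back-of-the-envelope check; definitions of the critical field differ by factors of order 2, which is why the result comes out near, rather than exactly at, 5 × 10^9 G):

```python
import math

hbar, e, m_e, eps0 = 1.0546e-34, 1.6022e-19, 9.109e-31, 8.854e-12
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius, ~5.29e-11 m
B = hbar / (e * a0**2)                             # field where magnetic length = a0
print(f"a0 = {a0:.3e} m, B = {B:.2e} T = {B * 1e4:.2e} G")  # ~2.4e9 G
```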
Export citation: [ BibTeX ] [ RIS ]
BibTeX Citation
@article{ca04110l,
  author = {Canuto, V. and Kelly, D. C.},
  title = {Hydrogen atom in intense magnetic field},
  journal = {Astrophys. Space Sci.},
  year = {1972},
  volume = {17},
  pages = {277--291},
  doi = {10.1007/BF00642901},
}
RIS Citation
TY - JOUR
ID - ca04110l
AU - Canuto, V.
AU - Kelly, D. C.
PY - 1972
TI - Hydrogen atom in intense magnetic field
JA - Astrophys. Space Sci.
VL - 17
SP - 277
EP - 291
DO - 10.1007/BF00642901
ER -
|
e15ccf8f1029de70 | IPACS Electronic library
The Role of Unbound Wavefunctions in Energy Quantization and Quantum Bifurcation
Kuo Chung-Hsuan, Yang Ciann-Dong
The energy eigenvalues of a confined quantum system are traditionally determined by solving the time-independent Schrödinger equation for square-integrable solutions. The resulting bound solutions give rise to the well-known phenomenon of energy quantization, but the role of unbound solutions, which are not square integrable, is still unknown. In this paper, we relax the square-integrability condition and consider a general solution to the Schrödinger equation, which contains both bound and unbound wavefunctions. With the participation of unbound wavefunctions, we derive universal quantization laws from the discrete change in the number of zeros of the general wavefunction, and we propose a quantum dynamic description of energy quantization, in terms of which a new phenomenon regarding the synchronization between energy quantization and quantum bifurcation is revealed.
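The idea can be illustrated with a simple numerical experiment (a sketch, not code from the paper, using a harmonic potential with ħ = m = ω = 1): integrating the stationary Schrödinger equation from one side at an arbitrary energy E generally yields an unbound, non-normalisable solution, but the number of its zeros jumps by one exactly as E crosses an eigenvalue (0.5, 1.5, 2.5, ...):

```python
import numpy as np

def nodes(E, xmax=6.0, n=4000):
    """Integrate -psi''/2 + (x^2/2) psi = E psi from the left (generally an
    unbound, non-normalisable solution) and count its sign changes."""
    x = np.linspace(-xmax, xmax, n)
    h = x[1] - x[0]
    psi = np.zeros(n)
    psi[0], psi[1] = 0.0, 1e-6
    for i in range(1, n - 1):
        psi[i + 1] = 2 * psi[i] - psi[i - 1] + 2 * h * h * (0.5 * x[i]**2 - E) * psi[i]
    s = np.sign(psi[1:])
    return int(np.count_nonzero(s[:-1] != s[1:]))

for E in (0.3, 0.7, 1.7, 2.7):      # eigenvalues sit at 0.5, 1.5, 2.5, ...
    print(E, nodes(E))              # node count jumps as E crosses each eigenvalue
```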
|
a31a6b3bd0bdaa52 | Ocean Sci., 14, 453-470, 2018
© Author(s) 2018. This work is distributed under the Creative Commons Attribution 4.0 License.
Research article | 11 Jun 2018
Numerical modeling of surface wave development under the action of wind
Dmitry Chalikov1,2,3
• 1Shirshov Institute of Oceanology, Saint Petersburg 199053, Russia
• 2Russian State Hydrometeorological University, Saint Petersburg 195196, Russia
• 3University of Melbourne, Victoria 3010, Australia
Abstract
The numerical modeling of two-dimensional surface wave development under the action of wind is performed. The model is based on three-dimensional equations of potential motion with a free surface written in a surface-following nonorthogonal curvilinear coordinate system in which depth is counted from a moving surface. A three-dimensional Poisson equation for the velocity potential is solved iteratively. A Fourier transform method, a second-order accuracy approximation of vertical derivatives on a stretched vertical grid and fourth-order Runge–Kutta time stepping are used. Both the input energy to waves and dissipation of wave energy are calculated on the basis of earlier developed and validated algorithms. A one-processor version of the model for PC allows us to simulate an evolution of the wave field with thousands of degrees of freedom over thousands of wave periods. A long-time evolution of a two-dimensional wave structure is illustrated by the spectra of wave surface and the input and output of energy.
1 Introduction
The phase-resolving modeling of sea waves is the mathematical modeling of surface waves including explicit simulation of the evolution of the surface elevation and the velocity field. As compared with spectral wave modeling, phase-resolving modeling is more general since it reproduces a real, visible physical process and is based on well-formulated full equations. Phase-resolving models usually operate with a large number of degrees of freedom. In general, this method is more complicated and requires more computational resources. The simplest approach of this kind is to calculate the wave field evolution from linear equations. Such an approach allows the reproduction of the main effects of linear wave transformation due to the superposition of wave modes, reflections, refractions, etc. This approach is useful for many technical applications, but it cannot reproduce the nonlinear nature of waves or the transformation of the wave field due to nonlinearity. Another relatively simple case is that of shallow-water waves. The nonlinearity can be taken into account in more sophisticated models derived from the fundamental fluid mechanics equations with some simplifications. The most popular approach is based on a nonlinear Schrödinger equation of different orders (see Dysthe, 1979) obtained by expansion of the surface wave displacement. This approach is also used for solving the problem of freak waves. The main advantage of a simplified approach is that it allows the reduction of a three-dimensional (3-D) problem to a two-dimensional one (or of a 2-D problem to a 1-D problem). However, it is not always clear which of the nonrealistic effects is eliminated or included in the model after simplifications. This is why the most general approach being developed over the past years is based on the initial two-dimensional or three-dimensional equations (still potential). All the tasks based on these equations can be divided into two groups: the periodic and nonperiodic problems. An assumption of periodicity considerably simplifies the construction of the numerical models, though such a formulation can be applied only to the cases when the condition of periodicity is acceptable, for example, when the domain can be considered a small part of a large uniform area. For limited domains with no periodicity the problem becomes more complicated since the Fourier representation cannot be used directly.
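To make the linear approach concrete, here is a minimal sketch (the grid, the domain length and the two mode amplitudes are arbitrary assumed values, not taken from any of the cited models): each Fourier component of a deep-water wave field advances with its own phase speed given by the dispersion relation ω = (gk)^1/2, and the free surface is simply their superposition:

```python
import numpy as np

g, L, N = 9.81, 1000.0, 256
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)       # wavenumbers of the rfft modes
omega = np.sqrt(g * k)                            # deep-water dispersion relation
amp = np.zeros_like(k)
amp[3], amp[7] = 0.5, 0.2                         # two assumed wave components (metres)

def eta(t):
    """Surface elevation: each linear mode advances with its own phase."""
    return np.fft.irfft(amp * (N / 2) * np.exp(-1j * omega * t), n=N)

print(eta(0.0)[:4])
print(eta(60.0)[:4])
```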
From the point of view of physics, the problem of phase-resolving modeling can be divided into two groups: adiabatic and nonadiabatic modeling. A simple adiabatic model assumes that the process develops with no input or output of energy. While not completely free of limitations, such a formulation allows the investigation of wave motion on the basis of the true initial equations. Including the effects of energy input and dissipation is always connected with assumptions that generally contradict the assumption of potentiality, i.e., the new terms added to the equations should be regarded as purely phenomenological. This is why the treatment of a nonadiabatic approach is often based on quite different constructions.
All of the phase-resolving models use the methods of computational mathematics and inherit all their advantages and disadvantages; i.e., on the one side, there is the possibility of a detailed description of the processes, and on the other side, there are a number of specific problems connected with computational stability and with space and time resolution. The mathematical modeling produces tremendous volumes of information, the processing of which can be more complicated than the modeling itself.
The phase-resolving wave modeling takes a lot of computer time since it normally uses a surface-following coordinate system, which considerably complicates the equations. The most time-consuming part of the model is the elliptic equation for the velocity potential, usually solved by iteration. Luckily, for a two-dimensional problem this trouble is completely eliminated by the use of conformal coordinates, reducing the problem to a one-dimensional system of equations which can be solved with high accuracy (Chalikov and Sheinin, 1998). For a three-dimensional problem, the reduction to a two-dimensional form is evidently impossible; hence, the solution of a 3-D elliptic equation for the velocity potential becomes an essential part of the entire problem. This equation is quite similar to the equation for pressure in a nonpotential problem. It follows that the 3-D Euler equations, being more complicated, can still be solved over an acceptable computer time.
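To give an idea of why this part dominates the cost (a deliberately crude toy, not the solver of any model discussed here; the grid size and boundary data are invented), even the simplest Jacobi iteration for a Laplace-type equation for the velocity potential must sweep the entire grid many times at every time step:

```python
import numpy as np

nx, nz = 64, 32
phi = np.zeros((nx, nz))
phi[:, -1] = np.sin(np.linspace(0.0, 2.0 * np.pi, nx))   # assumed surface boundary data

for it in range(20000):                # Jacobi sweeps until the update stalls
    new = phi.copy()
    new[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2])
    if np.max(np.abs(new - phi)) < 1e-8:
        break
    phi = new
print("iterations used:", it + 1)
```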
There is a large volume of papers devoted to the numerical methods developed for the investigation of wave processes over the past decades. It includes a finite-difference method (Engsig-Karup et al., 2009, 2012), a finite-volume method (Causon et al., 2010), a finite-element method (Ma and Yan, 2010; Greaves, 2010), a boundary (integral) element method (Grue and Fructus, 2010), and spectral methods (Ducrozet et al., 2007, 2012, 2016; Touboul and Kharif, 2010; Bonnefoy et al., 2010). These include a smoothed-particle hydrodynamics method (Dalrymple et al., 2010), a large-eddy simulation (LES) method (Issa et al., 2010; Lubin and Caltagirone, 2010), a moving particle semi-implicit method (Kim et al., 2014), a constrained interpolation profile method (Zhao, 2016), a method of fundamental solutions (Young et al., 2010) and a meshless local Petrov–Galerkin method (Ma and Yan, 2010). A fully nonlinear model should be applied to many problems. Most of the models were designed for engineering applications such as overturning waves, broken waves, waves generated by landslides, freak waves, solitary waves, tsunamis, violent sloshing waves, an interaction of extreme waves with beaches and an interaction of steep waves with the fixed structures or with different floating structures. The references given above make up less than 1 % of the publications on those topics.
A two-dimensional approach (like a conformal method) considers a strongly idealized wave field since even monochromatic waves in the presence of lateral disturbances quickly obtain a two-dimensional structure. The difficulty arising is not a direct result of the increase in the dimension. The fundamental complication is that the problem cannot be reduced to a two-dimensional problem, and even for the case of a double-periodic wave field, the problem of solution of a Laplace-like equation for the velocity potential arises. The majority of the models designed for investigation of the three-dimensional wave dynamics are based on simplified equations such as the second-order perturbation methods in which the higher-order terms are ignored. Overall, it is unclear which effects are missing in such simplified models.
The most sophisticated method is based on the full three-dimensional equations and surface integral formulations (Beale, 2001; Xue et al., 2001; Grilli et al., 2001; Clamond and Grue, 2001; Clamond et al., 2005, 2006; Fructus et al., 2005; Guyenne et al., 2006; Fochesato et al., 2006). A fully nonlinear model of three-dimensional water waves extends an approach suggested by Craig and Sulem (1993), originally given in a two-dimensional setting. The model is based upon the Hamiltonian formulation (Zakharov, 1968), which allows the reduction of the problem to the computation of surface variables by introducing a Dirichlet–Neumann operator, which is expressed in terms of its Taylor series expansion in homogeneous powers of surface elevation. Each term in this Taylor series can be obtained from a recursion formula and efficiently computed using a fast Fourier transform.
The main advantage of the boundary integral equation methods (BIEMs) is that they are accurate and can describe highly nonlinear waves. A method of solution of the Laplace equation is based on the use of Green's function, which allows us to reduce a 3-D water wave problem to a 2-D boundary integral problem. The surface integral method is well suited for simulation of the wave effects connected with very large steepness, specifically, for investigation of the freak wave generation. These methods can be applied both to the periodic and nonperiodic flows. The methods do not impose any limitations on the wave steepness; thus they can be used for simulation of the waves that even approach breaking (Grilli et al., 2001), when the surface becomes non-single-valued. The method allows us to take into account the bottom topography (Grue and Fructus, 2010) and investigate an interaction of waves with the fixed structures or with the freely responding floating structures (Liu et al., 2016; Gou et al., 2016).
However, the BIEM seems to be quite complicated and time consuming when applied to the long-term evolution of a multimode wave field in large domains. The simulation of the relatively simple wave fields illustrates an application of the method, and it is unlikely that the method can be applied to the simulation of the long-term evolution of a large-scale multimode wave field with a broad spectrum. An implementation of a multipole technique for a general problem of the sea wave simulation (Fochesato et al., 2006) can solve the problem but obviously leads to considerable algorithmic difficulties.
Currently, the most popular approach in oceanography is the HOS (high-order spectral) model developed by Dommermuth and Yue (1987) and West et al. (1987). The HOS model is based on a paper by Zakharov (1968) in which a convenient form of the dynamic and kinematic surface conditions was suggested. The equations used by Zakharov were not intended for modeling, but rather for the investigation of the stability of finite-amplitude waves. In fact, a system of coordinates in which depth is counted from the surface was used, but the Laplace equation for the velocity potential was taken in its traditional form. However, Zakharov's followers have accepted this idea literally. They used two coordinate systems: a curvilinear surface-fitting system for the surface conditions and the Cartesian system for calculation of the surface vertical velocity. An analytical solution for the velocity potential in the Cartesian coordinate system is known. It is based on the Fourier coefficients on a fixed level, while the true variables are the Fourier coefficients for the potential on a free surface. Here a problem of transition from one coordinate system to another arises. This problem is solved by expansion of the surface potential into a Taylor series in the vicinity of the surface. The accuracy of this method depends on that of the representation of the exponential function with a finite number of Taylor series terms. For small-amplitude waves and for a narrow wave spectrum, such accuracy is evidently satisfactory. However, for the case of a broad wave spectrum that contains many wave modes, the order of the Taylor series should be high. The problem is that waves with high wave numbers are superposed on the surface of larger waves. Since the amplitudes of the surface potential attenuate exponentially, the amplitude of a small wave increases at positive elevations and, conversely, can approach zero at negative elevations. It is clear that such a setting of the HOS model cannot reproduce high-frequency waves, which actually reduces the nonlinearity of the model. This is why such a model can be integrated for long periods using no high-frequency smoothing. In addition, the accuracy of the calculation of the vertical velocity on the surface depends on the full elevation at each point. Hence, the accuracy is not uniform along the wave profile. A substantial extension of the Taylor series can definitely result in numerical instability due to the occasional amplification of modes with high wave numbers. The authors of a surface integral method share a similar point of view (Clamond et al., 2005). We should note, however, that a comparison of the HOS method based on the West et al. (1987) approach with a surface integral method for an idealized wave field (Clamond et al., 2006) shows quite acceptable results. It was shown in the latter paper that the method suggested by Dommermuth and Yue (1987) demonstrates poorer convergence of the expansion for the vertical velocity than the method of West et al. (1987). The HOS model has been widely used (for example, Tanaka, 2001; Toffoli et al., 2010; Touboul and Kharif, 2010), and it has shown its ability to efficiently simulate the wave evolution (propagation, nonlinear wave–wave interactions, etc.) in a large-scale domain (Ducrozet et al., 2007, 2012). It is obvious that the HOS model can be used for many practical purposes.
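The accuracy argument above is easy to quantify with a few lines (an illustrative calculation, not taken from the cited papers): a Taylor series for the vertical attenuation factor e^(k*eta) truncated at order M reproduces it well only while k*eta is small, and degrades quickly for high-wavenumber modes riding on large elevations:

```python
import math

def taylor_exp(z, M):
    """Taylor series of exp(z) truncated at order M."""
    return sum(z**m / math.factorial(m) for m in range(M + 1))

for keta in (0.1, 0.5, 1.0, 2.0):      # k*eta: a high-k mode riding on elevation eta
    exact = math.exp(keta)
    errs = ["%.1e" % (abs(taylor_exp(keta, M) - exact) / exact) for M in (2, 4, 8)]
    print(f"k*eta = {keta:3.1f}   relative error at M = 2, 4, 8: " + "  ".join(errs))
```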
Recently, Ecole Centrale Nantes, LHEEA Laboratory (CNRS) announced that the nonlinear wave models based on HOS have been published as open source (, last access: 6 June 2018).
In contrast to the HOS method, which is based on the analytical solution of the Laplace equation in Cartesian coordinates, a group of models is based on a direct solution of the equation for the velocity potential in curvilinear coordinates (Engsig-Karup et al., 2009, 2012; Chalikov et al., 2014). The main advantage of a surface-following coordinate system is that a variable surface is mapped onto a fixed plane. Since the wave motion is very conservative, highly accurate numerical schemes should be used for a good description of the nonlinearity and spectrum transformation. This most universal approach is being developed at the Technical University of Denmark (TUD) (see Engsig-Karup, 2009). The ModelWave3D models developed at TUD are targeted at the solution of a variety of problems, including the modeling of wave interaction with submerged objects as well as the simulation of the wave regime in basins with a real shape and topography.
The model is based on the equations of a potential flow with a free surface. An effect of variable bathymetry is taken into account by using the so-called σ coordinate (straightening out the bottom and surface). At vertical surfaces a normal derivative of the velocity potential is equal to zero. A flexible-order approximation for spatial derivatives is used. The most time-consuming part of this model is a 3-D equation for the velocity potential. The strategy of the model development is directed at exploiting the architectural features of modern graphics processing units for mixed-precision computations. This approach is tested using a recently developed generic library for fast prototyping of PDE (partial differential equation) solvers. The new wave tool is applicable for solving and analyzing a variety of large-scale wave problems in coastal and offshore engineering. A description of the project and references can be found at the site (last access: 6 June 2018).
A comparison of ModelWave3D with a HOS model was presented by Ducrozet et al. (2012). It was shown that both models demonstrate high accuracy, while the HOS model shows a better performance. Note that the comparison of the speed of the models in this case is irrelevant, since ModelWave3D was designed for investigation of complicated processes, taking into account the real shape of a basin, variable depth and even the presence of engineering structures. All these features are obviously not included in the HOS model.
The development of waves under the action of wind is a process that is difficult to simulate, since surface waves are very conservative and change their energy over hundreds and thousands of wave periods. This is why the most popular method is spectral modeling. Waves as physical objects are actually absent in this approach, since it simulates an evolution of the spectral distribution of wave energy. The description of input and dissipation in this approach is not directly connected with the formulation of the problem; rather, it is adopted from other branches of wave theory in which waves themselves are the objects of investigation. However, the spectral approach was found to be the only method capable of describing the space and time evolution of the wave field in the ocean. The phase-resolving models (or “direct” models) designed for reproducing the waves themselves cannot compete with the spectral models, since the typical size of the domain in such models does not exceed several kilometers. Such a domain includes just several thousands of large waves. Nevertheless, direct wave modeling plays an ever-increasing role in geophysical fluid dynamics because it gives the possibility of investigating the processes which cannot be reproduced with spectral models. One such problem is that of extreme wave generation (Chalikov, 2009; Chalikov and Babanin, 2016a). Direct modeling is also a perfect instrument for the development of parameterizations of physical processes for spectral wave models. In addition, such models can be used for direct simulation of wave regimes in small water basins, for example, port harbors. Other approaches to direct modeling are discussed in Chalikov et al. (2014) and Chalikov (2016).
Until recently, direct modeling was used for reproduction of a quasi-stationary wave regime in which the wave spectrum does not change significantly. A unique example of the direct numerical modeling of surface wave evolution is given in Chalikov and Babanin (2014), in which the development of a wave field was calculated with the use of a two-dimensional model based on the full potential equations written in conformal coordinates. The model included the algorithms for parameterization of the input and dissipation of energy (a description of similar algorithms is given below). The model successfully reproduced an evolution of the wave spectrum under the action of wind. However, strictly one-dimensional (unidirectional) waves are not realistic; hence, the full problem of wave evolution should be formulated on the basis of the three-dimensional equations. An example of such modeling is given in the current paper.
2 Equations
Let us introduce a nonstationary surface-following nonorthogonal coordinate system:

ξ = x, ϑ = y, ζ = z − η(ξ, ϑ, τ), τ = t,(1)

where η(x,y,t) = η(ξ,ϑ,τ) is a moving periodic wave surface given by the Fourier series

η(ξ, ϑ, τ) = Σ_{k=−Mx..Mx} Σ_{l=−My..My} h_{k,l}(τ) Θ_{k,l}(ξ, ϑ),(2)

where k and l are the components of the wave number vector k, h_{k,l}(τ) are the Fourier amplitudes of the elevations η(ξ,ϑ,τ), Mx and My are the numbers of modes in the directions ξ and ϑ, respectively, and Θ_{k,l} are the Fourier expansion basis functions, i.e., the combinations of cos(kξ + lϑ) and sin(kξ + lϑ) arranged as a matrix according to the signs of k and l.
The 3-D equations of potential waves in the system of coordinates (1) at ζ ≤ 0 take the following form:

η_τ = −η_ξ φ_ξ − η_ϑ φ_ϑ + (1 + η_ξ² + η_ϑ²) Φ_ζ,(4)

φ_τ = −(1/2)(φ_ξ² + φ_ϑ²) + (1/2)(1 + η_ξ² + η_ϑ²) Φ_ζ² − η − p,(5)

Φ_ξξ + Φ_ϑϑ + Φ_ζζ = Υ(Φ),(6)

where Υ is the operator:

Υ( ) = 2η_ξ ∂²( )/∂ξ∂ζ + 2η_ϑ ∂²( )/∂ϑ∂ζ + (η_ξξ + η_ϑϑ) ∂( )/∂ζ − (η_ξ² + η_ϑ²) ∂²( )/∂ζ².
Capital Φ is used for the domain ζ < 0, while the lower case φ refers to its value at the surface ζ = 0. The term p in Eq. (5) describes the pressure on the surface ζ = 0.
It is suggested in Chalikov et al. (2014) that it is convenient to represent the velocity potential Φ as a sum of two components, i.e., an analytical (linear) component Φ̄ (with the surface value φ̄ = Φ̄(ξ,ϑ,0)) and an arbitrary (nonlinear) component Φ̃ (φ̃ = Φ̃(ξ,ϑ,0)):

Φ = Φ̄ + Φ̃.(7)

The analytical component Φ̄ satisfies the Laplace equation

Φ̄_ξξ + Φ̄_ϑϑ + Φ̄_ζζ = 0(8)

with the known solution

Φ̄(ξ, ϑ, ζ, τ) = Σ_{k,l} φ̄_{k,l}(τ) exp(|k|ζ) Θ_{k,l}(ξ, ϑ)(9)

(where |k| = (k² + l²)^{1/2} and φ̄_{k,l} are the Fourier coefficients of the surface analytical potential φ̄ at ζ = 0). The solution satisfies the boundary conditions Φ̄ = φ̄ at ζ = 0 and attenuation of all modes at ζ → −∞.
The nonlinear component satisfies the equation

Φ̃_ξξ + Φ̃_ϑϑ + Φ̃_ζζ = Υ(Φ̄ + Φ̃).(12)

Equation (12) is solved with the boundary conditions Φ̃ = 0 at ζ = 0 and Φ̃_ζ → 0 at ζ → −∞.
The derivatives of the linear component Φ̄ in Eq. (7) are calculated analytically. The scheme combines a 2-D Fourier transform method in the “horizontal surfaces” and a second-order finite-difference approximation on a stretched staggered grid defined by the relation Δζ_{j+1} = χΔζ_j (Δζ is a vertical step, while j = 1 at the surface). A stretched grid provides an increase in the accuracy of approximation for the exponentially decaying modes. The values of the stretching coefficient χ lie within the interval 1.01–1.20. A finite-difference second-order approximation of the vertical operators in Eq. (12) on a nonuniform vertical grid is quite straightforward. Equation (12) is solved as a Poisson equation, with iterations over the right-hand side. At each time step, the iterations start from the right-hand side calculated at the previous time step. The initial elevation was generated as a superposition of linear waves corresponding to a JONSWAP spectrum (Hasselmann et al., 1973) with random phases. The initial Fourier amplitudes for the surface potential were calculated using the formulas of the linear wave theory. A detailed description of the scheme and its validation is given in Chalikov et al. (2014) and Chalikov (2016).
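To make the scheme above concrete, the following Python sketch (our own illustration, not the authors' code) generates a random-phase initial field with a toy spectral shape standing in for JONSWAP, assigns the surface potential from the linear theory and evaluates the surface vertical derivative of the analytical component spectrally; the spectral shape, the normalization, the Hermitian symmetry of the spectrum and the sign conventions are all simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Mx, My = 256, 128                    # numbers of Fourier modes (Sect. 4)
Nx, Ny = 1024, 512                   # collocation grid (Sect. 4)

kx = np.fft.fftfreq(Nx, d=1.0 / Nx)  # integer wave numbers k
ky = np.fft.fftfreq(Ny, d=1.0 / Ny)  # integer wave numbers l
K = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
Ks = np.maximum(K, 1e-12)            # guard against division by zero

# Toy spectral amplitudes peaked near kp = 100 with random phases; modes
# outside the resolved ranges |k| <= Mx, |l| <= My are kept empty.
kp = 100.0
mask = (K > 0) & (np.abs(kx[:, None]) <= Mx) & (np.abs(ky[None, :]) <= My)
amp = np.where(mask, np.exp(-1.25 * (kp / Ks) ** 2) * Ks ** -3.5, 0.0)
eta_hat = amp * np.exp(2j * np.pi * rng.random((Nx, Ny)))
eta = np.fft.ifft2(eta_hat).real     # take the real part for a real surface

# Linear theory in the nondimensional units of the paper (g = 1,
# omega = |k|^(1/2)): the surface potential amplitude is eta_hat / omega,
# with a 90-degree phase shift relative to the elevation.
omega = np.sqrt(Ks)
phi_hat = np.where(mask, 1j * eta_hat / omega, 0.0)

# Analytical component: Phi_bar ~ exp(|k| * zeta), so its vertical
# derivative on the surface is obtained spectrally as |k| * phi_hat.
w_surface = np.fft.ifft2(K * phi_hat).real

# Stretched vertical grid for the nonlinear component:
# dzeta_{j+1} = chi * dzeta_j, with chi = 1.2 and Lw = 10 levels (Sect. 4);
# the first step dz0 is our illustrative choice.
chi, Lw, dz0 = 1.2, 10, 0.01
zeta = -np.concatenate(([0.0], np.cumsum(dz0 * chi ** np.arange(Lw))))
```

In the actual model the exact JONSWAP form with its peak enhancement is used, and the fields are strictly real by construction.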
Equations (4)–(6) are written in a nondimensional form by using the following scales: length L, where 2πL is the (dimensional) period in the horizontal direction; time L^{1/2}g^{−1/2}; and the velocity potential L^{3/2}g^{1/2} (g is the acceleration of gravity). The pressure is normalized by the water density, so that the pressure scale is Lg. Equations (4)–(6) are self-similar with respect to the transformation of L. The dimensional size of the domain is 2πL, so the scaled size is 2π. All of the results presented in this paper are nondimensional. Note that the number of the Fourier modes can be different in the x and y directions. In this case it is assumed that two length scales, Lx and Ly, are used. The nondimensional length of the domain in the y direction remains equal to 2π, and the factor r = Lx/Ly is introduced into the definition of the differential operators in the Fourier space.
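For orientation, a worked dimensional example (our own numbers, not from the paper): taking a dimensional domain of 2πL ≈ 628 m gives L = 100 m; the time scale is then L^{1/2}g^{−1/2} = (100/9.81)^{1/2} ≈ 3.2 s, the velocity potential scale is L^{3/2}g^{1/2} ≈ 3.1 × 10³ m² s⁻¹, and the pressure scale is Lg ≈ 9.8 × 10² m² s⁻². At this scale, the nondimensional integration time t = 350 mentioned below corresponds to roughly 350 × 3.2 s ≈ 19 min of physical time.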
3 Energy input and dissipation
The energy input to waves is described by the pressure term p in the dynamic boundary condition (Eq. 5). The tangential stress on the surface cannot be taken into account in the potential formulation. The dissipation cannot be described with the potential equations either, but for a realistic description of wave dynamics the dissipation of wave energy should be taken into account, i.e., we should include additional terms in Eqs. (4) and (5), which, strictly speaking, contradicts the assumption of potentiality.
3.1 Energy input from wind
According to the linear theory (Miles, 1957), the Fourier components of the surface pressure p are connected with those of the surface elevation through the following expression:

p_{k,l} + i p_{−k,−l} = (ρ_a/ρ_w)(β_{k,l} + i β_{−k,−l})(h_{k,l} + i h_{−k,−l}),(14)
where h_{k,l}, h_{−k,−l} and β_{k,l}, β_{−k,−l} are the real and imaginary parts of the elevation η and of the so-called β function (i.e., the Fourier coefficients at COS and SIN, respectively), and ρ_a/ρ_w is the ratio of the air and water densities. Equation (14) is a standard presentation of the pressure above a multimode surface. It means that every wave mode with amplitude (h_{k,l}² + h_{−k,−l}²)^{1/2} initiates the pressure mode with amplitude (p_{k,l}² + p_{−k,−l}²)^{1/2} shifted off the phase of the wave mode by the angle α = arctan(β_{−k,−l}/β_{k,l}). Both coefficients are functions of the ratio of the wind velocity at the height of one-half the mode length, λ_{k,l}/2, to the virtual phase velocity. Hence, for derivation of the shape of the β function it is necessary to simultaneously measure the wave surface elevation and the nonstatic pressure on the surface. Measurement of the surface pressure is a very difficult problem, since the measurements should be carried out very close to a moving surface, preferably with a surface-following sensor. Such measurements are performed quite seldom, especially in the field. The measurements were carried out in both the laboratory and the field (Snyder et al., 1981; Hsiao and Shemdin, 1983; Hasselmann and Bösenberg, 1991; Donelan et al., 2005, 2006). The data obtained in this way allowed the construction of the imaginary part of the β function used in some versions of wave forecasting models (Rogers et al., 2012). Such measurements and their processing are quite complicated, since the wave-produced pressure fluctuations are masked by the turbulent pressure fluctuations. The second method of the β function evaluation is based on the results of numerical investigations of the statistical structure of the boundary layer above waves with the use of the Reynolds equations and an appropriate closure scheme. In general, this method works so well that many problems in technical fluid mechanics are often solved with numerical models rather than experimentally (Gent and Taylor, 1976; Riley et al., 1982; Al-Zanaidi and Hui, 1984). This method has been developed beginning from Chalikov (1978, 1986), followed by Chalikov and Makin (1991), Chalikov and Belevich (1992) and Chalikov (1995). The results were implemented in the WAVEWATCH model, i.e., a third-generation wave forecast model (Tolman and Chalikov, 1996), and thoroughly validated against the experimental data in the course of developing WAVEWATCH III (Tolman et al., 2014). This method was later improved on the basis of a more advanced coupled modeling of waves and the boundary layer (Chalikov and Rainchik, 2010), while the β function used in WAVEWATCH III was corrected and extended up to high frequencies. A direct calculation of the energy input to waves requires both the real and imaginary parts of the β function. The total energy input to waves depends on the imaginary part of the β function, while the moments of a higher order depend on both the imaginary and real parts of β. This is why the full approximation constructed in Chalikov and Rainchik (2010) was used in the current work. Note that in the range of relatively low frequencies the new method is very close to the scheme implemented in WAVEWATCH III.
It is traditionally suggested that both coefficients are functions of the virtual nondimensional frequency Ω = ω_{k,l} U cos ψ = (U/c_{k,l}) cos ψ (where ω_{k,l} and U are the nondimensional radian frequency and wind speed, respectively, c_{k,l} is the phase speed of the (k,l) mode, and ψ is the angle between the wind and the wave mode direction). Most of the schemes for calculations of the β function consider a relatively narrow interval of the nondimensional frequencies Ω. In the current work, the range of frequencies covers the interval 0 < Ω < 10, and occasionally values of Ω > 10 can appear. This is another reason why the function derived in Chalikov and Rainchik (2010) through the coupled simulations of waves and the boundary layer is used here. The wave model is based on the potential equations for a flow with a free surface, extended with an algorithm for the breaking dissipation (see below a description of the breaking dissipation parameterization). The wave boundary layer (WBL) model is based on the Reynolds equations closed with a K–ε scheme; the solutions for air and water are matched through the interface. The β function presentation was also used for evaluating the accuracy of the surface pressure p calculations. The shape of the β function connecting the surface elevation and the surface pressure was studied up to high nondimensional wave frequencies in both the positive and the negative (i.e., for wind opposite to waves) domains. The data on the β function exhibit wide scatter, but since the volume of the data was quite large (47 long-term numerical runs allowed us to generate about 1 400 000 values of β), the shape of the β function was defined with satisfactory accuracy up to very high nondimensional frequencies, −50 < Ω < 50. As a result, the data on the β function in such a broad range allow us to calculate the wave drag up to very high frequencies and to explicitly divide the fluxes of energy and momentum transferred by the pressure and by the molecular viscosity. This method is free of arbitrary assumptions on a drag coefficient C_d; conversely, such calculations allow the investigation of the nature of the wave drag (see Ting et al., 2012).
The most reliable data on the β function are concentrated in the interval −10 < Ω < 10 (negative values of Ω correspond to the wave modes running against the wind). The real and imaginary parts of the β function are shown in Fig. 1. It is a corrected version of the approximation given in Chalikov and Rainchik (2010), in which the data at negative Ω were interpreted erroneously. In the current calculations the modes running against the wind are absent. The function β can be approximated by analytical formulas with the coefficients a₀ = 0.02277, a₁ = 0.09476, a₂ = −0.3718, a₃ = 14.80, β₀ = −0.02, β₁ = −148, Ω₀ = 0.02277, Ω₁ = 1.20, Ω₂ = −18.8 and Ω₃ = 21.2.
It was indicated above that the initial wave field is assigned as a superposition of linear modes whose amplitudes are calculated with a JONSWAP spectrum with the initial peak wave number k_{p0} = 100. An initial value U/c_{p0} = 6 was chosen; i.e., the ratio of the nondimensional wind speed at the height of one-half the initial peak wavelength, λ₀/2 = 2π/100, to the phase speed c_{p0} = k_{p0}^{−1/2} is equal to 6. Such a high ratio corresponds to the initial stages of wave development. The wind velocity 6c_{p0} remains constant throughout the integration. The values of Ω for the other wave numbers are calculated by assuming that the wind profile is logarithmic:

U(z) = U(λ_{p0}/2) ln(z/z_{00}) / ln(λ_{p0}/(2z_{00})),(17)
where z_{00} is the effective nondimensional roughness for the initial wind profile, while z₀ is the actual roughness parameter that depends on the energy in the high-frequency part of the spectrum and on the wind profile. We call it “effective” since very close to the surface the wind profile is not logarithmic (Chalikov, 1995; Tolman and Chalikov, 1996; Chalikov and Rainchik, 2010). The value of this parameter depends on the wind velocity and the energy in the high-wave-number interval of the wave spectrum, as well as on the length scale of the problem. All these effects can be included by matching the wave model with a one-dimensional WBL model (Ting et al., 2012). Here, a simplified scheme for the roughness parameter is chosen. It is well known that the roughness parameter (as well as the drag coefficient) increases with a decrease in the wave age. In our case the wind speed is fixed, and the dependence for the nondimensional roughness parameter is constructed on the basis of the results obtained in Chalikov and Rainchik (2010):
where z_{00} = 10^{−3} is the initial value of the roughness parameter. Equation (18) approximates the dependence of the effective roughness at the stage of wave development. Note that the results are not sensitive to the variation of the roughness parameter within reasonable limits.
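As an illustration of this chain (a schematic Python sketch under our own assumptions, not the authors' code: the anchoring of the logarithmic profile at λ_{p0}/2 and the function names are ours, and beta_fun is only a placeholder for the approximation plotted in Fig. 1):

```python
import numpy as np

def beta_fun(Omega):
    """Placeholder for the beta-function approximation of Chalikov and
    Rainchik (2010); should return (Re beta, Im beta) for given Omega."""
    raise NotImplementedError  # substitute the published approximation here

kp0 = 100.0              # initial peak wave number
cp0 = kp0 ** -0.5        # nondimensional phase speed c = |k|^(-1/2) = 0.1
U_ref = 6.0 * cp0        # wind speed at z_ref = lambda_p0 / 2, so U/cp0 = 6
z_ref = np.pi / kp0      # lambda_p0 / 2 = (2 * pi / kp0) / 2
z00 = 1e-3               # effective roughness, initial value

def U_log(z):
    # Logarithmic wind profile anchored at (z_ref, U_ref); this particular
    # anchoring is our assumption, made only for illustration.
    return U_ref * np.log(z / z00) / np.log(z_ref / z00)

def Omega(K, psi):
    # Virtual nondimensional frequency: Omega = (U(lambda/2) / c) * cos(psi),
    # with c = |K|^(-1/2) and lambda / 2 = pi / |K|.
    c = K ** -0.5
    return U_log(np.pi / K) / c * np.cos(psi)

# Example: the peak mode aligned with the wind gives Omega = 6 by design.
print(Omega(np.array([100.0]), 0.0))   # -> [6.]
```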
Figure 1 Real (dashed curve) and imaginary (solid curve) parts of the β function.
3.2 High-wave-number energy dissipation
A nonlinear flux of energy directed to the small wave numbers produces the downshifting of the spectrum, while an opposite flux forms the shape of the spectral tail. The second process can produce an accumulation of energy near the “cut” wave number. Both processes become more intensive with an increase in the energy input. The growth of amplitudes at high wave numbers is followed by the growth of the local steepness and by numerical instability. This well-known phenomenon of numerical fluid mechanics is eliminated by the use of a highly selective filter simulating the nonlinear viscosity. To support stability, additional terms are included in the right-hand sides of Eqs. (4) and (5):

∂h_{k,l}/∂τ = E_{k,l} − μ_{k,l} h_{k,l},(19)

∂φ_{k,l}/∂τ = F_{k,l} − μ_{k,l} φ_{k,l},(20)

where E_{k,l} and F_{k,l} are the Fourier amplitudes of the right-hand sides of Eqs. (4) and (5), while the factor μ_{k,l} is calculated using the formula

μ_{k,l} = 0 at |k| ≤ k_d; μ_{k,l} = c_m (|k| − k_d)/(k_0 − k_d) at k_d < |k| < k_0; μ_{k,l} = c_m at |k| ≥ k_0,(21)

where k and l are the components of the wave number vector with modulus |k|, while the coefficients k_d and k_0 are defined by the expressions

k_0 = M_x M_y |k| (M_y² k² + M_x² l²)^{−1/2},(22)

k_d = d_m k_0,(23)
where c_m = 0.1 and d_m = 0.75. Expressions (21)–(23) can be interpreted in a straightforward way: the value of μ_{k,l} is equal to zero inside the ellipse with the semiaxes d_m M_x and d_m M_y; it then grows linearly with |k| up to the value c_m and is equal to c_m outside the outer ellipse. This method of filtration, which we call “tail dissipation”, was developed and validated with a conformal model by Chalikov and Sheinin (1998). The sensitivity of the results to the parameters in Eqs. (21)–(23) is not high. The aim of the algorithm is to support the smoothness and monotonicity of the wave spectrum within the high-wave-number range. Since the algorithm affects the amplitudes of small modes, it actually does not reduce the total energy, though it efficiently prevents the development of the numerical instability. Note that no long-term calculations can be performed without the tail dissipation eliminating the development of the numerical instability at high wave numbers.
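A minimal sketch of such a filter, under our reading of the description above (the linear growth is taken in the elliptical radius, and the outer ellipse is assumed to have the semiaxes M_x and M_y):

```python
import numpy as np

def tail_filter(kx, ky, Mx, My, cm=0.1, dm=0.75):
    # Elliptical radius: rho = dm on the inner ellipse (semiaxes dm*Mx,
    # dm*My) and rho = 1 on the outer one (semiaxes Mx, My; our assumption).
    rho = np.sqrt((kx / Mx) ** 2 + (ky / My) ** 2)
    # mu = 0 inside the inner ellipse, grows linearly to cm, stays cm outside.
    return cm * np.clip((rho - dm) / (1.0 - dm), 0.0, 1.0)

# Schematic use in the Fourier-space tendencies, as in Eqs. (19)-(20):
#   dh_{k,l}/dtau   = E_{k,l} - mu_{k,l} * h_{k,l}
#   dphi_{k,l}/dtau = F_{k,l} - mu_{k,l} * phi_{k,l}
```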
3.3 Dissipation due to wave breaking
The main process of wave dissipation is wave breaking. This process is taken into account in all the spectral wave forecasting models similar to WAVEWATCH (see Tolman and Chalikov, 1996). Since there are no individual waves in the spectral models, no local criteria of wave breaking can be formulated. This is why the breaking dissipation is represented in spectral models in a distorted form. Real breaking occurs in relatively narrow areas of the physical space; however, the spectral image of such breaking is stretched over the entire wave spectrum, while in reality the breaking decreases the height and energy of dominant waves. This contradiction occurs because the waves in spectral models are assumed to be linear, while in fact the breaking occurs in the physical space with a nonlinear sharp wave, usually composed of several modes. However, progress has gradually been made in spectral wave modeling over the past decade. It became clear that state-of-the-art wave models should account for the threshold behavior of the dominant wave breaking, i.e., waves will not break unless their steepness exceeds the threshold (Alves and Banner, 2003; Babanin et al., 2010).
The mechanics of wave breaking at a developed wave spectrum differs from that in a wave field represented by the few modes normally considered in many theoretical and laboratory investigations (e.g., Alberello et al., 2018). Since the breaking in laboratory conditions is initiated by a special assignment of amplitudes and phases, it cannot be similar to the breaking in natural conditions. To some degree the wave breaking is similar to the development of an extreme wave that appears suddenly with no pronounced prehistory (Chalikov and Babanin, 2016a, b). There are no signs of modulational instability in either phenomenon, which suggests a process of energy consumption from other modes. The evolution leading to the breaking or “freaking” seems just the opposite: the full energy of the main wave remains nearly constant, while the columnar energy is focused around the crest of this wave, which becomes sharper and unstable. Probably even more frequent cases of wave breaking and extreme wave appearance can be explained by a local superposition of several modes.
The instability of the interface leading to the breaking is an important and poorly developed problem of fluid mechanics. In general, this essentially nonlinear process should be investigated for a two-phase flow. Such an approach was demonstrated, for example, by Iafrati (2009). However, progress in solving this highly complicated problem is slow.
The problem of the breaking parameterization includes two points: (1) establishing a criterion of the breaking onset and (2) developing an algorithm of the breaking parameterization. The problem of breaking is discussed in detail in Babanin (2011). Chalikov and Babanin (2012) performed a numerical investigation of the processes leading to the breaking. It was found that a clear predictor of the breaking formulated in dynamical and geometrical terms probably does not exist. The most evident criterion of the breaking is the breaking itself, i.e., the process when some part of the upper portion of a sharp wave crest falls down. This process is usually followed by separation of the detached volume of liquid into the water and air phases. Unfortunately, there is no possibility of describing this process within the scope of the potential theory.
Some investigators suggest using the physical velocity approaching the rate of surface movement in the same direction as a criterion of the breaking onset. This is incorrect, since the kinematic boundary condition states that these quantities are exactly equal to each other. It is quite clear that the onset of breaking can be characterized by the appearance of a non-single-valued piece of surface. This stage can be investigated with a two-dimensional model, which, due to the high flexibility of the conformal coordinates, allows us to reproduce a surface with an inclination in the Cartesian coordinates exceeding 90 degrees. (In the conformal coordinates the dependence of elevation on the curvilinear coordinate is always single-valued.) The duration of this stage is extremely short, the calculations always being interrupted by the numerical instability with a sharp violation of the conservation laws (constancy of the integral invariants, i.e., the full energy and volume) and a strong distortion of the local structure of the flow. The numerous numerical experiments with a conformal model showed that after the appearance of a non-single value the model never returns to stability. However, the introduction of a non-single-valued surface as a criterion of the breaking instability even in a conformal model is impossible, since the behavior of the model at a critical point is unpredictable, and the run is most likely to be terminated, no matter what kind of parameterization of breaking is introduced. It means that even in a precise conformal model the stabilization of the solution should be initiated prior to the breaking.
A consideration of an exact criterion for the breaking onset is useless for the models using a transformation of the coordinates of the type of Eq. (1), since the numerical instability in such models arises not because of the approach of breaking but because of the appearance of a large local steepness. The multiple experiments with a direct 3-D wave model show that the appearance of the local steepness max(∂η/∂x, ∂η/∂y) exceeding ≈2 (which corresponds to a slope of about 60 degrees) is always followed by the numerical instability, but the instability can happen well before reaching this value. A decrease in the time step has no effect. As seen, a surface with such a slope is very far from being a vertical wall, when the real breaking starts. However, an algorithm for the breaking parameterization must prevent numerical instability. The situation is similar to the numerical modeling of turbulence (the LES technique), in which a local highly selective viscosity is used to prevent the appearance of too large local gradients of the velocity. A description of the breaking in the direct wave modeling should satisfy the following conditions. (1) It should prevent the onset of instability at each of half a million grid points over more than 100 thousand time steps. (2) It should describe in a more or less realistic way the loss of the kinetic and potential energies with preservation of the balance between them. (3) It should preserve the volume. It was suggested in Chalikov (2005) that an acceptable scheme can be based on a local highly selective diffusion operator with a special diffusion coefficient. Several schemes of such a type were validated, and finally the following scheme was chosen:
∂η/∂τ = F_η + ∂/∂ξ (B_ξ ∂η/∂ξ) + ∂/∂ϑ (B_ϑ ∂η/∂ϑ),(24)

∂φ/∂τ = F_φ + ∂/∂ξ (B_ξ ∂φ/∂ξ) + ∂/∂ϑ (B_ϑ ∂φ/∂ϑ),(25)

where F_η and F_φ are the right-hand sides of Eqs. (4) and (5), including the terms introduced in terms of the Fourier coefficients by Eqs. (19)–(23); B_ξ and B_ϑ are the diffusion coefficients. It was suggested in the first versions of the scheme that the diffusion coefficient depends on the local slope; however, such a scheme did not prove to be very reliable, since it did not prevent all of the events of the numerical instability. A scheme based on the calculation of the local curvilinearities η_ξξ and η_ϑϑ turned out to be a lot more robust. Calculations of 75 different runs were performed with the full 3-D model in Chalikov et al. (2014) over the period of t = 350 (70 000 time steps). The total number of values used for the calculation of the dependence in Fig. 2 (thick curve) is about 6 billion. The normal probability calculated with the same dispersion is shown by a thin curve.
It is seen that the probability of large negative values of the curvilinearity calculated over an ensemble of linear modes is by orders of magnitude smaller than the probability obtained with the spectra generated by the nonlinear model.
The curvilinearity turned out to be very sensitive to the shape of the surface. This is why it was chosen as a criterion of the approach of breaking. The coefficients B_ξ and B_ϑ depend nonlinearly on the curvilinearity:
where Δξ and Δϑ are the horizontal steps of the grid in the x and y directions, and the coefficients are C_B = 2.0 and η_{ξξ}^{cr} = η_{ϑϑ}^{cr} = −50. Equations (24)–(27) do not change the volume and decrease the local potential and kinetic energy. It is assumed that the lost momentum and energy are transferred to the current and turbulence (see Chalikov and Belevich, 1992). In addition, the energy also goes to other wave modes. The choice of the parameters in Eqs. (24)–(27) is based on simple considerations: a local piece of surface can closely approach the critical curvilinearity but not exceed it. The values of the coefficients are picked with a margin to provide stability for long runs.
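For illustration, here is a 1-D sketch of a curvature-triggered conservative diffusion of this kind (our own construction: the paper specifies only that the coefficients depend nonlinearly on the curvilinearity and are switched on beyond the critical value, so the particular form of B below is an assumption):

```python
import numpy as np

def breaking_diffusion_1d(eta, dx, CB=2.0, eta_cr=-50.0):
    # Curvilinearity (second derivative) on a periodic grid.
    eta_xx = (np.roll(eta, -1) - 2.0 * eta + np.roll(eta, 1)) / dx ** 2
    # The coefficient is nonzero only where eta_xx drops below the critical
    # value; the ratio eta_xx / eta_cr (> 1 there) is our illustrative choice.
    B = np.where(eta_xx < eta_cr, CB * dx ** 2 * (eta_xx / eta_cr), 0.0)
    # Conservative (flux) form of d/dx (B * d eta / dx): the sum of the
    # returned tendency over the grid is zero, so the volume is preserved.
    flux = B * (np.roll(eta, -1) - eta) / dx
    return (flux - np.roll(flux, 1)) / dx
```

Because the tendency is written in flux form, its sum over the periodic grid is exactly zero, which reflects the volume-preservation requirement (3) above.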
Figure 2 Probability of the curvilinearity η_ξξ. The thick curve is calculated with the full 3-D model; the thin curve is the probability calculated over an ensemble of linear modes with the same spectrum.
We do not think that the suggested breaking parameterization is a final solution to the problem. Other schemes will be tested in the next version of the model. However, the results presented below show that the scheme is reliable and provides a realistic energy dissipation rate.
4 Calculations and results
The elevation and surface velocity potential fields are approximated in the current calculations by Mx = 256 and My = 128 modes in the directions x and y. The corresponding grid includes Nx × Ny = 1024 × 512 nodes. The vertical derivatives are approximated on a vertically stretched grid Δζ_{j+1} = χΔζ_j (j = 1, 2, …, L_w), where χ = 1.2 and L_w = 10. The small number of levels used for the solution of the equation for the nonlinear component of the velocity potential is possible because just the surface vertical derivative of the velocity potential, ∂Φ/∂ζ at ζ = 0, is required. The velocity potential mainly consists of the analytical component Φ̄, while the nonlinear component provides only a small correction. To reach the accuracy of the solution ε = 10^{−6} for Eq. (12), no more than two iterations were usually sufficient.
The parameters chosen were used for the solution of the problem of wave field evolution over an acceptable time (of the order of 10 days). The initial conditions were assigned on the basis of the empirical JONSWAP spectrum (Hasselmann et al., 1973) with a maximum placed at the wave number k_p = 100 and with the angular spreading proportional to (cos ψ)²⁵⁶. The details of the initial conditions are of no importance because the initial energy level is quite low.
The total energy of the wave motion E = E_p + E_k (E_p is the potential energy, while E_k is the kinetic energy) is calculated with the following formulas:

E_p = (1/2) \overline{η²}, E_k = (1/2) \overline{\overline{u² + v² + w²}},(28)

where a single bar denotes averaging over the ξ and ϑ coordinates, and a double bar denotes averaging over the entire volume. The derivatives in Eq. (28) are calculated according to the transformation (1). The equation of the evolution of the integral energy E = E_p + E_k can be represented in the following form:

dE/dτ = I + D_b + D_t + N,(29)
where I is the integral input of energy from wind (Eqs. 14–18); Db is the rate of the energy dissipation due to the wave breaking (Eqs. 24–27); Dt is the rate of the energy dissipation due to filtration of high-wave-number modes (tail dissipation, Eqs. 19–23); N is an integral effect of the nonlinear interactions described by the right-hand side of the equations when the surface pressure p is equal to zero. The differential form for calculation of the energy transformation can be, in principle, derived from Eqs. (4)–(6), but here a more convenient and simple method was applied. Different rates of the integral energy transformations can be calculated with the help of fictitious time steps (i.e., apart from the basic calculations). For example, the value of I is calculated by the following relation:
I = (E^{t+Δt} − E^t)/Δt,(30)

where E^{t+Δt} is the integral energy of the wave field obtained after one time step with the right-hand side of Eq. (5) containing only the surface pressure calculated with Eqs. (14)–(18). For the calculation of the dissipation rate due to the filtration, the right-hand side of the equations contains just the terms introduced in Eqs. (19)–(23), while for the calculation of the effects of breaking, only the terms introduced in Eqs. (24)–(27) are in use.
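The bookkeeping can be expressed in a few lines; in the sketch below, total_energy and step_with are hypothetical stand-ins for the model's own routines (our naming, not the authors'):

```python
def total_energy(state):
    """Placeholder: integral energy E = Ep + Ek of the model state, Eq. (28)."""
    raise NotImplementedError

def step_with(state, dt, only):
    """Placeholder: advance one time step retaining only one tendency term."""
    raise NotImplementedError

def rate_of(term, state, dt):
    # One fictitious step with a single term switched on, then an energy
    # difference, as in Eq. (30): I = (E(t + dt) - E(t)) / dt.
    E0 = total_energy(state)
    E1 = total_energy(step_with(state, dt, only=term))
    return (E1 - E0) / dt

# I  = rate_of("pressure", state, dt)   # wind input, Eqs. (14)-(18)
# Dt = rate_of("tail",     state, dt)   # tail dissipation, Eqs. (19)-(23)
# Db = rate_of("breaking", state, dt)   # breaking dissipation, Eqs. (24)-(27)
```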
An evolution of the characteristics calculated by Eq. (30) is shown in Fig. 3. The sharp variations in all the characteristics at t < 50 can probably be explained by the adjustment of the initial linear fields to the nonlinearity. Toward the end of the integration, the sum of all the energy transition terms (the tail dissipation D_t, the breaking dissipation D_b and the energy input I) approaches zero (curve 4), and the energy growth E (curve 5) stops. Then the energy tends to decrease, but we are not sure about the nature of this effect. Such behavior can be explained by a fluctuating character of the mutual adjustment of the input and dissipation or simply by a deterioration of the approximation because of the downshifting process. Note that opposite to the more or less monotonic behavior of the tail dissipation (curve 1), the breaking dissipation is highly intermittent, which is consistent with the common views on the nature of wave breaking.
The data on the evolution of the wave spectrum are shown in Fig. 4. The 2-D wave spectrum S(k,l) (0 ≤ k ≤ M_x, −M_y ≤ l ≤ M_y), averaged over 13 time intervals of length Δt ≈ 100, was transformed to the polar coordinates S_p(ψ,r) (−π/2 ≤ ψ ≤ π/2, 0 ≤ r ≤ M_x) and then integrated over the angle ψ to obtain the 1-D spectrum S_h(r):

S_h(r) = Σ_ψ S_p(ψ, r) Δψ.(31)

The angle ψ = 0 coincides with the direction of the wind U; Δψ = π/180.
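A sketch of this reduction (our own illustration; the interpolation scheme and the treatment of any normalization factors are assumptions, with the definition fixed by Eq. 31):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def angle_integrated_spectrum(S, Mx, My, dpsi=np.pi / 180):
    # S has shape (Mx + 1, 2 * My + 1) on the grid 0..Mx by -My..My.
    k = np.arange(0, Mx + 1)
    l = np.arange(-My, My + 1)
    interp = RegularGridInterpolator((k, l), S, bounds_error=False,
                                     fill_value=0.0)
    psi = np.arange(-np.pi / 2, np.pi / 2, dpsi)  # wind direction is psi = 0
    r = np.arange(0.0, Mx + 1)
    kk = r[None, :] * np.cos(psi[:, None])
    ll = r[None, :] * np.sin(psi[:, None])
    Sp = interp(np.stack([kk, ll], axis=-1))      # S_p(psi, r)
    return (Sp * dpsi).sum(axis=0)                # Eq. (31)
```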
Figure 3 Evolution of the integral characteristics of the solution, i.e., the rate of evolution of the integral energy multiplied by 10⁷ due to 1 – the tail dissipation D_t (Eqs. 19–23), 2 – the breaking dissipation D_b (Eqs. 24–27), 3 – the input of energy from wind I (Eqs. 14–18) and 4 – the balance of energy I + D_t + D_b. Curve 5 shows the evolution of the wave energy 10⁵E. Grey vertical bars show the instantaneous values; the thick curve shows the smoothed behavior.
Figure 4 The wave spectra S_h(r) integrated over the angle ψ in the polar coordinates and averaged over consecutive intervals of about 100 units of the nondimensional time t. The spectra grow and shift from right to left.
As seen, each spectrum consists of separated peaks and holes. This phenomenon was first observed and discussed by Chalikov et al. (2014). The repeated calculations with different resolutions showed that such a structure of the 2-D spectrum is typical. It cannot be explained by a fixed combination of interacting modes, since in different runs (with the same initial conditions but a different set of phases for the modes) the peaks are located at different positions in the Fourier space.
Another presentation is given in Fig. 6, in which log₁₀(S(k,l)) averaged over seven successive periods of length Δt = 200 is shown. The first panel, with a mark of 0, refers to the initial conditions. The disturbances within the range 125 < k < 150 reflect the initial adjustment of the input and dissipation at the high-wave-number slope of the spectrum. The pictures characterize well the downshifting and the angular spreading of the spectrum due to the nonlinear interactions.
Figure 5 Sequence of 3-D images of log₁₀(S(k,l)), in which each panel corresponds to a single curve in Fig. 4. The left side refers to the wave number l (−M_y ≤ l ≤ M_y), and the front side refers to k (0 ≤ k ≤ M_x). The numbers indicate the end of the time interval expressed in hundreds of nondimensional time units.
Figure 6 Sequence of 2-D images of log₁₀(S(k,l)) averaged over seven consecutive periods of length Δt = 200. The numbers indicate the period of averaging (the first panel, marked 0, refers to the initial conditions). The horizontal and vertical axes correspond to the wave numbers k and l, respectively.
Evolution of the wave spectrum S_h(r) integrated over the angles ψ can be described with the equation

∂S_h(r)/∂τ = I(r) + D_t(r) + D_b(r) + N(r),(32)

where I(r), D_t(r), D_b(r) and N(r) are the spectra of the input energy, the tail dissipation, the breaking dissipation and the rate of the nonlinear interactions, all obtained by integration over the angles ψ. All of the spectra shown below were obtained by transformation of the 2-D spectra into the polar coordinates (ψ,r) and then integration over the angles ψ within the interval (−π/2, π/2). The spectra can be calculated using an algorithm similar to that of Eq. (30) for the integral characteristics. For example, the spectrum of the energy input I(k,l) is calculated as follows:
I(k,l) = (S_c^{t+Δt} − S_c^t)/Δt,

where S_c(k_x, k_y) is the spectrum of the columnar energy calculated by the relation
where the grid values of velocity components u,v and w are calculated by the relations
and u_{k,l}, v_{k,l} and w_{k,l} are their Fourier coefficients.
For the calculation of I(k,l), the fictitious time steps Δt are made with only the term responsible for the energy input, i.e., the surface pressure p, retained. The spectrum I(k,l) was averaged over the periods Δt ≈ 100, then transformed into the polar coordinate system and integrated in the Fourier space over the angles ψ within the interval (−π/2, π/2).
Evolution of the input spectra (Fig. 7) is in general similar to that of the wave spectra shown in Fig. 4. Note that the maximum of the spectra is located at the maximum of the wave spectra since the input depends mainly on the spectral density, while the dependence on frequency is less important.
The algorithm (Eq. 30) was applied for the calculation of the dissipation spectra due to the damping of the high-wave-number part of the spectrum (tail dissipation) and for the calculation of the spectrum of the breaking dissipation. In the first case, the fictitious time step was made taking into account the terms described by Eqs. (19)–(23), while in the second case the time step was made using the terms described by Eqs. (24)–(27).
The spectra of the tail dissipation calculated similarly to the spectra I(r) are shown in Fig. 8. The dissipation occurs at the periphery of the spectrum, outside the ellipse with the semiaxes d_m M_x and d_m M_y. This is why such dissipation, averaged over the angles, seems to affect the middle part of the 1-D spectrum. The tail dissipation effectively stabilizes the solution.
Figure 7 The spectra of the energy input I(r) integrated over the angle ψ in the polar coordinates and averaged over consecutive intervals of about 100 units of the nondimensional time t.
Figure 8 The tail dissipation spectra D_t(r) integrated over the angle ψ in the polar coordinates and averaged over consecutive intervals of about 100 units of the nondimensional time t.
The breaking dissipation averaged over the angles is presented in Fig. 9. As seen, the breaking dissipation has a maximum at the spectral peak. This does not mean that in the vicinity of the spectral peak the probability of a large curvilinearity is especially high. The high rate of the breaking dissipation can rather be explained by the high wave energy in the vicinity of the peak. The energy lost through the breaking, described by the diffusion mechanism, correlates with the energy of the breaking waves. In contrast to the high-wave-number dissipation, which regulates the shape of the spectral tail, the breaking dissipation forms the main energy-containing part of the spectrum.
The diffusion mechanism suggested in Eqs. (24)–(27) modifies the elevation and the surface potential in the close vicinity of the breaking point. The amplitudes of the side perturbations are small and decrease very quickly with the distance from the breaking point.
An example of the profile of the energy exchange due to the breaking, D_b(x), is given in Fig. 10. As seen, the energy exchange fluctuates around the breaking point. The diffusion operator chosen for the breaking parameterization not only decreases the total energy but also redistributes the energy between the Fourier modes.
Figure 9 The breaking dissipation spectra D_b(r) integrated over the angle ψ in the polar coordinates and averaged over consecutive intervals of about 100 units of the nondimensional time t.
In general, for the specific conditions considered in this paper, the breaking is an occasional process taking place in a small part of the domain. The kurtosis of the energy input due to the breaking D_b(ξ,ϑ), i.e., the value

K = \overline{(D_b − \overline{D_b})⁴} / (\overline{(D_b − \overline{D_b})²})²,

is of the order of 10³, which corresponds to a plain function with occasional separated peaks.
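For reference, a minimal sketch of this diagnostic (the centring of the field before taking the moments is our assumption):

```python
import numpy as np

def kurtosis(field):
    # Kurtosis of the breaking-dissipation field Db(xi, theta); values of
    # order 10^3 indicate a nearly flat field with rare isolated peaks.
    f = field - field.mean()
    return (f ** 4).mean() / ((f ** 2).mean()) ** 2
```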
The number of breaking points, expressed as a percentage of the total number of grid points, is given in Fig. 11. As seen, the number of breaking events decreases up to t = 600 and then increases till the end of the calculations. The number of breaking events is not directly connected with the intensity of breaking, which is seen when comparing Fig. 11 and curve 2 in Fig. 3.
The integral term describing the nonlinear interactions N in Eq. (29) is small (compared with the local values of N_{k,l}), but the magnitude of the spectrum N(r) is comparable with the input I(r) and the dissipation terms D_t(r) and D_b(r). The presentation of the term N(r) in the form shown in Figs. 7–9 is not clear enough. This is why the spectra 10⁸N(r) averaged over the interval Δt = 100 are plotted separately in Fig. 12 for the last eight intervals (thin curves) together with the wave spectra 10⁶S_h(r) (thick curves). In general, the shapes of the spectrum N(r) agree with the conclusions of the quasi-linear theory of Hasselmann (1962) (see also Hasselmann et al., 1985). At the low-wave-number slope of the spectrum the nonlinear influx of energy is positive, while at the opposite slope it is negative. This process produces the shifting of the spectrum to lower wave numbers (downshifting). As opposed to Hasselmann's theory, these results are obtained by solution of the full three-dimensional equations. It would be interesting to compare our results with the calculations of Hasselmann's integral. Unfortunately, none of the existing programs of such a type permits calculations with the high resolution that was used in the current model. Note that the nonlinear interactions also produce a widening of the spectrum.
Figure 10 Example of the energy input due to the breaking D_b(x).
The nonlinearity is quite an important property of surface waves. Its contribution can be estimated, for example, by comparison of the kinetic energy of the linear component, E_l = (1/2) \overline{\overline{Φ̄_x² + Φ̄_y² + Φ̄_z²}}, and the total kinetic energy E_k (Fig. 13). The ratio E_l/E_k as a function of time remains very close to 1, which proves that the nonlinear part of the energy makes up just a small percentage of the total energy. It does not mean that the role of the nonlinearity is small; its influence can manifest itself over large timescales.
The time evolution of the integral spectral characteristics is presented in Fig. 14. Curve 1 corresponds to the weighted frequency ω_w:

ω_w = ∫ ω S dk dl / ∫ S dk dl,(34)

where the integrals are taken over the entire Fourier domain. The value of ω_w is not sensitive to the details of the spectrum; hence, it characterizes well the position of the spectrum and its shifting. Curve 2 describes the evolution of the spectral maximum. The step-like shape of the curve corresponds to the fundamental property of downshifting. As opposed to the common views, the development of the spectrum occurs not monotonically but rather through the appearance of a new maximum at a lower wave number and the attenuation of the previous maximum. It is interesting to note that the same phenomenon is also observed in a spectral model (Rogers et al., 2012). Curve 3 describes the change of the total energy E = E_p + E_k. As seen, all three curves have a tendency to slow down their evolution rate; the tendency of the energy to decrease at the end of the run was discussed above. The numerical experiment reproduces the case when the development of a wave field occurs under the action of a permanent and uniform wind. This case corresponds to the JONSWAP experiment. Despite a large scatter, the JONSWAP data allow us to construct empirical approximations of a wave spectrum, as well as to investigate the evolution of the spectrum as a function of fetch F. In particular, it is suggested that the frequency of the spectral peak changes as F^{−1/3}, while the full energy grows linearly with F. Neither of the dependences can be exact, since they do not take into account the approach to a stationary regime. In addition, the dependence of frequency on fetch is singular at F = 0.
Figure 11 Evolution of the number of the wave breaking events N_b expressed as a percentage of the number of grid points N_x × N_y.
The value of fetch in a periodic problem can be calculated by integration of the peak phase velocity c_p = k_p^{−1/2} over time.
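A minimal sketch of this bookkeeping, assuming a stored time series of the peak wave number k_p(t):

```python
import numpy as np

def fetch(t, kp):
    # Equivalent fetch in the periodic domain: integral of the peak phase
    # velocity c_p = k_p^(-1/2) over time (trapezoidal rule).
    cp = np.asarray(kp, dtype=float) ** -0.5
    return np.trapz(cp, t)
```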
The JONSWAP dependences for the frequency of the spectral peak ω_p and the full energy E are shown in Fig. 14 by thin curves. The dependence ω_p ∝ F^{−1/3} is qualitatively valid. The dependence of the total energy on fetch does not look linear, but it is worth noting that the JONSWAP dependence is evidently inapplicable at very small and very large fetches.
Figure 12 Sequence of the wave spectra S_h(r) (thick curves) and the nonlinear input term N(r) (thin curves) averaged over eight consecutive intervals of length Δt = 100, starting from the sixth interval.
Figure 13 Time evolution of the ratio E_l/E_k.
Figure 14 Time evolution of the weighted frequency ω_w (1) (Eq. 34), the spectral peak frequency ω_p = k_p^{1/2} (2) and the full energy E (3) (Eq. 28). Thin curves correspond to the empirical dependences for the peak wave number and energy. F is the distance passed by the spectral peak.
5 Discussion
A model based on the full three-dimensional equations of potential motion with a free surface was used for the simulation of the development of wave fields. The model is written in a surface-following nonstationary nonorthogonal coordinate system. The details of the numerical scheme and the results of the model validation were described in Chalikov et al. (2014). The main difference between the given model and the HOS model (Ducrozet et al., 2016) is that our model is based on a direct solution of the 3-D equation for the velocity potential. This approach is similar to that developed at the Technical University of Denmark (TUD; see Engsig-Karup et al., 2009). Actually, the models developed at TUD are targeted at the solution of a variety of problems, including the modeling of wave interaction with submerged objects and the simulation of the wave regime in basins with a real shape and topography.
In the current paper a three-dimensional model was used for the simulation of the development of a wave field under the action of wind and dissipation. The input energy is described by a single term, i.e., the surface pressure p in Eq. (5). It is traditionally assumed that the complex pressure amplitude in the Fourier space is linearly connected with the complex elevation amplitude through a complex coefficient called the β function. Such a simple formulation can be imperfect. First, it is assumed that the wave field is represented by a superposition of linear modes with slowly changing amplitudes and with the phase velocities obeying the linear dispersion relation. This assumption is valid only for the low-frequency part of the spectrum. In reality, the amplitudes of the medium- and high-frequency modes undergo fluctuations created by the reversible interactions, and their phase velocities are not rigidly connected with the wave number by a dispersion relation. In addition, it is also quite possible that the suggestion of the linearity of the connection between the pressure and elevation amplitudes is not precise, i.e., the β function can depend on the amplitudes of the modes.
We are not familiar with any observational data that could be used for the formulation of a statistically supported scheme for the calculation of the input energy to waves. The only method that can give more or less reliable results is the mathematical modeling of the statistical structure of a turbulent boundary layer above a curvilinear moving surface whose characteristics satisfy the kinematic conditions. The method described above is based on several millions of values of the pressure referred strictly to the surface. As a whole, the problem of the boundary layer seems even more complicated than the wave problem itself. Some early attempts to solve this problem were made on the basis of a finite-difference two-dimensional model of a boundary layer written in a simple surface-following coordinate system (see the review in Chalikov, 1986). Waves were assigned as a superposition of linear modes with random phases, corresponding to the empirical wave spectrum. This approach was not quite accurate, since it did not take into account the nonlinear properties of the surface (for example, the sharpness of real waves and the absence of a dispersion relation for the waves of medium and high frequencies). The next step was the formulation of coupled models for a boundary layer and potential waves, both written in the conformal coordinates (Chalikov and Rainchik, 2010). The calculations showed that the pressure field consists mostly of random fluctuations not directly connected with the waves. A small part of these fluctuations is in phase with the surface disturbances. The calculated values of β in Eq. (14) have a large dispersion. However, since the volume of the data was very large, the shape of the β function was found with a high level of accuracy. Probably, the approximation of β used in the current work can be considered the most adequate one. We are planning additional investigations based on the coupled wind–wave models. The next step in the investigations of the wave boundary layer (WBL) should use a three-dimensional LES approach. Note that even the availability of a large volume of data on the structure of the WBL does not make the problem of parameterization of the wind input in the spectral wave models easily solvable, since the pressure is characterized by a broad continuous spectrum created by the nonlinearity.
The wave breaking is obviously an even more complicated process than the energy input. Nevertheless, this problem can be simplified if the common ideas used in numerical fluid mechanics are accepted. For example, in the LES modeling a more or less artificial viscosity is introduced to prevent too large local velocity gradients. In fact, the numerical instability terminating the computations precedes the wave breaking; hence, the scheme should prevent the approach of breaking to preserve the stability of the numerical scheme, i.e., a wave model should contain the algorithms preventing the appearance of too large slopes. A criterion of breaking is introduced not for recognizing the breaking itself, but for the choice of the places where it might happen (or, unfortunately, might not happen). Finally, the algorithm should produce the local smoothing of the elevation (and the surface potential). The algorithm should be highly selective, so that the “breaking” could occur within narrow intervals and not affect the entire area. The exact criteria of the breaking events (the most evident of them being the breaking itself) cannot be used for the parameterization of breaking, since in the coordinate system (1) the numerical instability occurs long before the breaking. In our opinion, the most sensitive parameter indicating the potential instability is the curvilinearity (second derivative) of the elevation.
In the current work, the breaking is parameterized by a diffusion algorithm with a nonlinear coefficient of diffusion providing a high selectivity of the smoothing. We admit that such an approach can be realized in many different forms. The same situation is observed in the problem of turbulence modeling for the parameterization of subgrid scales. Note that the breaking dissipation in the phase-resolving models is included in a more realistic manner than in the spectral models. For example, the breaking is simulated in the physical space, which allows us to reduce the height and energy of the nonlinear waves composed of several modes. In the spectral models the dissipation is distributed more or less arbitrarily over the entire spectrum. The spectral models sometimes include an additional dissipation of short waves due to their modulation by long waves (Young and Babanin, 2006; Babanin et al., 2010). In the phase-resolving models this process is included explicitly.
We can finally conclude that the physics included in wave models still rests on shaky ground. Nevertheless, the result of the calculations looks quite realistic, which convinces us that the approach deserves further development.
The numerical models of waves similar to that considered in this paper have a lot of important applications. First, they are a perfect tool for the development of physical parameterization schemes for the spectral wave models. Second, a direct model can be used in the future for the numerical simulation of wave processes in basins of small and medium size. These investigations can be based on the HOS model (Ducrozet et al., 2016) or the model used in the current paper. However, the most universal approach seems to be the one developed at the Technical University of Denmark (see Engsig-Karup et al., 2009). Any model used for the long-term simulation of wave field evolution should include the algorithms describing the transformations of energy similar to those considered in the current paper.
Data availability
The underlying data (150 GB) are not publicly accessible. Any part of them can be shared upon request.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The authors thank Olga Chalikova for her assistance in the preparation of the paper, as well as the anonymous reviewers for their constructive comments. This investigation was supported by the Russian Science Foundation, project 16-17-00124.
Edited by: Neil Wells
Reviewed by: two anonymous referees
References
Alberello, A., Chabchoub, A., Monty, J. P., Nelli, F., Lee, J. H., Elsnab, J., and Toffoli, A.: An experimental comparison of velocities underneath focused breaking waves, Ocean Eng., 155, 201–210, 2018.
Alves, J. H. G. M. and Banner, M. L.: Performance of a Saturation-Based Dissipation-Rate Source Term in Modeling the Fetch-Limited Evolution of Wind Waves, J. Phys. Oceanogr., 33, 1274–1298, 2003.
Al-Zanaidi, M. A. and Hui, W. H.: Turbulent airflow over water waves – a numerical study, J. Fluid Mech., 148, 225–246, 1984.
Babanin, A. V., Tsagareli, K. N., Young, I. R., and Walker, D. J.: Numerical Investigation of Spectral Evolution of Wind Waves. Part II: Dissipation Term and Evolution Tests, J. Phys. Oceanogr., 40, 667–683, 2010.
Babanin, A. V.: Breaking and Dissipation of Ocean Surface Waves, Cambridge University Press, The Edinburgh Building, Cambridge, UK, 480 pp., 2011.
Beale, J. T.: A convergent boundary integral method for three-dimensional water waves, Math. Comput., 70, 977–1029, 2001.
Bonnefoy, F., Ducrozet, G., Le Touzé, D., and Ferrant, P.: Time-domain simulation of nonlinear water waves using spectral methods, in: Advances in Numerical Simulation of Nonlinear Water Waves, Advances in Coastal and Ocean Engineering, World Scientific, 11, 129–164, 2010.
Causon, D. M., Mingham, C. G., and Qian, L.: Developments In Multi-Fluid Finite Volume Free Surface Capturing Methods, Advances in Numerical Simulation of Nonlinear Water Waves, 11, 397–427, 2010.
Chalikov, D.: The Parameterization of the Wave Boundary Layer, J. Phys. Oceanogr., 25, 1335–1349, 1995.
Chalikov, D.: Statistical properties of nonlinear one-dimensional wave fields, Nonlinear Proc. Geoph., 12, 1–19, 2005.
Chalikov, D.: Freak waves: their occurrence and probability, Phys. Fluids, 21, 076602, 2009.
Chalikov, D.: Numerical modeling of sea waves, Springer International Publishing AG, Switzerland, 330 pp., 2016.
Chalikov, D. V.: Numerical simulation of wind-wave interaction, J. Fluid Mech., 87, 561–582, 1978.
Chalikov, D. V.: Numerical simulation of the boundary layer above waves, Bound. Layer Met., 34, 63–98, 1986.
Chalikov, D. and Babanin, A. V.: Simulation of Wave Breaking in One-Dimensional Spectral Environment, J. Phys. Oceanogr., 42, 1745–1761, 2012.
Chalikov, D. and Babanin, A. V.: Simulation of one-dimensional evolution of wind waves in a deep water, Phys. Fluids, 26, 096607, 2014.
Chalikov, D. and Babanin, A. V.: Nonlinear sharpening during superposition of surface waves, Ocean Dynam., 66, 931–937, 2016a.
Chalikov, D. and Babanin, A. V.: Comparison of linear and nonlinear extreme wave statistics, Acta Oceanol. Sin., 35, 99–105, 2016b.
Chalikov, D. and Makin, V.: Models of the wave boundary layer, Bound. Layer Met., 56, 83–99, 1991.
Chalikov, D. and Belevich, M.: One-dimensional theory of the wave boundary layer, Bound. Layer Met., 63, 65–96, 1992.
Chalikov, D. and Rainchik, S.: Coupled Numerical Modelling of Wind and Waves and the Theory of the Wave Boundary Layer, Bound. Layer Met., 138, 1–41, 2010.
Chalikov, D. and Sheinin, D.: Direct Modeling of One-dimensional Nonlinear Potential Waves. Nonlinear Ocean Waves, edited by: Perrie, W., Adv. Fluid Mech. Ser., 17, 207–258, 1998.
Chalikov, D., Babanin, A. V., and Sanina, E.: Numerical Modeling of Three-Dimensional Fully Nonlinear Potential Periodic Waves, Ocean Dynam., 64, 1469–1486, 2014.
Clamond, D. and Grue, J.: A fast method for fully nonlinear water wave dynamics, J. Fluid Mech., 447, 337–355, 2001.
Clamond, D., Fructus, D., Grue, J., and Kristiansen, O.: An efficient method for three-dimensional surface wave simulations. Part II: Generation and absorption, J. Comput. Phys., 205, 686–705, 2005.
Clamond, D., Francius, M., Grue, J., and Kharif, C.: Long time interaction of envelope solitons and freak wave formations, Eur. J. Mech. B-Fluid., 25, 536–553, 2006.
Craig, W. and Sulem, C.: Numerical simulation of gravity waves, J. Comput. Phys., 108, 73–83, 1993.
Dalrymple, R. A., Gómez-Gesteira, M., Rogers, B. D., Panizzo, A., Zou, S., Crespo, A. J., Cuomo, G., and Narayanaswamy, M.: Smoothed Particle Hydrodynamics For Water Waves, Advances in Numerical Simulation of Nonlinear Water Waves, 465–495, 2010.
Dommermuth, D. and Yue, D.: A high-order spectral method for the study of nonlinear gravity Waves, J. Fluid Mech., 184, 267–288, 1987.
Donelan, M. A., Babanin, A. V., Young, I. R., Banner, M. L., and McCormick, C.: Wave follower field measurements of the wind input spectral function. Part I. Measurements and calibrations, J. Atmos. Ocean Tech., 22, 799–813, 2005.
Donelan, M. A., Babanin, A. V., Young, I. R., and Banner, M. L.: Wave follower field measurements of the wind input spectral function. Part II. Parameterization of the wind input, J. Phys. Oceanogr., 36, 1672–1688, 2006.
Dysthe, K. B.: Note on a modification to the nonlinear Schrödinger equation for application to deep water waves, Proc. R. Soc. Lond. A, 369, 105–114, 1979.
Ducrozet, G., Bonnefoy, F., Le Touzé, D., and Ferrant, P.: 3-D HOS simulations of extreme waves in open seas, Nat. Hazards Earth Syst. Sci., 7, 109–122, 2007.
Ducrozet, G., Bingham, H. B., Engsig-Karup, A. P., Bonnefoy, F., and Ferrant, P.: A comparative study of two fast nonlinear free-surface water wave models, Int. J. Numer. Meth. Fluids, 69, 1818–1834, 2012.
Ducrozet, G., Bonnefoy, F., Le Touzé, D., and Ferrant, P.: HOS-ocean: Open-source solver for nonlinear waves in open ocean based on High-Order Spectral method, Comp. Phys. Comm., 203, 245–254, 2016.
Engsig-Karup, A. P., Bingham, H. B., and Lindberg, O.: An efficient flexible-order model for 3D nonlinear water waves, J. Comput. Phys., 228, 2100–2118, 2009.
Engsig-Karup, A. P., Madsen, M. G., and Glimberg, S. L.: A massively parallel GPU-accelerated model for analysis of fully nonlinear free surface waves, Int. J. Numer. Meth. Fl., 70, 20–36, 2012.
Fochesato, C., Dias, F., and Grilli, S.: Wave energy focusing in a three-dimensional numerical wave tank, Proc. R. Soc. A, 462, 2715–2735, 2006.
Fructus, D., Clamond, D., Grue, J., and Kristiansen, Ø.: An efficient model for three-dimensional surface wave simulations. Part I: Free space problems, J. Comput. Phys., 205, 665–68, 2005.
Gent, P. R. and Taylor, P. A.: A numerical model of the air flow above water waves, J. Fluid Mech., 77, 105–128, 1976.
Gou, Y., Teng, B., and Yoshida, S.: An Extremely Efficient Boundary Element Method for Wave Interaction with Long Cylindrical Structures Based on Free-Surface Green's function, Computation, 4, 36, 2016.
Greaves, D.: Application Of The Finite Volume Method To The Simulation Of Nonlinear Water Waves, Advances in Numerical Simulation of Nonlinear Water Waves, 11, 357–396, 2010.
Grilli, S., Guyenne, P., and Dias, F.: A fully nonlinear model for three-dimensional overturning waves over arbitrary bottom, Int. J. Num. Methods Fluids, 35, 829–867, 2001.
Grue, J. and Fructus, D.: Model For Fully Nonlinear Ocean Wave Simulations Derived Using Fourier Inversion Of Integral Equations In 3D, Advances in Numerical Simulation of Nonlinear Water Waves, 11, 1–42, 2010.
Guyenne, P. and Grilli, S. T.: Numerical study of three-dimensional overturning waves in shallow water, J. Fluid Mech., 547, 361–388, 2006.
Hasselmann, K.: On the non-linear energy transfer in a gravity wave spectrum, Part 1, J. Fluid Mech., 12, 481–500, 1962.
Hasselmann, D. and Bösenberg, J. : Field measurements of wave-induced pressure over wind-sea and swell, J. Fluid Mech., 230, 391–428, 1991.
Hasselmann, K., Barnett, T. P., Bouws, E., Carlson, H., Cartwright, D. E., Enke, K., Ewing, J. A., Gienapp, H., Hasselmann, D. E., Kruseman, P., Meerburg, A., Muller, P., Olbers, D. J., Richter, K., Sell, W., and Walden H.: Measurements of wind-wave growth and swell decay during the Joint Sea Wave Project (JONSWAP), Tsch. Hydrogh. Z. Suppl, A8, 1–95, 1973.
Hasselmann, S., Hasselmann, K., Allender, J. H., and Barnett, T. P.: Computations and Parameterizations of the Nonlinear Energy Transfer in a Gravity-Wave Specturm. Part II: Parameterizations of the Nonlinear Energy Transfer for Application in Wave Models, J. Phys. Oceanogr., 15, 1378–1392, 1985.
Hsiao, S. V. and Shemdin, O. H.: Measurements of wind velocity and pressure with a wave follower during MARSEN, J. Geophys. Res., 88, 9841–9849, 1983.
Iafrati, A.: Numerical Study of the Effects of the Breaking Intensity on Wave Breaking Flows, J. Fluid Mech., 622, 371–411, 2009.
Issa, R., Violeau, D., Lee E.-S., and Flament, H.: Modelling nonlinear water waves with RANS and LES SPH models, Advances in Numerical Simulation of Nonlinear Water Waves, 11, 497–537, 2010.
Kim, K. S., Kim, M. H., and Park, J. C.: Development of MPS (Moving Particle Simulation) method for Multi-liquid-layer Sloshing, Math. Probl. Eng., 2014, 350165,, 2014.
Liu, Y., Gou, Y., Bin Teng, B., and Shigeo Yoshida, S.: An Extremely Efficient Boundary Element Method for Wave Interaction with Long Cylindrical Structures Based on Free-Surface Green's Function, Computation, 4, 36,, 2016.
Lubin, P. and Caltagirone, J.-P.: Large eddy simulation of the hydrodynamics generated by breaking waves, Advances in Numerical Simulation of Nonlinear Water Waves, 11, 575–604, 2010.
Ma, Q. W. and Yan, S.: Qale-FEM method and its application to the simulation of free responses of floating bodies and overturning waves, Advances in Numerical Simulation of Nonlinear Water Waves, 165–202, 2010.
Miles, J. W.: On the generation of surface waves by shear flows, J. Fluid Mech., 3, 11, 02,, 1957.
Rogers, W. E., Babanin, A. V., and Wang, D. W.: Observation-consistent input and whitecapping-dissipation in a model for wind-generated surface waves: Description and simple calculations, J. Atmos. Ocean. Tech., 29, 1329–1346, 2012.
Snyder, R. L., Dobson, F. W., Elliott, J. A., and Long, R. B.: Array measurements of atmospheric pressure fluctuations above surface gravity waves, J. Fluid Mech., 102, 1–59, 1981.
Tanaka, M.: Verification of Hasselmann's energy transfer among surface gravity waves by direct numerical simulations of primitive equations, J. Fluid Mech., 444, 199–221, 2001.
Ting C.-H, Babanin, A. V., Chalikov, D., and Hsu, T.-W.: Dependence of drag coefficient on the directional spreading of ocean waves, J. Geophys. Res., 117, C00J14,, 2012.
Toffoli, A., Onorato, M., Bitner-Gregersen, E., and Monbaliu J.: Development of a bimodal structure in ocean wave spectra, J. Geophys. Res., 115, C03006,, 2010.
Tolman, H. and Chalikov, D.: On the source terms in a third-generation wind wave model, J. Phys. Oceanogr., 26, 2497–2518, 1996.
Tolman, H. L. and the WAVEWATCH III R Development Group: User manual and system documentation of WAVEWATCH III R version 4.18 Environmental Modeling Center Marine Modeling and Analysis Branch, Contribution No. 316, 2014.
Touboul, J. and Kharif, C.: Two-Dimensional Direct Numerical Simulations Of The Dynamics Of Rogue Waves Under Wind Action, Advances in Numerical Simulation of Nonlinear Water Waves, 11, 43–74, 2010.
West, B., Brueckner, K., Janda, R., Milder, M., and Milton, R.: A new numerical method for surface hydrodynamics, J. Geophys. Res., 92, 11803–11824, 1987.
Young, I. R. and Babanin, A. V.: Spectral Distribution of Energy Dissipation of Wind-Generated Waves due to Dominant Wave Breaking, J. Phys. Oceanogr., 36, 376–394, 2006.
Young, D.-L., Wu, N.-J., and Tsay, T.-K.: Method Of Fundamental Solutions For Fully Nonlinear Water Waves, Advances in Numerical Simulation of Nonlinear Water Waves, 325–355, 2010.
Xue, M., Xu, H., Liu, Y., and Yue, D. K. P.: Computations of fully nonlinear three-dimensional wave and wave–body interactions. I. Dynamics of steep three-dimensional waves, J. Fluid Mech., 438, 11, 11–39, 2001.
Zakharov, V. E.: Stability of periodic waves of finite amplitude on the surface of deep fluid, J. Appl. Mech. Tech. Phys. JETF, 2, 190–194, 1968 (in English).
Zhao, X., Liu, B.-J., Liang, S.-X., and Sun, Z.-C.: Constrained Interpolation Profile (CIP) method and its application, Chuan Bo Li Xue/Journal of Ship Mechanics, 20, 393–402,, 2016.
The wave spectrum looks more like the Sagrada Família (Gaudí) in Barcelona than the St Mary Axe (“The Gherkin”) in London.
The 2-D Fourier spectral “tail” looks like a peacock tail.
Short summary
Waves obtain energy from wind; they grow and increase in size and speed of propagation. The structure of wave fields becomes complicated due to appearance of new wave components. Finally, the sea surface looks like a poorly organized motion consisting of quickly running large hills and hollows covered with smaller waves. This process can be successfully simulated on computers. Such investigations allow us to understand the physics of sea waves, which is important for practice.
Laguerre polynomials
In mathematics, the Laguerre polynomials, named after Edmond Laguerre (1834–1886), are solutions of Laguerre's equation:

$$x\,y'' + (1 - x)\,y' + n\,y = 0,$$

which is a second-order linear differential equation. This equation has nonsingular solutions only if n is a non-negative integer.
Sometimes the name Laguerre polynomials is used for solutions of

$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0,$$

where n is still a non-negative integer. Then they are also named generalized Laguerre polynomials, as will be done here (alternatively associated Laguerre polynomials or, rarely, Sonine polynomials, after their inventor[1] Nikolay Yakovlevich Sonin).
More generally, a Laguerre function is a solution when n is not necessarily a non-negative integer.
The Laguerre polynomials are also used for Gaussian quadrature to numerically compute integrals of the form

$$\int_0^\infty f(x)\, e^{-x}\, dx.$$
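As a quick numerical illustration (a sketch added here, not part of the original article), NumPy ships Gauss–Laguerre nodes and weights, so an integral of this form needs only a handful of function evaluations:

```python
import numpy as np

# Gauss-Laguerre quadrature: integral_0^inf f(x) e^{-x} dx ~= sum_i w_i f(x_i),
# where the nodes x_i are the roots of the Laguerre polynomial L_n.
nodes, weights = np.polynomial.laguerre.laggauss(10)

f = np.sin  # example integrand; the exact value of int_0^inf sin(x) e^{-x} dx is 1/2
approx = np.sum(weights * f(nodes))
print(approx)  # ~= 0.5
```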
These polynomials, usually denoted L0, L1, ..., are a polynomial sequence which may be defined by the Rodrigues formula,

$$L_n(x) = \frac{e^x}{n!} \frac{d^n}{dx^n}\left(e^{-x} x^n\right),$$

reducing to the closed form of a following section.
They are orthogonal polynomials with respect to the inner product

$$\langle f, g \rangle = \int_0^\infty f(x)\, g(x)\, e^{-x}\, dx.$$
The sequence of Laguerre polynomials n! Ln is a Sheffer sequence. The rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. See also the Tricomi–Carlitz polynomials.
The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space. They further enter in the quantum mechanics of the Morse potential and of the 3D isotropic harmonic oscillator.
Physicists sometimes use a definition for the Laguerre polynomials which is larger by a factor of n! than the definition used here. (Likewise, some physicists may use somewhat different definitions of the so-called associated Laguerre polynomials.)
The first few polynomials

These are the first few Laguerre polynomials:

n = 0:  1
n = 1:  −x + 1
n = 2:  (x² − 4x + 2)/2
n = 3:  (−x³ + 9x² − 18x + 6)/6
n = 4:  (x⁴ − 16x³ + 72x² − 96x + 24)/24

[Figure: The first six Laguerre polynomials.]
Recursive definition, closed form, and generating function
One can also define the Laguerre polynomials recursively, defining the first two polynomials as

$$L_0(x) = 1, \qquad L_1(x) = 1 - x,$$

and then using the following recurrence relation for any k ≥ 1:

$$L_{k+1}(x) = \frac{(2k + 1 - x)\,L_k(x) - k\,L_{k-1}(x)}{k + 1}.$$
In the solution of some boundary value problems, the characteristic values can be useful:

$$L_k(0) = 1, \qquad L_k'(0) = -k.$$
The closed form is

$$L_n(x) = \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{k!}\, x^k.$$

The generating function for them likewise follows,

$$\sum_{n=0}^{\infty} t^n L_n(x) = \frac{1}{1-t}\, e^{-xt/(1-t)}.$$
Polynomials of negative index can be expressed using the ones with positive index:

$$L_{-n}(x) = e^x L_{n-1}(-x).$$
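A minimal sketch (added here, not from the article) showing that the recurrence and the closed-form sum above agree numerically; the recurrence is the preferred route for large n, since the alternating closed-form sum loses precision:

```python
from math import comb, factorial

def laguerre_rec(n, x):
    """Evaluate L_n(x) by the three-term recurrence (numerically stable)."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x  # L_0, L_1
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 - x) * cur - k * prev) / (k + 1)
    return cur

def laguerre_sum(n, x):
    """Evaluate L_n(x) from the closed-form sum (fine for small n)."""
    return sum(comb(n, k) * (-1)**k / factorial(k) * x**k for k in range(n + 1))

for n in (0, 1, 5, 10):
    print(n, laguerre_rec(n, 2.5), laguerre_sum(n, 2.5))  # the pairs agree
```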
Generalized Laguerre polynomials
For arbitrary real α the polynomial solutions of the differential equation[2]

$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0$$

are called generalized Laguerre polynomials, or associated Laguerre polynomials.
One can also define the generalized Laguerre polynomials recursively, defining the first two polynomials as

$$L_0^{(\alpha)}(x) = 1, \qquad L_1^{(\alpha)}(x) = 1 + \alpha - x,$$

and then using the following recurrence for any k ≥ 1:

$$L_{k+1}^{(\alpha)}(x) = \frac{(2k + 1 + \alpha - x)\,L_k^{(\alpha)}(x) - (k + \alpha)\,L_{k-1}^{(\alpha)}(x)}{k + 1}.$$
The simple Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials:

$$L_n(x) = L_n^{(0)}(x).$$
The Rodrigues formula for them is

$$L_n^{(\alpha)}(x) = \frac{x^{-\alpha} e^x}{n!} \frac{d^n}{dx^n}\left(e^{-x} x^{n+\alpha}\right).$$
The generating function for them is

$$\sum_{n=0}^{\infty} t^n L_n^{(\alpha)}(x) = \frac{1}{(1-t)^{\alpha+1}}\, e^{-xt/(1-t)}.$$
[Figure: The first few generalized Laguerre polynomials, L_n^{(k)}(x).]
Explicit examples and properties of the generalized Laguerre polynomials
• For real α one can define the Laguerre function by

$$L_n^{(\alpha)}(x) = \binom{n+\alpha}{n} M(-n, \alpha+1, x),$$

where M is Kummer's confluent hypergeometric function of the first kind and

$$\binom{n+\alpha}{n}$$

is a generalized binomial coefficient. When n is an integer the function reduces to a polynomial of degree n. It has the alternative expression[4]

$$L_n^{(\alpha)}(x) = \frac{(-1)^n}{n!}\, U(-n, \alpha+1, x)$$

in terms of Kummer's function of the second kind.
• The closed form for these generalized Laguerre polynomials of degree n is[5]

$$L_n^{(\alpha)}(x) = \sum_{i=0}^{n} (-1)^i \binom{n+\alpha}{n-i} \frac{x^i}{i!},$$

derived by applying Leibniz's theorem for differentiation of a product to Rodrigues' formula.
• The first few generalized Laguerre polynomials are:

$$L_0^{(\alpha)}(x) = 1$$
$$L_1^{(\alpha)}(x) = -x + \alpha + 1$$
$$L_2^{(\alpha)}(x) = \frac{x^2}{2} - (\alpha+2)\,x + \frac{(\alpha+1)(\alpha+2)}{2}$$
$$L_3^{(\alpha)}(x) = -\frac{x^3}{6} + \frac{(\alpha+3)\,x^2}{2} - \frac{(\alpha+2)(\alpha+3)\,x}{2} + \frac{(\alpha+1)(\alpha+2)(\alpha+3)}{6}$$
• If α is non-negative, then Ln(α) has n real, strictly positive roots (the polynomials form a Sturm chain), all of which lie in a bounded subinterval of (0, ∞).
• The polynomials' asymptotic behaviour for large n, but fixed α and x > 0, is given by[6][7]
and summarizing by
where Jα is the Bessel function.
As a contour integral
Given the generating function specified above, the polynomials may be expressed in terms of a contour integral

$$L_n^{(\alpha)}(x) = \frac{1}{2\pi i} \oint \frac{e^{-xt/(1-t)}}{(1-t)^{\alpha+1}\, t^{n+1}}\, dt,$$

where the contour circles the origin once in a counterclockwise direction without enclosing the essential singularity at 1.
Recurrence relations
The addition formula for Laguerre polynomials:[8]

$$L_n^{(\alpha+\beta+1)}(x + y) = \sum_{i=0}^{n} L_i^{(\alpha)}(x)\, L_{n-i}^{(\beta)}(y).$$
Laguerre's polynomials satisfy the recurrence relations

$$L_n^{(\alpha)}(x) = L_n^{(\alpha+1)}(x) - L_{n-1}^{(\alpha+1)}(x),$$

in particular

$$L_n^{(\alpha+1)}(x) = \sum_{i=0}^{n} L_i^{(\alpha)}(x).$$
They can be used to derive the four 3-point rules; combined, these give additional useful recurrence relations.
Since n! Ln(α)(x) is a monic polynomial of degree n in α, there is a partial fraction decomposition. The second equality follows from the following identity, valid for integer i and n and immediate from the expression of Ln(α)(x) in terms of Charlier polynomials. For the third equality apply the fourth and fifth identities of this section.
Derivatives of generalized Laguerre polynomials
Differentiating the power series representation of a generalized Laguerre polynomial k times leads to

$$\frac{d^k}{dx^k} L_n^{(\alpha)}(x) = (-1)^k\, L_{n-k}^{(\alpha+k)}(x) \qquad (k \le n;\ \text{zero otherwise}).$$

This points to a special case (α = 0) of the formula above: for integer α = k the generalized polynomial may be written

$$L_n^{(k)}(x) = (-1)^k \frac{d^k L_{n+k}(x)}{dx^k},$$

the shift by k sometimes causing confusion with the usual parenthesis notation for a derivative.
Moreover, the following equation holds:
which generalizes with Cauchy's formula to
The derivative with respect to the second variable α has the form[9]

$$\frac{\partial}{\partial \alpha} L_n^{(\alpha)}(x) = \sum_{i=0}^{n-1} \frac{L_i^{(\alpha)}(x)}{n - i}.$$

This is evident from the contour integral representation above.
The generalized Laguerre polynomials obey the differential equation

$$x\,y'' + (\alpha + 1 - x)\,y' + n\,y = 0,$$

which may be compared with the equation obeyed by the kth derivative of the ordinary Laguerre polynomial,

$$x\,y'' + (k + 1 - x)\,y' + (n - k)\,y = 0,$$

where y = Ln[k](x) for this equation only.

In Sturm–Liouville form the differential equation is

$$\frac{d}{dx}\left(x^{\alpha+1} e^{-x}\, y'\right) = -n\, x^{\alpha} e^{-x}\, y,$$

which shows that Ln(α) is an eigenvector for the eigenvalue n.
The generalized Laguerre polynomials are orthogonal over [0, ∞) with respect to the measure with weighting function xα e−x:[10]

$$\int_0^\infty x^{\alpha} e^{-x}\, L_n^{(\alpha)}(x)\, L_m^{(\alpha)}(x)\, dx = \frac{\Gamma(n+\alpha+1)}{n!}\, \delta_{n,m},$$
which can be derived from the Rodrigues formula by repeated integration by parts.
If X is a random variable with the Gamma distribution of shape α + 1 and scale 1, the orthogonality relation can be written as

$$E\left[L_n^{(\alpha)}(X)\, L_m^{(\alpha)}(X)\right] = \binom{n+\alpha}{n}\, \delta_{n,m}.$$
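A small numerical check of the orthogonality relation (a sketch added here, assuming SciPy's generalized Gauss–Laguerre routines):

```python
import numpy as np
from scipy.special import roots_genlaguerre, eval_genlaguerre, gammaln

alpha = 1.5
# Nodes/weights for the weight x^alpha e^{-x} on (0, inf).
x, w = roots_genlaguerre(25, alpha)

def inner(n, m):
    """<L_n^(a), L_m^(a)> with respect to the weight x^alpha e^{-x}."""
    return np.sum(w * eval_genlaguerre(n, alpha, x) * eval_genlaguerre(m, alpha, x))

# Off-diagonal entries vanish; diagonal ones equal Gamma(n+alpha+1)/n!.
print(inner(3, 5))                                                # ~ 0
print(inner(4, 4), np.exp(gammaln(4 + alpha + 1) - gammaln(5)))   # the two agree
```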
The associated, symmetric kernel polynomial has representations via the Christoffel–Darboux formula. Moreover, Turán's inequalities can be derived here; for the simple Laguerre polynomials they read

$$L_n(x)^2 - L_{n-1}(x)\,L_{n+1}(x) \ge 0, \qquad x \ge 0.$$
The following integral is needed in the quantum mechanical treatment of the hydrogen atom:

$$\int_0^\infty x^{\alpha+1} e^{-x} \left[L_n^{(\alpha)}(x)\right]^2 dx = \frac{\Gamma(n+\alpha+1)}{n!}\,(2n + \alpha + 1).$$
Series expansions
Let a function have the (formal) series expansion

$$f(x) = \sum_{i=0}^{\infty} f_i^{(\alpha)} L_i^{(\alpha)}(x), \qquad f_i^{(\alpha)} = \frac{i!}{\Gamma(i+\alpha+1)} \int_0^\infty L_i^{(\alpha)}(x)\, f(x)\, x^{\alpha} e^{-x}\, dx.$$

The series converges in the associated Hilbert space L2[0, ∞) if and only if

$$\|f\|^2 = \sum_{i=0}^{\infty} \frac{\Gamma(i+\alpha+1)}{i!}\, \left|f_i^{(\alpha)}\right|^2 < \infty.$$
Further examples of expansions
Monomials are represented as

$$\frac{x^n}{n!} = \sum_{i=0}^{n} (-1)^i \binom{n+\alpha}{n-i} L_i^{(\alpha)}(x),$$
while binomials have the parametrization
This leads directly to

$$e^{-\gamma x} = (1+\gamma)^{-\alpha-1} \sum_{i=0}^{\infty} \left(\frac{\gamma}{1+\gamma}\right)^{i} L_i^{(\alpha)}(x), \qquad \gamma > -\tfrac{1}{2},$$

for the exponential function. The incomplete gamma function has the representation
In quantum mechanics
In quantum mechanics the Schrödinger equation for the hydrogen-like atom is exactly solvable by separation of variables in spherical coordinates. The radial part of the wave function is a (generalized) Laguerre polynomial.[11]
Multiplication theorems
Erdélyi gives the following two multiplication theorems.[12]
Relation to Hermite polynomials
The generalized Laguerre polynomials are related to the Hermite polynomials:

$$H_{2n}(x) = (-1)^n\, 2^{2n}\, n!\; L_n^{(-1/2)}(x^2),$$
$$H_{2n+1}(x) = (-1)^n\, 2^{2n+1}\, n!\; x\, L_n^{(1/2)}(x^2),$$

where the Hn(x) are the Hermite polynomials based on the weighting function exp(−x²), the so-called "physicist's version."
Because of this, the generalized Laguerre polynomials arise in the treatment of the quantum harmonic oscillator.
Relation to hypergeometric functions
The Laguerre polynomials may be defined in terms of hypergeometric functions, specifically the confluent hypergeometric functions, as

$$L_n^{(\alpha)}(x) = \binom{n+\alpha}{n} M(-n, \alpha+1, x) = \frac{(\alpha+1)_n}{n!}\; {}_1F_1(-n;\, \alpha+1;\, x),$$

where (a)n is the Pochhammer symbol (which in this case represents the rising factorial).
Hardy–Hille formula
The generalized Laguerre polynomials satisfy the Hardy–Hille formula[13][14]

$$\sum_{n=0}^{\infty} \frac{n!\, t^n}{\Gamma(n+\alpha+1)}\, L_n^{(\alpha)}(x)\, L_n^{(\alpha)}(y) = \frac{1}{(1-t)\,(xyt)^{\alpha/2}} \exp\left(-\frac{(x+y)\,t}{1-t}\right) I_\alpha\left(\frac{2\sqrt{xyt}}{1-t}\right),$$

where the series on the left converges for α > −1 and |t| < 1. Using the identity

$$I_\alpha(z) = \frac{(z/2)^{\alpha}}{\Gamma(\alpha+1)}\; {}_0F_1\left(;\, \alpha+1;\, \tfrac{z^2}{4}\right)$$

(see generalized hypergeometric function), this can also be written as a ₀F₁ series.
This formula is a generalization of the Mehler kernel for Hermite polynomials, which can be recovered from it by using the relations between Laguerre and Hermite polynomials given above.
See also
• Transverse mode, an important application of Laguerre polynomials to describe the field intensity within a waveguide or laser beam profile.
1. ^ N. Sonine (1880). "Recherches sur les fonctions cylindriques et le développement des fonctions continues en séries". Math. Ann. 16 (1): 1–80. doi:10.1007/BF01459227.
2. ^ A&S p. 781
3. ^ A&S p.509
4. ^ A&S p.510
5. ^ A&S p. 775
6. ^ Szegő, p. 198.
7. ^ D. Borwein, J. M. Borwein, R. E. Crandall, "Effective Laguerre asymptotics", SIAM J. Numer. Anal., vol. 46 (2008), no. 6, pp. 3285-3312 doi:10.1137/07068031X
8. ^ A&S equation (22.12.6), p. 785
9. ^ Koepf, Wolfram (1997). "Identities for families of orthogonal polynomials and special functions". Integral Transforms and Special Functions. 5 (1–2): 69–102. doi:10.1080/10652469708819127.
10. ^ "Associated Laguerre Polynomial".
11. ^ Ratner, Mark A.; Schatz, George C. (2001). Quantum Mechanics in Chemistry. Prentice Hall. pp. 90–91. ISBN 0-13-895491-7.
12. ^ C. Truesdell, "On the Addition and Multiplication Theorems for the Special Functions", Proceedings of the National Academy of Sciences, Mathematics, (1950) pp.752-757.
13. ^ Szegő, p. 102.
14. ^ W. A. Al-Salam (1964), "Operational representations for Laguerre and other polynomials", Duke Math J. 31 (1): 127-142.
In this post I delve into the current view of what happens to a wave function as it interacts with its environment, and tell the story of how I anticipated this view around 1971 or 1972, some 20 years before a crucial paper was published in 1991. If you have a non-technical background, I hope you can skim through without too much puzzlement. In the next post I will revert to writing which is entirely non-mathematical.
Back around 1970, when I first became interested in the "collapse of the wave function", I noticed, at some point while thinking about the situation, that this collapse entailed more than simply the materialization of, say, a particle in accord with its probability distribution. For the wave function is more than a probability distribution. It contains, in addition, information which allows it to be transformed into a new "representation" in which it gives a probability distribution for a different physical quantity. For example, if we have a wave function from which we can find a probability for a particle's position, we can transform this wave function into a new form from which we can find the distribution for the particle's energy. With the "collapse", however, one loses the information that would allow such transformations. One loses the "phases" of the wave function. To understand what is meant by phases I need to point out that a complex number can be viewed as a little arrow, lying in a plane. The length of the arrow can represent a positive real number, i.e. a probability. The arrow, lying in its plane, can point in any direction through 360 degrees, and the angle at which it points is called its "phase". A wave function consists of many complex numbers, each of which can be looked upon as a little arrow with magnitude and phase. Looking at an entire collection of these little arrows, one can consider their lengths (actually, lengths squared) as a probability distribution for one physical quantity, and the pattern of their phases as additional information about other physical quantities.
Collapse occurs when a quantum system interacts with its environment. With the “collapse”, one of the probabilities becomes realized; and ALL of the phases simply disappear from the record. The information associated with the phases’ pattern goes missing. These days people have realized something I missed back in the 1970’s: the information contained in the phases doesn’t actually go missing, but leaks into the environment where it can show up, giving us information about the quantum system of interest. People no longer talk much about collapse, concentrating on the disappearance of a system’s phase pattern, which may or may not actually be linked to collapse. The modern buzz word for this possible way-station to collapse is “decoherence”. The phase pattern is “coherent” and when it goes away, we have “quantum decoherence”. Back in 1971, long before the word “decoherence” had ever appeared in this context, I wondered if there might be a way of calculating how the phases go away as a quantum system interacts with its environment, and, through blind luck, came to realize that there was indeed the possibility of such a calculation. In reading various papers about “measurement theory” I came across an essay by Eugene Wigner, a Nobel prize winning theorist, who pointed out that a quantum expression called “the density matrix” might possibly throw some light on the whole “measurement-collapse” situation because with the density matrix phases went away. Wigner said, however, that this possibility was of no use, because the density matrix belongs not to a single quantum system, but always to an “ensemble”. An ensemble is a collection of a number of similar systems, while the “collapse” happens with a single system. So, the essay’s conclusion was: forget about the density matrix as being of any help in understanding what was going on. I noted what Wigner had said and thought no more about it until I was browsing in a quantum text by Lev Landau and Evgeny Lifshitz, translated ten or so years earlier from the Russian. There on pages 35 – 38 was a definition and discussion of the density matrix; and the definition was definitely for a single system interacting with its environment. I remembered that Lev Landau had independently defined the density matrix along with von Neumann in 1927. Perhaps Landau’s version had simply been forgotten. In any case, being defined for a single system, to me it showed great promise for calculating how wave function phases could disappear. (See Landau and Lifshitz, Quantum Mechanics: Non-Relativistic Theory, First English Edition, 1958.)
Lev Landau was still another of the geniuses associated with the development of quantum mechanics. Born in January 1908, in Baku, Azerbaijan, of Russian parents, he was enough younger than the Pauli–Heisenberg generation that he missed out on the first 1925–1926 wave of the quantum revolution. By the time he was 19 or so he had caught up enough to independently define a version of the density matrix. Later he spent time in Europe, visiting the Bohr institute on several occasions between 1929 and 1931. A wonderful book about that time period is Faust in Copenhagen: A Struggle for the Soul of Physics by Gino Segrè. Dr. Segrè is a neutrino physicist who is also a talented writer. Warning! If you're not a physics buff by now, this book might well make you into one. Gino Segrè's uncle was Emilio Segrè, a famous member of Fermi's group in Italy and later one of the atomic bomb developers. Talking about Landau, known by his nickname, Dau, Segrè says, "Dau, who became Russia's greatest theoretical physicist and one of the twentieth century's major scientific figures was never intimidated by anybody, …". "As the Dutch physicist, Casimir remembered, 'Landau's was perhaps the most brilliant and quickest mind I have ever come across.' This is high praise from someone who knew well both Heisenberg and Pauli."
With the Landau–Lifshitz definition in hand I tried to see if I could prove that the right sort of environmental interaction could make the phases of the wave function fade away. The density matrix for discrete states is a square array of numbers with the real probabilities running down the main diagonal from upper left to lower right. The off-diagonal elements are complex and contain the relevant phase information. (The matrix is Hermitian, though that fact is somewhat irrelevant in the context of interest here.)
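A minimal sketch (mine, not from the original papers) of this structure for a single two-state system: the diagonal of the density matrix holds the probabilities, and the off-diagonal element carries the phase:

```python
import numpy as np

# Pure state a|0> + b|1>; the phase difference between a and b is the "arrow angle".
a = 1 / np.sqrt(2)
b = np.exp(1j * np.pi / 3) / np.sqrt(2)   # relative phase of 60 degrees
psi = np.array([a, b])

rho = np.outer(psi, psi.conj())           # density matrix |psi><psi|
print(np.real(np.diag(rho)))              # [0.5 0.5]: probabilities on the diagonal
print(rho[0, 1])                          # off-diagonal element carries the phase info
```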
About the time I started working on the matrix there was a talented graduate student, Yashwant Shitoot from India, at Auburn who needed a thesis topic, so I suggested that he work on the problem for his Master's thesis, which he did. Shitoot and I came up with somewhat different approaches to the problem. Yashwant observed that in practice the environment potentials could not be exactly specified and thus the off-diagonal elements of the matrix could be considered to be a probability distribution arising from the many unknown environmental potentials. Citing the "central limit theorem" he argued that these distributions were normal distributions and would vanish over time. (See Yashwant Anant Shitoot, Theory of Measurement, M.S. Thesis, Auburn University, March, 1973.) The probabilities in Shitoot's approach are classical probabilities arising from our ignorance, not quantum probabilities arising from the "mind of God". In my approach I visualized the wave function in a Stern-Gerlach experiment. The classic Stern-Gerlach experiment passes a beam of silver atoms in vacuum between unsymmetrical poles of a magnet. Such poles generate a non-uniform magnetic field which exerts a force on a silver atom, which has a magnetic moment due to the spin of its outer electron. A silver atom wave function splits into a superposition of two spatially separated parts representing the two spin possibilities, spin-up or spin-down. (This splitting is similar to what occurs with Schrödinger's unhappy cat.) After passing through the magnet poles the silver beam can either impinge on a barrier where it forms two spots of silver or, instead, come to a barrier with a slit positioned where, say, the upper silver dot would be. In the latter case some of the silver atoms form a dot below and others pass through the slit. The atoms that pass through the slit all have their spin up when passed through a second pole piece oriented like the first, or confirm the way that spin ½ works if the second pole piece is tilted. My interest, however, was not with the spin of the silver atoms but, instead, with a calculation of how the superposition changes as one part of it impinges on the atoms of the barrier. To attack the calculation, I considered a silver atom as the "system" and the atoms of the barrier as the "environment". In quantum mechanics there are not only representations, but "pictures". In the Schrödinger picture, the time dependence is carried by the wave function (state vector) while in the Heisenberg picture the time dependence is carried by the quantum mechanical operators. Furthermore, there is a third picture, called the interaction picture, where one ends up with the time dependence in the interaction part when a system and its environment interact. Using the interaction picture and a model potential consisting of a series of step functions to simulate the atoms of the barrier, I could easily show that the off-diagonal elements of the density matrix "gradually" went to zero. Of course, I'm being facetious in using the word "gradually" because the time involved here is of the order of 10⁻¹⁴ seconds. However, in one's imagination one can split this time into thousands or millions of increments. Then the change is indeed gradual. Or one can imagine a different physical situation where a quantum particle traveling through an imperfect vacuum encounters the field from a stray atom from time to time.
The essential point is that the quantum decoherence is not instantaneous, and one can imagine situations where the time interval is experimentally significant. (See below.)
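Here is a toy version (my own illustration, not Shitoot's actual calculation) of the phase-randomization idea: give each environment history a small random phase kick per encounter, and the ensemble-averaged off-diagonal element decays smoothly toward zero while the diagonal probabilities are untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
n_env = 5000                       # ensemble of unknown environment histories
kicks = rng.normal(0.0, 0.1, size=(n_env, 400))  # small random phase per encounter

phase = np.cumsum(kicks, axis=1)   # accumulated phase after each encounter
rho01 = 0.5 * np.exp(1j * phase).mean(axis=0)    # averaged off-diagonal element

for t in (0, 50, 100, 200, 399):
    print(t, abs(rho01[t]))        # |rho01| decays smoothly: "gradual" decoherence
```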
There are two problems with my approach. First, I failed to find a proof that used a realistic interaction potential. Nevertheless, what I did was highly suggestive and over the years gave me the satisfaction of feeling that I understood what was happening whenever I encountered quantum puzzles involving collapse. In particular, the model calculation showed how an interaction of one piece of a superposition would affect another piece where there was no interaction. The second problem I had at the time was how to interpret the physical situation when the off-diagonal elements of the density matrix had gone only part way to zero. In particular, what was the physical meaning of the situation when a particle passed by a weak interaction potential into an area free from interaction, so that any decoherence was only partial? I kept thinking about this second difficulty over the years and at some point, an answer dawned on me. (See below.)
In spite of these difficulties, around 1973 I wrote up a paper and sent it to the Physical Review where it was summarily rejected because I had pointed out no ramifications of the calculation which could be experimentally tested. I didn’t follow up for a number of reasons: I had no answer to the second difficulty mentioned above, I was and am somewhat lazy, and my life was falling apart at the time. I left Auburn in 1974 and my only copy of the paper has disappeared.
Currently, quantum decoherence is of interest because it is highly relevant to quantum computing. In a quantum computer a collection of "qubits" which act like spin ½ particles are put into a quantum state where they carry out a calculation provided that they do not "decohere" during the time necessary for the calculation to take place. This means that the qubit collection must be as isolated as possible from any stray potentials. However, it is likely to be impossible to completely isolate the collection. What happens during a partial decoherence? Here is my answer. During an encounter with a stray potential the off-diagonal terms of the density matrix of the system are slightly smaller. One can get a handle on this situation by splitting the density matrix into a linear superposition of two density matrices, one with zero off-diagonal elements and a second which retains the full off-diagonal structure of the coherent state. Let the two coefficients of the superposition be c₁ and c₂. Since density matrices combine as a classical mixture, these weights are themselves probabilities: c₁ is the probability that decoherence has occurred and c₂ is the probability that the calculation is OK. I have applied a probability interpretation to the situation, a satisfying idea where quantum physics is concerned. In many cases a quantum calculation seeks an answer which takes too long to find with a conventional computer, but which is easily tested if found. With a quantum computer subject to decoherence one simply repeats the calculation until the answer shows up. Provided the isolation of the system is good, this should not require many repeats. Whether or not my ideas about partial decoherence are valid, it is clear that the entire situation about quantum measurement and decoherence will become clear as quantum computers are developed.
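A sketch of the splitting just described (my illustration; the state, the shrink factor, and the variable names are invented for the example):

```python
import numpy as np

# Partially decohered qubit: off-diagonals of the pure state shrunk by lam.
lam = 0.8
rho_pure = np.array([[0.5, 0.5], [0.5, 0.5]])        # fully coherent |+><+|
rho = np.array([[0.5, 0.5*lam], [0.5*lam, 0.5]])     # after weak environmental contact

rho_diag = np.diag(np.diag(rho_pure))                # fully decohered part
c1, c2 = 1 - lam, lam                                # convex weights, sum to 1
print(np.allclose(rho, c1 * rho_diag + c2 * rho_pure))  # True
# c1 = probability the qubit has decohered; c2 = probability the coherent
# computation survives, so repeating the run ~1/c2 times recovers an answer.
```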
To close this post, I want to consider my conscious motivations in talking about quantum decoherence and my engagement with it. One motivation is that this is an interesting story which goes a long way towards answering the puzzles of quantum measurement, decoherence and collapse. I believe that this history makes clear that the long-standing difficulties in this area, which have led to much controversy, are puzzles in the Kuhnian sense and require no radical revolution involving quantum mechanics. A second motivation is personal. Although I certainly deserve no credit whatsoever in the story of how quantum decoherence came into being, I did have an understanding of the situation before the march of science explicated it, and it gives me satisfaction to make my involvement public. A final motivation involves my hopes for this blog. I hope the story of my involvement with physics makes clear that I was a hard-headed, skeptical practitioner of a basic science and that in promoting Western Zen I'm dedicated to a superstition-free insight that provides a unifying sub-structure for all of Western, and indeed, non-Western world thought.
QM 1
Before completing this post, I need to acknowledge that my goal in writing about modern physics was to create a milieu for talking more about Western Zen. However, as I've proceeded, the goal has somewhat changed. I want you, as a reader, to become, if you aren't already, a physics buff, much in the way I became a history buff after finding history incredibly boring and hateful throughout high school and college. The apotheosis of my history disenchantment came at Stanford in a course taught by a highly regarded historian. The course was entitled "The High Middle Ages" and I actually took it as an elective, thinking that it was likely to be fascinating. It was only gradually over the years that I realized that history at its best, although based on factual evidence, consists of stories full of meaning, significance and human interest. Turning back to physics, I note that even after more than a hundred years of revolution, physics still suffers a hangover from 300 years of its classical period, in which it was characterized by a supposedly passionless objectivity and a mundane view of reality. In fact, modern physics can be imagined as a scientific fantasy, a far-flung poetic construction from which equations can be deduced and the fantasy brought back to earth in experiments and in the devices of our age. When I use the word "fantasy" I do not mean to suggest any lack of rigorous or critical thinking in science. I do want to imply a new expansion of what science is about, a new awareness, hinting at a "reality" deeper than what we have ever imagined in the past. However, to me even more significant than a new reality is the fact that the Quantum Revolution showed that physics can never be considered absolute. The latest and greatest theories are always subject to a revolution which undermines the metaphysics underlying the theory. Who knows what the next revolution will bring? Judging from our understanding of the physics of our age, a new revolution will not change the feeling that we are living in a universe which is an unimaginable miracle.
In what follows I've included formulas and mathematics whose significance can easily be talked about without going into the gory details. The hope is that these will be helpful in clarifying the excitement of physics and the metaphysical ideas lying behind it. Of course, the condensed treatment here can be further explicated in the books I mention and in Wikipedia.
My last post, about the massive revolution in physics of the early 20th century, ended by describing the situation in early 1925 when it became abundantly clear, in the words of Max Jammer (Jammer, p. 196), that physics of the atom was "a lamentable hodgepodge of hypotheses, principles, theorems, and computational recipes rather than a logical consistent theory." Metaphysically, physicists clung to classical ideas such as particles whose motion consisted of trajectories governed by differential equations, and waves as material substances spread out in space and governed by partial differential equations. Clearly these ideas were logically inconsistent with experimental results, but the deep classical metaphysics, refined over 300 years, could not be abandoned until there was a consistent theory which allowed something new and different.
Werner Heisenberg, born Dec 5, 1901, was 23 years old in the summer of 1925. He had been a brilliant student at Munich studying with Arnold Sommerfeld, had recently moved to Göttingen, a citadel of math and physics, and had made the acquaintance of Bohr in Copenhagen, where he became totally enthralled with doing something about the quantum mess. He noted that the electron orbits of the current theory were purely theoretical constructs and could not be directly observed. Experiments could measure the wavelengths and intensity of the light atoms gave off, so following the Zeitgeist of the times as expounded by Mach and Einstein, Heisenberg decided to try to make a direct theory of atomic radiation. One of the ideas of the old quantum theory that Heisenberg used was Bohr's "Correspondence" principle, which notes that as electron orbits become large along with their quantum numbers, quantum results should merge with the classical. Classical physics failed only when things became small enough that Planck's constant h became significant. Bohr had used this idea in obtaining his formula for the hydrogen atom's energy levels. In various "old quantum" results the Correspondence Principle was always used, but in different, creative ways for each situation. Heisenberg managed to incorporate it into his ultimate vector-matrix construction once and for all. Heisenberg's first paper in the Fall of 1925 was jumped on by him and many others and developed into a coherent theory. The new results eliminated many slight discrepancies between theory and experiment, but more important, showed great promise during the last half of 1925 of becoming an actual logical theory.
In January, 1926, Erwin Schrödinger published his first great paper on wave mechanics. Schrödinger, working from classical mechanics, but following de Broglie's idea of "matter waves", and using the Correspondence Principle, came up with a wave theory of particle motion, a partial differential equation which could be solved for many systems such as the hydrogen atom, and which soon duplicated Heisenberg's new results. Within a couple of months Schrödinger closed down a developing controversy by showing that his and Heisenberg's approaches, though based on seemingly radically opposed ideas, were, in fact, mathematically isomorphic. Meanwhile, starting in early 1926, P. A. M. Dirac introduced an abstract algebraic operator approach that went deeper than either Heisenberg or Schrödinger. A significant aspect of Dirac's genius was his ability to cut through mathematical clutter to a simpler expression of things. I will dare here to be specific about what I'll call THE fundamental quantum result, hoping that the simplicity of Dirac's notation will enable those of you without a background in advanced undergraduate mathematics to get some of the feel and flavor of QM.
In ordinary algebra a new level of mathematical abstraction is reached by using letters such as x,y,z or a,b,c to stand for specific numbers, numbers such as 1,2,3 or 3.1416. Numbers, if you think about it, are already somewhat abstract entities. If one has two apples and one orange, one has 3 objects and the “3” doesn’t care that you’re mixing apples and oranges. With algebra, If I use x to stand for a number, the “x” doesn’t care that I don’t know the number it stands for. In Dirac’s abstract scheme what he calls c-numbers are simply symbols of the ordinary algebra that one studies in high school. Along with the c-numbers (classic numbers) Dirac introduces q-numbers (quantum numbers) which are algebraic symbols that behave somewhat differently than those of ordinary algebra. Two of the most important q-numbers are p and s, where p stands for the momentum of a moving particle, mv, mass times velocity in classical physics, and s stands for the position of the particle in space. (I’ve used s instead of the usual q for position to try avoid a confusion with the q of q-number.) Taken as q-numbers, p and s satisfy
ps − sp = h/(2πi)
which I’ll call the Fundamental Quantum Result in which h is Planck’s constant and i the square root of -1. Actually, Dirac, observing that in most formulas or equations involving h, it occurs as h/2π, defined what is now called h bar or h slash using the symbol ħ = h/2π for the “reduced” Planck constant. If one reads about QM elsewhere (perhaps in Wikipedia) one will see ħ almost universally used. Rather than the way I’ve written the FQR above, it will appear as something like
pq − qp = ħ/i
where I’ve restored the usual q for position. What this expression is saying is that in the new QM if one multiplies something first by position q and then by momentum p, the result is different from the multiplications done in the opposite order. We say these q-numbers are non-commutative, the order of multiplication matters. Boldface type is used because position and momentum are vectors and the equation actually applies to each of their 3 components. Furthermore, the FQR tells us exact size of the non-commute. In usual human sized physical units ħ is .00…001054… where there are 33 zeros before the 1054. If we can ignore the size of ħ and set it to zero, p and q, then commute, can be considered c-numbers and we’re back to classical physics. Incidentally, Heisenberg, Born and Jordan obtained the FQR using p and q as infinite matrices and it can be derived also using Schrödinger’s differential operators. It is interesting to note that by using his new abstract algebra, Dirac not only obtained the FQR but could calculate the energy levels of the hydrogen atom. Only later did physicists obtain that result using Heisenberg’s matrices. Sometimes the deep abstract leads to surprisingly concrete results.
For most physicists in 1926, the big excitement was Schrödinger’s equation. Partial differential equations were a familiar tool, while matrices were at that time known mainly to mathematicians. The “old quantum theory” had made a few forays into one or another area leaving the fundamentals of atomic physics and chemistry pretty much in the dark. With Schrödinger’s equation, light was thrown everywhere. One could calculate how two hydrogen atoms were bound in the hydrogen molecule. Then using that binding as a model one could understand various bindings of different molecules. All of chemistry became open to theoretic treatment. The helium atom with its two electrons couldn’t be dealt with at all by the old quantum theory. Using various approximation methods, the new theory could understand in detail the helium atom and other multielectron atoms. Electrons in metals could be modeled with the Schrödinger’s equation, and soon the discovery of the neutron opened up the study of the atomic nucleus. The old quantum theory was helpless in dealing with particle scattering where there were no closed orbits. Such scattering was easily accommodated by the Schrödinger equation though the detailed calculations were far from trivial. Over the years quantum theory revealed more and more practical knowledge and most physicists concentrated on experiments and theoretic calculations that led to such knowledge with little concern about what the new theory meant in terms of physical reality.
However, back in the first few years after 1925 there was a great deal of concern about what the theory meant and the question of how it should be interpreted. For example, under Schrödinger's theory an electron was represented by a "cloud" of numbers which could travel through space or surround an atom's nucleus. These numbers, called the wave function and typically named ψ, were complex, of the form a + ib, where i is the square root of -1. By multiplying such a number by its conjugate a – ib, one gets a positive (strictly speaking, non-negative) number which can perhaps be physically interpreted. Schrödinger himself tried to interpret this "real" cloud as a negative electric charge density, a blob of negative charge. For a free electron, outside an atom, Schrödinger imagined that the electron wave could form what is called a "wave packet", a combination of different frequencies that would appear as a small moving blob which could be interpreted as a particle. This idea definitely did not fly. There were too many situations where the waves were spread out in space before an electron suddenly made its appearance as a particle. The question of what ψ meant was resolved by Max Born (see Wikipedia), starting with a paper in June, 1926. Born interpreted the non-negative numbers ψ*ψ (ψ* being the complex conjugate of the ψ numbers) as a probability distribution for where the electron might appear under suitable physical circumstances. What these physical circumstances are and the physical process of the appearance are still not completely resolved. Later in this or another blog post I will go into this matter in some detail. In 1926 Born's idea made sense of experiment and resolved the wave-particle duality of the old quantum theory, but at the cost of destroying classical concepts of what a particle or wave really was. Let me try to explain.
A simple example of a classical probability distribution is that of tossing a coin and seeing if it lands heads or tails. The probability distribution in this case is the two numbers, ½ and ½, the first being the probability of heads, the second the probability of tails. The two probabilities add up to 1, which represents certainty in probability theory. (Unlike the college students who are trying to decide whether to go drinking, go to the movies or to study, I ignore the possibility that the coin lands on its edge without falling over.) With the wave function product ψ*ψ, calculus gives us a way of adding up all the probabilities, and if they don't add up to 1, we simply define a new ψ by dividing by the square root of the sum we obtained. (This is called "normalizing" the wave function.) Besides the complexity of the math, however, there is a profound difference between the coin and the electron. With the coin, classical mechanics tells us in theory, and perhaps in practice, precisely what the position and orientation of the coin is during every instant of its flight; and knowing about the surface the coin lands on allows us to predict the result of the toss in advance. The classical analogy for the electron would be to imagine it is like a bb moving around inside the non-zero area of the wave function, ready to show up when conditions are propitious. With QM this analogy is false. There is no trajectory for the electron, there is no concept of it having a position, before it shows up. Actually, it is only fairly recently that the "bb in a tin can model" has been shown definitively to be false. I will discuss this matter later talking briefly about Bell's theorem and "hidden" variable ideas. However, whether or not an electron's position exists prior to its materialization, it was simply the concept of probability that Einstein and Schrödinger, among others, found unacceptable. As Einstein famously put it, "I can't believe God plays dice with the universe."
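A minimal numerical sketch (mine) of that normalization step, for a wave function sampled on a grid:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 + 2j * x)            # some unnormalized complex wave function

total = np.sum(np.abs(psi)**2) * dx     # "adding up all the probabilities"
psi /= np.sqrt(total)                   # divide by the SQUARE ROOT of the sum
print(np.sum(np.abs(psi)**2) * dx)      # 1.0 -- the probabilities now sum to one
```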
Max Born, who introduced probability into fundamental physics, was a distinguished physics professor in Göttingen and Heisenberg's mentor after the latter first came to Göttingen from Munich in 1922. Heisenberg got the breakthrough for his theory while escaping from hay fever in the spring of 1925, walking the beaches of the bleak island of Helgoland in the North Sea off Germany. Returning to Göttingen, Heisenberg showed his work to Born, who recognized the calculations as being matrix multiplication and who saw to it that Heisenberg's first paper was immediately published. Born then recruited Pascual Jordan from the math department at Göttingen and the three wrote a famous follow-up paper, Zur Quantenmechanik II, Nov, 1925, which gave a complete treatment of the new theory from a matrix mechanics point of view. Thus, Born was well positioned to come up with his idea of the nature of the wave function.
Quantum Mechanics came into being during the amazingly short interval between mid-1925 and the end of 1926. As far as the theory went, only "mopping up" operations were left. As far as the applications were concerned there was a plethora of "low hanging fruit" that could be gathered over the years with Schrödinger's equation and Born's interpretation. However, as 1927 dawned, Heisenberg and many others were concerned with what the theory meant, with fears that it was so revolutionary that it might render ambiguous the meaning of all the fundamental quantities on which both the new QM and old classical physics depended. In 1925 Heisenberg began his work on what became the matrix mechanics because he was skeptical about the existence of Bohr orbits in atoms, but his skepticism did not include the very concept of "space" itself. As QM developed, however, Heisenberg realized that it depended on classical variables such as position and momentum, which appeared not only in the pq commutation relation but as basic variables of the Schrödinger equation. Had the meaning of "position" itself changed? Heisenberg realized that earlier, with Einstein's Special Relativity, the meaning of both position and time had indeed changed. (Newton assumed that coordinates in space and the value of time were absolutes, forming an invariable lattice in space and an absolute time which marched at an unvarying pace. Einstein's theory was called Relativity because space and time were no longer absolutes. Space and time lost their "ideal" nature and became simply what one measured in carefully done experiments. Curiously enough, though Einstein showed that results of measuring space and time depended on the relative motion of different observers, these quantities changed in such an odd way that measurements of the speed c of light in vacuum came out precisely the same for all observers. There was a new absolute. A simple exposition of special relativity is N. David Mermin's Space and Time in Special Relativity.)
The result of Heisenberg’s concern and the thinking about it is called the “Uncertainty Principle”. The statement of the principle is the equation ΔqΔp = ħ. The variables q and p are the same q and p of the Fundamental Quantum Relation and, indeed, it is not difficult to derive the uncertainty principle from the FQR. The symbol delta, Δ, when placed in front of a variable means a difference, that is an interval or range of the variable. Experimentally, a measurement of a variable quantity like position q is never exact. The amount of the uncertainty is Δq. The uncertainty equation above thus says that the uncertainty of a particle’s position times the uncertainty of the same particle’s momentum is ħ. In QM what is different from an ordinary error of measurement is that the uncertainty is intrinsic to QM itself. In a way, this result is not all that surprising. We’ve seen that the wave function ψ for a particle is a cloud of numbers. Similarly, a transformed wave function for the same particle’s momentum is a similar cloud of numbers. The Δ’s are simply a measure of the size of these two clouds and the principle says that as one becomes smaller, the other gets larger in such a way that their product is h bar, whose numerical value I’ve given above.
In fact, back in 1958 when I was in Eikenberry's QM course and we derived the uncertainty relation from the FQR, I wondered what the big deal was. I was aware that the uncertainty principle was considered rather earthshaking but didn't see why it should be. What I missed is what Heisenberg's paper really did. The equation I've written above is pure theory. Heisenberg considered the question, "What if we try to do experiments that actually measure the position and momentum. How does this theory work? What is the physics? Could experiments actually disprove the theory?" Among other experimental set-ups Heisenberg imagined a microscope that used electromagnetic rays of increasingly short wavelengths. It was well known classically by the mid-nineteenth century that the resolution of a microscope depends on the wavelength of the light it uses. Light is an electromagnetic (em) wave, so one can imagine em radiation of such a short wavelength that a microscope could view a particle, regardless of how small, reducing Δq to as small a value as one wished. However, by 1927 it was also well known, because of the Compton effect that I talked about in the last post, that such em radiation, called x-rays or gamma rays, consisted of high energy photons which would collide with the electron, giving it a recoil momentum whose uncertainty, Δp, turns out to satisfy ΔqΔp ≈ ħ. Heisenberg thus considered known physical processes which failed to overturn the theory. The sort of reasoning Heisenberg used is called a "thought" experiment because he didn't actually try to construct an apparatus or carry out a "real" experiment. Before dismissing thought experiments as being hopelessly hypothetical, one must realize that any real experiment in physics, or in any science for that matter, begins as a thought experiment. One imagines the experiment and then figures out how to build an apparatus (if appropriate) and collect data. In fact, as a science progresses, many experiments formerly expressed only in thought turn real as the state of the art improves.
Although the uncertainty principle is earthshaking enough that it helped confirm the skepticism of two of the main architects of QM, namely, Einstein and Schrödinger, one should note that, in practice, because of the small size of ħ, the garden variety uncertainties which arise from the "apparatus" measuring position or momentum are much larger than the intrinsic quantum uncertainties. Furthermore, the principle does not apply to c-numbers such as e, the fundamental electron or proton charge, c, the speed of light in vacuum, and h, Planck's constant. There is an interesting story here about a recent (Fall, 2018) redefinition of physical units which one can read about on line. Perhaps I'll have more to say about this subject in a later post. For now, I'll just note that starting on May 20, 2019, Planck's constant will be (or has been) defined as having an exact value of 6.62607015×10⁻³⁴ joule-seconds. There is zero uncertainty in this new definition, which may be used to define and measure the mass of the kilogram to higher accuracy and precision than possible in the past using the old standard, a platinum-iridium cylinder kept closely guarded near Paris. In fact, there is nothing muddy or imprecise about the value of many quantities whose measurement intimately involves QM.
During the years after 1925 there was at least one more area which in QM was puzzling to say the least; namely, what has been called “the collapse of the wave function.” Involved in the intense discussions over this phenomenon and how to deal with it was another genius I’ve scarcely mentioned so far; namely Wolfgang Pauli. Pauli, a year older than Heisenberg, was a year ahead of him in Munich studying under Sommerfeld, then moved to Göttingen, leaving just before Heisenberg arrived. Pauli was responsible for the Pauli Exclusion Principle based on the concept of particle spin which he also explicated. (see Wikipedia) He was in the thick of things during the 1925 – 1927 time period. Pauli ended up as a professor in Zurich, but spent time in Copenhagen with Bohr and Heisenberg (and many others) formulating what became known as the Copenhagen interpretation of QM. Pauli was a bon vivant and had a witty sarcastic tongue, accusing Heisenberg at one point of “treason” for an idea that he (Pauli) disliked. In another anecdote Pauli was at a physics meeting during the reading of a muddy paper by another physicist. He stormed to his feet and loudly said, “This paper is outrageous. It is not even wrong!” Whether the meeting occurred at a late enough date for Pauli to have read Popper, he obviously understood that being wrong could be productive, while being meaningless could not.
Over the next few years after 1927 Bohr, Heisenberg, and Pauli explicated what came to be called "the Copenhagen interpretation of Quantum Mechanics". It is well worth reading the superb article in Wikipedia about "The Copenhagen Interpretation." One point the article makes is that there is no definitive statement of this interpretation. Bohr, Heisenberg, and Pauli each had slightly different ideas about exactly what the interpretation was or how it worked. However, in my opinion, things are clear enough in practice. The problem QM seems to have has been called the "collapse of the wave function." It is most clearly seen in a double slit interference experiment with electrons or other quantum particles such as photons or even entire atoms. The experiment consists of a plate with two slits, closely enough spaced that the wave function of an approaching particle covers both slits. The spacing is also close enough that the wavelength of the particle, as determined by its energy or momentum, is such that the waves passing through the slits will visibly interfere on the far side. This interference is in the form of a pattern consisting of stripes on a screen or photographic plate. These stripes show up, zebra-like, on a screen, or as dark and light areas on a developed photographic plate. On a photographic plate there is a black dot where a particle has shown up. The striped pattern consists of all the dots made by the individual particles when a large number of particles have passed through the apparatus. What has happened is that the wave function has "collapsed" from an area encompassing all of the stripes to the tiny area of a single dot. One might ask at this point, "So what?" After all, for the idea of a probability distribution to have any meaning, the event for which there is a probability distribution has to actually occur. The wave function must "collapse" or the probability interpretation itself is meaningless. The problem is that QM has no theory whatever for the collapse.
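A toy calculation (my sketch; the wavelength, slit spacing, and screen distance are made-up values) showing where the stripes come from: add the two slit amplitudes and then square, rather than squaring each amplitude separately:

```python
import numpy as np

wavelength, d, L = 5e-7, 2e-5, 1.0     # 500 nm "particles", 20 um slit spacing, 1 m to screen
k = 2 * np.pi / wavelength
y = np.linspace(-0.1, 0.1, 2001)       # positions on the screen

r1 = np.hypot(L, y - d/2)              # path length from slit 1 to each screen point
r2 = np.hypot(L, y + d/2)              # path length from slit 2
psi1 = np.exp(1j * k * r1)             # amplitude through slit 1
psi2 = np.exp(1j * k * r2)             # amplitude through slit 2

fringes = np.abs(psi1 + psi2)**2       # superpose first, then square: stripes
no_fringes = np.abs(psi1)**2 + np.abs(psi2)**2  # "one slit or the other": flat
print(fringes.max(), fringes.min())    # ~4 and ~0: strong interference contrast
```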
One can easily try to make a quantum theory of what happens in the collapse because QM can deal with multi-particle systems such as molecules. One obtains a many-particle version of QM simply by adding the coordinates of the additional particles under consideration to a multi-particle version of the Schrödinger equation. In particular, one can add to the description of a particle which approaches a photographic plate all the molecules in the first few relevant molecular layers of the plate. When one does this, however, one does not get a collapse. Instead the new multi-particle wave function simply includes the molecules of the plate, which are spread out as much as the original wave function of the approaching particle. In fact, the structure of QM guarantees that as one adds new particles, these new particles themselves continue to make an increasingly spread out multi-particle wave function. This result was shown in great detail in 1929 by John von Neumann. However, the idea of von Neumann's result was already generally realized and accepted during the years of the late 1920's when our three heroes and many others were grappling with finding a mechanism to explain the experimental collapse. Bohr's version of the interpretation is simplicity itself. Bohr posits two separate realms, a realm of classical physics governing large scale phenomena, and a realm of quantum physics. In a double slit experiment the photographic plate is classical; the approaching particle is quantum. When the quantum encounters the classical, the collapse occurs.
The Copenhagen interpretation explains the results of a double slit experiment and many others, and is sufficient for the practical development of atomic, molecular, solid state, nuclear and particle physics, which has occurred since the late 1920’s. However, there has been an enormous history of objections, refinements, rejections and alternate interpretations of the Copenhagen interpretation, as one might well imagine. My own first reaction could be expressed as the statement, “I thought that ‘magic’ had been banned from science back in the 17th century. Now it seems to have crept back in.” (At present I take a less intemperate view.) However, one can make many obvious objections to the Copenhagen interpretation as I’ve baldly stated it above. Where, exactly, does the quantum realm become the classical realm? Is this division sharp, or is there an interval of increasing complexity that slowly changes from quantum to classical? Surely QM, like the theory of relativity, actually applies to the classical realm. Or does it?
During the 1930’s Schrödinger used the difficulties with the Copenhagen interpretation to make up the now famous thought experiment called “Schrödinger’s Cat.” Back in the early 1970’s, when I became interested in the puzzle of “collapse” and first heard the phrase “Schrödinger’s Cat”, it was far from famous; so, curious, I looked it up and read the original short article, puzzling out the German. In his thought experiment Schrödinger uses the theory of alpha decay. An alpha particle confined in a radioactive nucleus is forever trapped according to classical physics. QM allows the escape because the alpha particle’s wave function can actually penetrate the barrier which classically keeps it confined. Schrödinger imagines a cat imprisoned in a cage containing an infernal apparatus (Höllenmaschine) which will kill the cat if triggered by an alpha decay. If one applies a multi-particle Schrödinger equation to the alpha’s creeping wave function as it encounters the trigger of the “Maschine”, its internals, and the cat, the multi-particle wave function ends up containing a “superposition” (i.e. a linear combination) of a dead and a live cat. Schrödinger makes no further comment, leaving it to the reader to realize how ridiculous this all is. Actually, it is even worse. According to QM theory, when a person looks in the cage, the superposition spreads to the person, leaving two versions, one looking at a dead cat and one looking at a live cat. But a person is connected to an environment which also splits and keeps splitting until the entire universe is involved.
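To make “the superposition spreads” slightly more concrete, here is a toy numerical sketch in Python. It is my own illustration, not Schrödinger’s mathematics: a two-state alpha particle (decayed/intact) is coupled to a two-state cat (dead/alive) by a made-up “infernal machine” unitary, and the joint wave function ends up entangled rather than collapsed.

import numpy as np

decayed, intact = np.array([1.0, 0.0]), np.array([0.0, 1.0])
dead,    alive  = np.array([1.0, 0.0]), np.array([0.0, 1.0])

alpha = (decayed + intact) / np.sqrt(2)   # alpha in a 50/50 superposition
state = np.kron(alpha, alive)             # joint alpha-cat state; cat starts alive

# The "infernal machine" as a unitary on the joint space:
# if the alpha has decayed, flip the cat; if intact, leave it alone.
flip = np.array([[0.0, 1.0], [1.0, 0.0]])
U = (np.kron(np.outer(decayed, decayed), flip)
     + np.kron(np.outer(intact, intact), np.eye(2)))

state = U @ state
print(np.round(state, 3))   # amplitudes on |decayed,dead>, |decayed,alive>,
                            # |intact,dead>, |intact,alive>

# Result: (|decayed,dead> + |intact,alive>)/sqrt(2), a joint superposition,
# not a collapse. Coupling in an observer just repeats the same pattern.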
What I’ve presented here is an actual alternative to the Copenhagen Interpretation called “the Many-worlds interpretation”. To quote from Wikipedia: “The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual ‘world’ (or ‘universe’).” The many-worlds interpretation arose in 1957 in the Princeton University Ph.D. dissertation of Hugh Everett, working under the direction of the late John Archibald Wheeler, whom I mentioned in the last post. Although I am a tremendous admirer of Wheeler, I am skeptical of the many-worlds interpretation. It seems unnecessarily complicated, especially in light of ideas that have developed since I noticed them in 1972. There is no experimental evidence for the interpretation. Such evidence might involve interference effects between the two versions of the universe as the splitting occurs. Finally, if I exist in a superposition, how come I’m only conscious of the one side? Bringing in “consciousness”, however, leads to all kinds of muddy nonsense about consciousness effects in wave function splitting or collapse. I’m all for consciousness studies, and possibly such studies will be relevant for physics after another revolution in neurology or physics. At present we can understand quantum mechanics without explicitly bringing in consciousness.
In the next post I’ll go into what I noticed in 1971-72 and how this idea was subsequently developed in the greater physics community. The next post will necessarily be somewhat more mathematically specific than the posts so far, possibly including a few gory details. I hope that the math won’t obscure the story. In subsequent posts I’ll revert to talking about physics theory without actually doing any math.
Physics, Etc.
In telling a story about physics and some of its significance for a life of awareness, I’ll start with an idea of the philosopher Immanuel Kant (1724 – 1804). Kant, in my mind, is associated with impenetrable German which translates into impenetrable English. To find some clarity about Kant’s ideas one turns to Wikipedia, where the opening paragraph of the Kant entry explains his main ideas in an uncharacteristically comprehensible way. One of these ideas is that we are born into this world with our minds prepared to understand space, time, and causality. And with this kind of mental conditioning we can make sense of simple phenomena and, indeed, pursue science. This insight predates Darwin’s theory of evolution, which offers a plausible explanation for it, by some sixty-odd years, and was thus a remarkable insight on the part of Kant. Another Kant idea that is relevant to our story is his distinction between what he calls phenomena and noumena. Quoting from Wikipedia, “… our experience of things is always of the phenomenal world as conveyed by our senses: we do not have direct access to things in themselves, the so-called noumenal world.” Of course, this is only one aspect of Kant’s thought, but it is the aspect that seems to me most relevant to what might be meant by physical reality. Kant was a philosopher’s philosopher, totally dedicated to deepening our understanding of what we may comprehend about the world and morality by purely rational thought. He was born in Königsberg, East Prussia, at the time a Prussian province on the Baltic coast east of Denmark and north of Poland-Lithuania, and died there 80 years later. Legend has it that during his entire life he never traveled more than 10 miles from his home. The Wikipedia article refutes this slander: Kant actually traveled on occasion some 90.1 miles from Königsberg.
The massive extent of Kant’s philosophy leaves me somewhat appalled, particularly since I understand little of it and because what I perhaps do understand seems dubious at best and meaningless at worst. What Kant may not have realized is that the extent and nature of the noumenal world is relative to the times in which one lives. Kant was born 3 years before Isaac Newton died, so by the date of his birth the stage was well set for the age of classical physics. During his life classical mechanics was developed largely by two great mathematicians, Joseph-Louis Lagrange (1736 – 1813) and Pierre-Simon Laplace (1749 – 1827). Looking back from Kant’s time to the ancient world one sees an incredible growth of the phenomenal world, with the Copernican revolution, a deepening understanding of planetary motion, and Newton’s laws of mechanics. In the time since Kant lived, the laws of electricity and magnetism, statistical mechanics, quantum mechanics, and most of present-day science were developed. This advance raises a question. Does the growth of the phenomenal world entail a corresponding decrease in the noumenal world, or are phenomena and noumena entirely independent of one another? Of course, I’d like to have it both ways, and can do so by imagining two senses of noumena. To get at the first sense, I will tell a brief story. In the early 1970’s we were visited at Auburn University by the great physicist John Archibald Wheeler, who led a discussion in our faculty meeting room. I was very impressed by Dr. Wheeler. To me he seemed a “tiger”, totally dedicated to physics, his students, and to an awareness of what lay beyond our comprehension. At one point he pointed to the tiles on the floor and said to us physicists something like, “Let each one of you write your favorite physics laws on one of these tiles. And after you’ve all done that, ask the tiles with their equations to get up and fly. They will just lie there; but the universe flies.” Wheeler had doubtless used this example on many prior occasions, but it was new to me and seems to get at the meaning of noumena as a realm independent of anything science can ever discover. On the other hand, as the realm of phenomena that we do understand has grown, we can regard noumena simply as a “blank” in our knowledge, a blank which can be filled in as science, so to speak, peels back the layers of an “onion”, revealing the understanding of a larger world and, at the same time, exposing a new layer of ignorance to attack. This second sense of the word in no way diminishes the ultimate mystery of the universe. In fact, it appears to me that the quest for ultimate understanding in the face of the great mystery is what gives physics (and science) a compulsive, even addictive, fascination for its practitioners. Like compulsive gamblers, experimental physicists work far into the night and theorists endlessly torture thought. Certainly, the idea that we could conceivably uncover ever more specifics of the mystery of ultimate being is what drew me to the area. That, as well as the idea that if one wants to understand “everything”, physics is a good place to start.
In my understanding, the story of physics during my lifetime and the 30 years preceding my birth is the story of a massive, earthshaking revolution. Thomas Kuhn’s The Structure of Scientific Revolutions, mentioned in earlier posts, is a story of many shifts in scientific perception which he calls revolutions. In his terms what I’m talking about here is a “super-duper-revolution”, a massive shift in understanding whose import is still not fully realized in our society at large at the present time. Most of the “revolutions” that Kuhn uses as examples affect only scientists in a particular field. For example, the fall of the phlogiston theory and the rise of oxygen in understanding fire and burning was a major revolution for chemistry, but had little effect on the culture of society at large. Similarly, in ancient times the rise of Ptolemaic astronomy mostly concerned philosophers and intellectuals. The larger society was content with the idea that gods or God controlled what went on in the heavens as well as on earth. The Copernican revolution, on the other hand, was earth shaking (super-duper) for the entire society, mainly because it called into question theories of how God ran the universe and because it became the underpinning of an entirely new idea of what was “real”. Likewise, the scientific revolution of the 16th and 17th centuries was earthshaking to the entire society, which, however, as time wore on into the 18th and 19th centuries, became accustomed to it and assumed that the classical, Newtonian “clockworks” universe was here to stay forever, however uncomfortable it might be to artists and writers, who hoped to live in a different, more meaningful world of their own experience, rejecting scientific “reality” as something which mattered little in a spiritual sense. Who could have believed that in the mid 1890’s, after 300 years (1590 – 1890, say) of continued, mostly harmonious development, the entire underpinning of scientific reality was about to be overturned by what might be called the quantum revolution? Yet that is what happened in the next forty years (1895 – 1935), with continuing advances and consolidation up to the present day. (From now on I’ll use the abbreviation QM for Quantum Mechanics, the centerpiece of this revolution.) Of course, as with any great revolution, all has not been smooth. Many of the greatest scientists of our times, most notably Albert Einstein and Erwin Schrödinger, found the tenets of the new physics totally unacceptable and fought them tooth and nail. In fact, there is at least one remaining QM puzzle, epitomized by “Schrödinger’s Cat”, about which I hope to have my say at some point.
It is my hope that readers of this blog will find excitement in the open possibilities that an understanding of the revolutionary physical “reality” we currently live in suggests. In talking about it I certainly don’t want to try to “reinvent the wheel”, since many able and brilliant writers have told portions of the story. What I can do is give references to various books and URLs that are, with few exceptions (which I’ll note), great reading. I’ll have comments to make about many of these and hope that, with their underpinning, I can tell this story and illuminate its relevance for what I’ve called Western Zen.
The first book to delve into is The Quantum Moment: How Planck, Bohr, Einstein, and Heisenberg Taught Us to Love Uncertainty by Robert P. Crease and Alfred Scharff Goldhaber. Robert Crease is a philosopher specializing in science and Alfred Goldhaber is a physicist. The book, which I’ll abbreviate as TQM, tells the history of Quantum Mechanics from its very beginning in December, 1900, to very near the present day. Copyrighted by W.W. Norton in 2014, it is quite recent; today, as I write, is early November, 2018. The story this book tells goes beyond an exposition of QM itself to give many examples of the effects that this new reality has had so far on our society. It is very entertaining and well written, though on occasion it does get slightly mathematical, in a well-judged way, in making quantum mechanics clearer. A welcome aspect of the book for me was the many references to another book, The Conceptual Development of Quantum Mechanics by Max Jammer. Jammer’s book (1966) is out of print and is definitely not light reading, with its exhaustive references to the original literature and its full deployment of advanced math. Auburn University had Jammer in its library and I studied it extensively while there. I was glad to see the many footnotes to it in TQM, showing that Jammer is still considered authoritative and that there is no more recent book detailing this history. Recently, I felt that I would like to own a copy of Jammer, so found one, falling to pieces, on Amazon for fifty-odd dollars. If you are a hotshot mathematician and fascinated by the history of QM, you will doubtless find Jammer in any university library.
The quantum revolution occurred in two great waves. The first wave, called the “old quantum theory”, started with Planck’s December, 1900, paper on black body radiation and ended in 1925 with Heisenberg’s paper on Quantum Mechanics proper. From 1925 through about 1932, QM was developed by eight or so geniuses, bringing the subject to a point equivalent to where Newton’s Principia brought classical mechanics. Besides the four physicists of the Quantum Moment title, I’ll mention Louis de Broglie, Wolfgang Pauli, P.A.M. Dirac, Max Born, and Erwin Schrödinger. And there were many others.
A point worth mentioning is that The Quantum Moment concentrates on what might be called the quantum weirdness of both the old quantum theory and the new QM. This concentration is appropriate because it is this weirdness that has most affected our cultural awareness, the main subject of the book. However, to the physicists of the period 1895 – 1932, the weirdness, annoying and troubling as it was, was in a way a distraction from the most exciting physics going on at the time; namely, the discovery that atoms really exist and have a substructure which can be understood, an understanding that led to a massive increase in practical applications as well as theoretical knowledge. Without this incredible success in understanding the material world, the “weirdness” might well have doomed QM. As we will mention below, most physicists ignore the weirdness and concentrate on the “physics” that leads to practical advances. Two examples of these “advances” are the atomic bomb and the smart phone in your pocket. In the next few paragraphs I will fill in some of this history of atomic physics with its intimate connection to QM.
The discovery of the atom and its properties began in 1897 as J.J. Thomson made a definitive breakthrough in identifying the first sub-atomic particle, the lightweight, negatively charged electron (see Wikipedia). Until 1905, however, many scientists disbelieved in the “reality” of atoms in spite of their usefulness as a conceptual tool in understanding chemistry. In the “miracle year” 1905 Albert Einstein published four papers, each one totally revolutionary in a different field. The paper of interest here is about Brownian motion, a jiggling of small particles as seen through a microscope. As a child I had a very nice full laboratory Bausch and Lomb microscope, given by my parents when I was about 7 years old. In the 9th grade I happened to put a drop of tincture of Benzoin in water and looked at it through the microscope, seeing hundreds of dancing particles that just didn’t behave like anything alive. I asked my biology teacher about it, and after consulting her husband, a professor at the university, she told me it was Brownian motion, discovered by Robert Brown in 1827. I learned later that the motion is caused because the moving particles are small enough that the molecules striking them from one side are unbalanced by those striking from the other, causing a random motion. I had no idea at the time how crucial for atomic theory this phenomenon was. It turns out that the motion had been characterized by careful observation and that Einstein showed in his paper how molecules striking the small particles could account for the motion. Also, by this time studies of radioactivity had shown emitted alpha and beta particles were clearly sub-atomic, beta particles being identical with the newly discovered electrons and the charged alpha particles turning into electrically neutral helium as they slowed and captured stray electrons.
Einstein’s other 1905 papers were two on special relativity and one on the photoelectric effect. As strange as special relativity seems, with its contraction of moving measuring sticks, slowing of moving clocks, and simultaneity dependent upon the observer, to say nothing of E = mc², this theory ended up fitting comfortably with classical Newtonian physics. Not so with the photoelectric effect.
In December, 1900, Max Planck started the quantum revolution by finding a physical basis for a formula he had guessed earlier relating the radiated energy of a glowing “black body” to its temperature and the frequencies of its radiation. A “black body” is made of an ideal substance that is totally efficient in radiating electro-magnetic waves. Such a body could be simulated experimentally with high accuracy by measuring what came out of a small hole in the side of an enclosed oven. To find the “physics” behind his formula Planck had turned to statistical mechanics, which involves counting numbers of discrete states to find the probability distribution of the states. In order to do the counting Planck had artificially (he thought) broken up the continuous energy of electromagnetic waves into chunks of energy hν, ν being the frequency of the wave, denoted historically by the Greek letter nu. (Remember: the frequency is associated with light’s color, and thus with the color of the glow when a heated body gives off radiation.) Planck’s plan was to let the “artificial” fudge-factor h go to zero in the final formula so that the waves would regain their continuity. Planck found his formula, but when he set h = 0, he got the classical Rayleigh-Jeans formula for the radiation with its “ultra-violet catastrophe”. The latter term refers to the Rayleigh-Jeans formula’s infinite energy radiated as the frequency goes higher. Another formula, guessed by Wien, gave the correct experimental results at high frequencies but was off at lower frequencies, where the Rayleigh-Jeans formula worked just fine. What Planck found, to his dismay, was that if he set h equal to a very small finite value, his formula worked perfectly for both low and high frequencies. This was a triumph but, at the same time, a disaster. Neither Planck nor anyone else believed that these hν bundles could “really” be real. Maybe the packets came off in bundles which quickly merged to form the electromagnetic wave. True, Newton had thought light consisted of a stream of tiny particles, but over the years since his time numerous experiments showed that light really was a wave phenomenon, with all kinds of wave interference effects. Also, in the 19th century physicists, notably Fraunhofer, invented the diffraction grating and with it the ability to measure the actual wavelength of the waves. The Quantum Moment (TQM) has a wonderfully complete, detailed story of Planck’s momentous breakthrough in its chapter “Interlude: Max Planck Introduces the Quantum”. TQM is structured with clear general expositions followed by more detailed “Interludes” which can be skipped without interrupting the story.
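To see numerically what Planck saw, here is a short Python sketch of my own (the 5000 K temperature is just a convenient choice) comparing Planck’s formula with the Rayleigh-Jeans and Wien formulas at low and high frequencies.

import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI values: Planck's constant, speed of light, Boltzmann's constant
T = 5000.0                                # temperature in kelvin

def planck(nu):                           # Planck's spectral radiance formula
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

def rayleigh_jeans(nu):                   # the classical (h = 0) limit
    return 2 * nu**2 * k * T / c**2

def wien(nu):                             # Wien's guess, good at high frequency
    return (2 * h * nu**3 / c**2) * np.exp(-h * nu / (k * T))

for nu in (1e11, 1e13, 1e15):             # low, middle, high frequency
    print(f"nu = {nu:.0e} Hz:  Planck {planck(nu):.3e}   "
          f"Rayleigh-Jeans {rayleigh_jeans(nu):.3e}   Wien {wien(nu):.3e}")

# Rayleigh-Jeans agrees with Planck at low frequency but blows up at high
# frequency (the ultra-violet catastrophe); Wien agrees at high frequency
# but is off at low frequency. Planck's formula bridges both regimes.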
Einstein’s 1905 photoelectric effect paper assumed that the hν quanta were real and light actually acted like little bullets, slamming into a metal surface, penetrating, colliding with an atomic electron and bouncing it out of the metal where it could be detected. It takes a certain energy to bounce an electron out of its atom and then past the surface of the metal. What was experimentally found (after some tribulations) was that the energy of the emerging electrons depended only on the frequency of the light hitting the surface. If the light frequency was too low, no matter how intense the light, nothing much happened. At higher frequencies, increasing the intensity of the light resulted in more electrons coming out but did not increase their energy. As the light frequency increased, the emitted electrons were more energetic. It was primarily for this paper that Einstein received his Nobel Prize in 1921.
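Einstein’s relation can be written KE = hν − W, where W is the “work function”, the energy cost of getting an electron out of the metal. Here is a minimal sketch, with an assumed work function chosen merely for illustration (roughly that of sodium):

h_eV = 4.136e-15           # Planck's constant in eV*s
W = 2.3                    # assumed work function in eV (illustrative value)

def max_electron_energy_eV(nu_hz):
    ke = h_eV * nu_hz - W
    return max(ke, 0.0)    # below threshold nothing comes out,
                           # no matter how intense the light

for nu in (4e14, 6e14, 8e14, 1e15):
    print(f"nu = {nu:.0e} Hz -> max electron KE = {max_electron_energy_eV(nu):.2f} eV")

# Intensity changes only how MANY electrons emerge; frequency alone sets
# their maximum energy, exactly as the experiments found.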
A huge breakthrough in atomic theory was Ernest Rutherford’s discovery of the atomic nucleus in the early years of the 20th century. Rather than a diffuse cloud of electrically positive matter with the negatively charged electrons distributed in it like raisins (the “plum pudding” model of the atom), Rutherford found by scattering alpha particles off gold foil that the positive charge of the atom was in a tiny nucleus with the electrons circling at a great distance (the “fly in the cathedral” model). There was a little problem, however. The “plum pudding” model might possibly be stable under Newtonian classical physics, while the “fly in the cathedral” model was utterly unstable. (Note: Rutherford’s experiment, though designed by him, was actually carried out between 1908 and 1913 by Hans Geiger and Ernest Marsden at Rutherford’s Manchester lab.) Ignoring the impossibility of the Rutherford atom, physics plowed ahead. In 1913 the young Dane Niels Bohr made a huge breakthrough by assuming quantum packets were real and could be applied to understanding the hydrogen atom, the simplest of all atoms with its single electron circling its nucleus. Bohr’s model, with its discrete electron orbits and energy levels, explained the spectral lines of glowing hydrogen which had earlier been discovered and measured with a Fraunhofer diffraction grating. At Rutherford’s lab it was quickly realized that energy levels were a feature of all atoms, and the young genius physicist Henry Moseley, using a self-built X-ray tube to excite different atoms, refined the idea of the atomic number, removing several anomalies in the periodic table of the time while predicting 4 new chemical elements in the process. At this point World War I intervened and Moseley volunteered for the Royal Engineers. One among the innumerable tragedies of the Great War was the death of Moseley on August 10, 1915, aged 27, at Gallipoli, killed by a sniper.
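The match between Bohr’s model and the measured hydrogen lines is easy to check. Here is a short sketch of mine using the standard Bohr level formula E_n = −13.6 eV / n²; the wavelengths printed for transitions down to n = 2 are the familiar visible (Balmer) lines.

E1 = -13.6                          # hydrogen ground-state energy, eV
h_eV, c = 4.136e-15, 2.998e8        # Planck's constant (eV*s), speed of light (m/s)

def E(n):
    return E1 / n**2                # Bohr energy of the n-th level

for n in (3, 4, 5, 6):              # transitions down to the n = 2 level
    photon_eV = E(n) - E(2)                       # energy of the emitted photon
    wavelength_nm = h_eV * c / photon_eV * 1e9    # lambda = h*c / E
    print(f"n = {n} -> 2 : {wavelength_nm:.1f} nm")

# Prints roughly 656, 486, 434, and 410 nm -- the measured Balmer lines.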
Brief Interlude: It is enlightening to understand the milieu in which the quantum revolution and the Great War occurred. A good read is The Fall of the Dynasties – The Collapse of the Old Order: 1905 – 1922 by Edmond Taylor. Originally published in 1963, the book was reissued in 2015. The book begins with the story of the immediate cause of the war, an assassination in Sarajevo, Bosnia, part of the dual-monarchy empire of Austria-Hungary; then fills in the history of the various dynasties, countries and empires involved. One imagines what it would be like to live in those times and becomes appalled by the nationalistic passions of the day. While the book explicates the seemingly mainstream experience of living in the late 19th and early 20th century, and the incredible political changes entailed by the fall of the monarchies and the Great War, it barely mentions the aspects of the times which we think of, these days, as equally revolutionary. These were modern art, with its demonstration that aesthetic depth lay in realms beyond pure representation; the modern novel and poetry; the philosophy of Wittgenstein, which I’ve discussed above; and, perhaps most revolutionary of all, the fall of classical physics and the rise of the new “reality” of modern physics which we are talking about in this post. (For all his deep command of the relevant historical detail, the author does get one thing wrong when he briefly mentions science. He chooses Einstein’s relativity of 1905 but calls it “General Relativity”, putting in an adjective which makes it sound possibly more exciting than plain “relativity”. The correct phrase is “Special Relativity”, which indeed was quite exciting enough. General Relativity didn’t happen until 1915.)
Unlike the Second World War, the First was not a total war, and research in fundamental physics went on. The mathematician turned physicist Arnold Sommerfeld in Munich generalized Bohr’s quantum rules by imagining the discrete electron orbits as elliptical rather than circular and taking their tilt into account, giving rise to new labels (called quantum numbers) for these orbits. The light spectra given off by atoms verified these new numbers, with a few discrepancies which were later removed by QM. During this time and after the war ended, physicists became concerned about the contradiction between the wave and particle theories of light. This subject is well covered in TQM (see the chapter “Sharks and Tigers: Schizophrenia”). It is easy to see the problem. If one has surfed, or even just looked at the ocean, one feels or sees that a wave carries energy along a wide front, this energy being released as the wave breaks. This kind of energy distribution is characteristic of all waves, not just ocean waves. On the other hand, a bullet or billiard ball carries its energy and momentum in a compact volume. Waves can interfere with each other, reinforcing or canceling out their amplitudes. So, what is one to make of light, which makes interference patterns when shined through a single or double slit but acts like a particle in the photoelectric effect or, even more clearly, like a billiard ball collision when a light quantum, called a photon, collides with an electron, an effect discovered by Arthur Compton in 1923? To muddy the waters still further, in 1923 the French physicist Louis de Broglie reasoned that if light can act like either a particle or a wave depending on circumstances, then by analogy an electron, regarded hitherto as strictly a particle, could perhaps under the right conditions act like a wave. Although there was no direct evidence for electron waves at the time, there was suggestive evidence. For example, with the Bohr model of the hydrogen atom, if one assumed the lowest, “ground state” orbit was a single electron wavelength, one could deduce the entire Bohr theory in a new, simple way. By 1924 it was clear to physicists that the “old” quantum mechanics just wouldn’t do. This theory kept classical mechanics and classical wave theory and restricted their generality by imposing “quantum” rules. With both light and electrons being both wave and particle, physics contained an apparent logical contradiction. Furthermore, though the “old” theory had successes with its concept of energy levels in atoms and molecules, it couldn’t theoretically deal at all with such seemingly simple entities as the hydrogen molecule or the helium atom, which experimentally had well defined energy levels. The theory was a total mess. It was in 1925 that the beginnings of a completely new, fundamental theory made their appearance, leading shortly to much more weirdness than had already appeared in the “old quantum” theory. In the next post I’ll delve into some of the story of the new QM.
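De Broglie’s idea is easy to check numerically: give the electron the wavelength λ = h/p, and the n-th Bohr orbit turns out to hold exactly n wavelengths. A small sketch of mine, using standard values for the constants:

import math

h  = 6.626e-34      # Planck's constant, J*s
m  = 9.109e-31      # electron mass, kg
a0 = 5.292e-11      # Bohr radius, m
v1 = 2.188e6        # electron speed in the ground-state Bohr orbit, m/s

for n in (1, 2, 3):
    r = n**2 * a0                    # radius of the n-th Bohr orbit
    v = v1 / n                       # speed in the n-th orbit
    lam = h / (m * v)                # de Broglie wavelength, lambda = h/p
    print(f"n = {n}:  n*lambda = {n*lam:.3e} m   2*pi*r = {2*math.pi*r:.3e} m")

# The two columns agree: an orbit that holds a whole number of electron
# wavelengths is exactly a Bohr orbit -- the "suggestive evidence" above.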
Reality is what we all know about as long as we don’t think. It’s not meant to be thought about but reacted to: threats, awareness of danger, bred into our bones by countless years of evolution. But now, after those countless years, we have a brain and a different kind of awareness that can wonder about such things. Is such wonder worthless? Who knows. Worthless or not, I’m stuck with it because I enjoy ruminating and trying to understand what we take for granted, finding, as I think harder, nothing but mystery. In this post I will begin to talk about “reality” and try to clarify the idea somewhat, bringing in Zen, which may or may not be relevant.
In thinking about “reality” I will take it as a primitive, attempting no definition. One may try to get at reality by considering “fiction”, perhaps its polar opposite. In this consideration one notes that Aristotelian logic doesn’t apply. There is a middle one can’t exclude because, in this case, the middle is larger and more important than the ends of the spectrum.
One can begin to work into this middle by considering the use of the word “fiction” in Yuval Harari’s Sapiens: A Brief History of Humankind, where “fiction” is applied to societal conventions and laws. Sapiens is a fascinating book, but Harari’s use of the word “fiction” for “convention” rubbed me the wrong way. Although laws and conventions are, strictly speaking, fictions, they have one property popularly attributed to “reality”. A common saying is: “One doesn’t have to believe in reality. It will up and bite you whether you believe in it or not.” The same applies to laws and conventions. If one is about to be executed for “treason”, it doesn’t matter that the law is really a “fiction” compared, perhaps, with physical reality. In fact, most “realities”, whether physical or societal, possess a large social component. This area of social agreement comes up when one judges whether another human is sane or crazy. The sine qua non of insanity is its defiance of reality as it is conceived by us “sane ones.” Unfortunately, it is all too easy to forget that conventions are a product of society and to take them as absolutes. Teenagers are notorious for wanting to be “in” with their crowd even when the fashions of the crowd are highly dubious. But many so-called grown-ups are equally taken in by the conventions of society. Most of the time it is easy and harmless to go along with the conventions, but one should always realize that they are, in fact, made up and vary from society to society. Presumably that is what Harari was trying to emphasize.
Then there are questions of the depth of realities. In many cultures there is a claim for “levels of reality” beyond everyday physical realities like streets, tile floors, buildings, weather, and the world around us. Hindu mystics consider the “real” world Maya, an illusion. Modern physics grants the reality of the everyday world, but has found a world of possibly deeper reality behind it. There are atoms, molecules, elementary particles, all governed by the “reality” of quantum mechanics which lies behind what one might be tempted to call the “fiction” of classical mechanics. No physicist “really” considers classical mechanics a fiction, though perhaps many would claim there is a wider and possibly deeper reality behind it. Most physicists would leave such questions to philosophers and would consider serious thought about them a waste of time. Physics first imagined the reality of molecules in the nineteenth century, explaining concepts and measurements of heat-related phenomena. For example, temperature, as measured with a thermometer, is related to the mean kinetic energy of molecular motion through Boltzmann’s constant. In the early 20th century there were very reputable scientists skeptical of the existence of atoms and molecules. Most of them were convinced of the atom’s reality by Einstein’s theory of Brownian motion (1905). As the 20th century wore on, the entire basis of chemistry was established in great detail by quantum theories of electron states in atoms and molecules. In the twenties and thirties cosmology came into being. Besides explaining the genesis of the atomic elements, cosmology, using astronomical observations and theory, finds a universe consisting of tens of billions of galaxies, each consisting on average of tens of billions of stars, all of which originated in a “big bang” some 13.8 billion years ago. In a later post I’ll consider the current situation physics finds itself in, with dark matter, dark energy, string theory, and ideas of a multi-verse. If one considers these as realities, one should not hold such a belief too firmly. History teaches us that physics is subject to revolutions which alter the very “facts” of physical reality. Besides the lurking revolutions of the future, one notes that the “realities” of physics and chemistry lie in their theories, which have proved essential for the “reality” of our modern technologies. One might claim, however, that these are theories of reality, rather than a more immediate impingement of reality on our lives. I hope to say more about “physical reality” in the next post.
Leaving the physical world, one asks, “What about myth, an admitted fiction?” If a myth has a deep meaning and lesson for our lives, doesn’t that entail a certain kind of reality of more importance than a trivial sort of physical reality? Consider “myth” vs. “history”. Reality for history depends on “primary sources”, written records. The “written” record might be that of an oral interview where recent history is concerned; but the idea is that there is a concrete record of some kind that relates directly to the happenings that history is reporting. Consider the stories about Pythagoras I wrote about in the last post. These stories were based on “secondary sources”, accounts written hundreds of years after Pythagoras’s death, relying on hearsay or vanished primary sources with no way of telling which was which. They form the basis for the shallow kind of myth that gives “myth” its common pejorative connotation. We dismiss the myths about Pythagoras’s golden thigh, his flying from place to place, his appearing in two places simultaneously, not simply because these claims conflict with our present scientific world view, but because they have no relevance to the facts about Pythagoras which matter to us in considering his contributions to the history of mathematics. The myths about Pythagoras can be considered “trivial” myths which discredit the very idea of myth. But what about deeper myths? Most religions tell stories about their founders and contributors which have a high mythic content. I ask in this context, “Does distinguishing between myth and historical reality in matters of religious history really matter, or matter at all?” Buddhists are notorious for being unfazed when various historical stories are proven fictional by historians. I would baldly state their attitude as: “The religious importance of the story is what matters, not the factual truth of every so-called fact in the canon.” Getting closer to home, I might ask, “Suppose the facts about Jesus’s physical existence were convincingly proved to be completely fictional. Would it matter to Christianity?” I would guess that it WOULD be devastating to believers, but that, in fact, it SHOULDN’T be. What matters in Christianity is the insight that feelings of love are deeply embedded in the universe and that Jesus, whether a fictional person or not, is responsible for bringing this “fact” to life, for showing that in the deep mystery one might call “God”, there is a forgiveness of the animal brutishness of humans. If through an active nurture of love in ourselves we experience this deep truth and express it in the way we act towards others, we redeem ourselves and, potentially, all of humanity. The stories, “myths” if you will, help us towards this experiential realization, a realization that is utterly unrelated to “belief”, a realization which could be called “Christian Satori”. The uniqueness of Christianity, as far as I can tell, is this emphasis on “love”. Unfortunately, the methodology of Christianity, with its historical emphasis on grasping ever harder at “belief”, is deeply flawed, leading backwards to the brutishness rather than forward to love. Certain Christian thinkers, Thomas Merton for example, seem to have realized that Zen practice can be helpful in reaching a deeper understanding of their religion. One aspect of a Western Zen would be its applicability to a Western religious practice of a more deeply realized Christianity.
Actually, whether or not “love” is embedded in the universe, we, as humans, are susceptible to it, and can choose to base our lives on realizing its full depths in our beings.
Getting back to “reality”, I’ll consider possible insights from traditional Eastern Zen. So far in talking about Zen I’ve emphasized the Soto school of Japanese Zen and have tried to show how various Western ideas are susceptible to a deeper understanding by means of what might be called Western Zen. Actually, I claim that the insights of Zen lie below any cultural trappings, and that for a complete understanding, particularly as such might relate to “reality”, one should consider Zen in all its manifestations. The Rinzai Japanese school is the one we typically find written about in the US. It is the school which perhaps (I’m pretty ignorant about such matters) has deeper roots in China, where Zen originated and the discipline of concentrating on Koans came into being. An excellent introduction to this school is the book Zen Comments on the Mumonkan, by Zenkei Shibayama, Harper and Row, 1974. The Chinese master Wu-men, 1183 – 1260, collected together 48 existing Koans and published them in the book Wu-men kuan. In Japan Wu-men is called “Mumon” and his book is called the Mumonkan.
During the late 1960’s and early 1970’s I attended an annual conference of what was then called the Society for Religion in Higher Education. Barbara, my wife at the time, as a former Fulbright scholar, was an automatic member of this Society. As her husband I could also attend the conference. The meetings of the Society were always very interesting, with deeply insightful discussions going on, day and night. These discussions never much concerned belief in anything, but concentrated on questions of meaning and values. In fact, the name of the Society was later changed to the Society for Values in Higher Education. During one of the last meetings I attended, possibly in 1972, there was much discussion about a new Zen book that Kenneth Morgan, a member of the Society, was instrumental in bringing into being. Professor Morgan had arranged for the Japanese Master Zenkei Shibayama to give Zen presentations of the Mumonkan at Colgate University. The entire Mumonkan had been translated into English by Sumiko Kudo, a long-time acolyte at Master Shibayama’s monastery, and was soon to be published. Having committed to understanding Zen, I was very interested in all of this and looked forward to seeing the book. After moving to Oregon in 1974 I kept my eyes open for it and immediately bought it when it first appeared at the University of Oregon bookstore. Later, I developed a daily routine of doing some Yoga after breakfast and then reading one of the Koans.
The insights that the Koans are to help one realize are totally beyond language. The Koans may be considered a kind of verbal Jiujitsu which, when followed rationally, will throw one momentarily out of language thinking into an intuitive realization of some sort. I had encountered various Koans before working through the Mumonkan and had found little insight, but, as a student of physics and mathematics, thought of them as fascinating problems to be enjoyed and solved. I realized that in working on a difficult problem in math or physics, the crucial breakthrough often comes via intuition. One has a sudden insight and, even before trying to apply it to the problem, one realizes that one has found a solution. In a technical area one’s insight can be attached to mathematical or scientific language, and the solution is a concrete expression which solves a concrete problem. I realized that with Zen one might have a similar kind of intuitive insight even if it could not be expressed in ordinary language but, perhaps, could be stated as an answering Koan to the one posed. Another metaphor, besides the Jiujitsu one, is the focusing of an optical instrument, such as a microscope, telescope or binoculars. Especially when trying to focus a microscope, one can be too enthusiastic in turning the focusing wheel and turn right past the focus, seeing that for an instant one had it, but that it was now gone. With a microscope one can recover the focus. With a Zen Koan the momentary insight is usually lost and efforts at recovery hopeless.
A somewhat better example of this focusing metaphor occurred when I was a professor at Auburn University. One quarter I taught a lab for an undergraduate course in electricity and magnetism. This was slightly intimidating as I was a theoretical physicist with little background in dealing with experimental apparatus. One afternoon the experiment consisted of working with an ac (alternating current) bridge similar to a Wheatstone bridge for direct current, but with a complication arising from the ac. Electrical bridges were developed in the nineteenth century to measure certain electrical quantities which are these days more easily measured by other means. Nowadays the bridges mainly have pedagogical value. With a Wheatstone bridge one achieves a balance in the bridge by adjusting a variable resistor until the current across the bridge, measured by a delicate ammeter, vanishes. One can then deduce the value of an unknown resistor in the circuit. With ac there is not only resistance but also a quantity called reactance, which arises because a magnetic coil or capacitor will pass an ac current. To adjust an ac bridge, one twiddles not only a variable resistance but a variable magnetic coil (inductor) which changes the reactance. In the lab there were about 5 or 6 bridges to be set up, each tended by a pair of students. The students put their bridges together with no difficulties; but then, after about 10 minutes, it became clear that none of the student teams had been able to balance their bridge. The idea was to adjust one of the two adjustable pieces until there was a dip in the current through the ammeter, then adjust the other until the dip increased, continuing in this back and forth manner until the current vanished or became very small. It turned out that no matter what the students did, the current through the ammeter never dipped at all. Of course, the students turned to their instructor for help in solving their problem, and I was on the spot. The experience the students had is quite similar to dealing with a Koan. No matter what one does, how much one concentrates, or how long one works at it, the Koan never comes clear. With the ac bridge the students could actually have balanced it by a systematic process, but this would have taken a while. I should have suggested this, but didn’t think of it. Instead I had a pretty good idea of some of the quantities involved in the circuit, whipped out my slide rule (no calculators in those days), and suggested a setting for the inductor. This setting was close enough that there was a current dip when the resistor was adjusted, and all was well. The reason that balancing an ac bridge is so difficult is that the two quantities concerned, the resistance R and the reactance X, are, in a sense, at right angles to each other, even though they are both quantities measured by an electrical resistance unit, ohms, which is not spatial at all. Nevertheless, even though non-spatial, they satisfy a Pythagorean kind of equation
R² + X² = Z²
where Z is called the impedance of an ac circuit. The quantities R and X can be plotted at right angles to each other and a triangle made with Z as the hypotenuse. If one adjusts either R or X separately, one is reducing the contribution of only one leg of the triangle to the impedance, which does not greatly affect the impedance, at least not enough to noticeably change the current through the ammeter of an ac bridge. Incidentally, what I’ve just explained is a trivial example of a tremendously important idea in theoretical physics and mathematics called isomorphism, in which quantities in wildly different contexts share the same mathematical structure.
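A small numerical illustration (mine, with made-up values) of why twiddling R or X alone produces no visible dip when the other leg of the triangle is large:

import math

def Z(R, X):
    return math.hypot(R, X)     # Z = sqrt(R^2 + X^2)

X = 1000.0                      # reactance stuck far from balance, in ohms
for R in (0.0, 50.0, 100.0, 200.0):
    print(f"R = {R:6.1f}  X = {X:.0f}  Z = {Z(R, X):8.1f}")

# Z changes by only about 2% while R goes from 0 to 200 ohms: nothing the
# ammeter would show. Both legs must be brought down together.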
I hope that the analogies of verbal Jiujitsu and getting things into focus make somewhat clearer the problem of dealing with Koans. One might well ask if such dealing is worth the trouble and, on a personal note, what kind of luck I’ve had with them, especially as they might throw some light on the nature of “reality”. First, I must say that I have found that engaging the Koans of the Mumonkan is very worthwhile even though most of them remain completely mysterious to me. Moreover, even though I have had epiphanies when reading some of the Koans or the comments about them, there is no way for me to tell whether or not I have really understood what, if anything, they are driving at. Nevertheless, after spending some years with them, off and on, in a very desultory, undisciplined manner, I feel that they have helped indirectly to make my thinking clearer. My approach when I first spent a year going through Zen Comments was to do a few minutes of Yoga exercises, with Yoga breathing and meditation, attempting to clear my mind. Then I would carefully read the Koan and the comments, not trying to understand at all, while continuing meditation. Typically, at that point, I would have a peaceful feeling from the meditation but no epiphany or understanding. I would then put the book aside and go about the business of the day until I repeated this exercise with the next Koan the next day. Sometimes I would skip a day and sometimes I would go back and look at an earlier Koan. This reading was very pleasant as an exercise. I tried to develop an attitude of indifference towards whether I understood anything or not and avoided getting wrought up in trying to break through. My feeling about this kind of exercise is that it does lead to some kind of spiritual growth whether or not the Koans make any sense. As for “enlightenment”, I think it is a loaded word and best ignored. A Western substitute might be “clarity of thought”. Whether or not meditation, studying Koans or just thinking has anything to do with it, I have, on occasion, been unexpectedly thrown into a state of unusual clarity, in which puzzles which once seemed baffling came clear. As for the Zen Comments, I might make a few suggestions, especially as they relate to “reality”. Consider, for example, Koan 19, “Ordinary Mind is Tao”, towards which the metaphor above, of finding a focus, might be relevant. If you haven’t heard about the concept of Tao, pick up and read the Tao Te Ching, Lao Tzu’s fundamental Chinese classic. Tao may be loosely translated as “Deep Truth Path”. Koan 19, as translated by Ms. Kudo, reads as follows:
“Joshu once asked Nansen, ‘What is Tao?’ Nansen answered, ‘Ordinary mind is Tao.’ ‘Then should we direct ourselves towards it or not?’ asked Joshu. ‘If you try to direct yourself toward it, you go away from it,’ answered Nansen. Joshu continued, ‘If we do not try, how can we know that it is Tao?’ Nansen replied, ‘Tao does not belong to knowing or not knowing. Knowing is illusion; not knowing is blankness. If you really attain to Tao of no-doubt, it is like the great void, so vast and boundless. How then can there be right or wrong in the Tao?’ At these words Joshu was suddenly enlightened.”
Mumon commented (and his comment is very relevant):
“Questioned by Joshu, Nansen immediately shows that the tile is disintegrating, the ice is dissolving, and no communication whatsoever is possible. Even though Joshu may be enlightened, he can truly get it only after studying for thirty more years.”
I picked this particular Koan because it is one of the few that I feel I actually understand (although I may need another thirty years to really get it). Of course, I can in no way prove this. You must NOT be naïve and think that I understand anything. Furthermore, there is no real explanation of the Koan I can give. I can make a few remarks which should be considered as random twiddles of dials that may chance to zero the impedance in your mind.
First, the whole thing is a logical mess. On the one hand there is nothing special or esoteric about “deep truth path”. It is just the ordinary world (reality) that we sense. On the other hand, when we get “it”, the ordinary world dissolves and we feel an overwhelming sense of the infinite ignorance and non-being which surrounds the small island of knowledge we have attained in our human history so far. In fact, both the ordinary and the transcendent are simultaneously present to our awareness and one cannot be considered more significant than the other.
Note that this Koan is superstition free. There are no claims of esoteric knowledge. There are no contradictions of any scientific or historical claims to knowledge. There are no contradictions of anything we might consider superstitions. There is no contradiction of the doctrines of any religion. One might say that the Koan is empty of content. Of verbal content that is.
There is an implicit criticism of Aristotelian logic with its excluded middle. As I’ve already pointed out more than once in this blog, logic has a limited applicability. Part of the “game” of science is to accept only statements to which logic DOES apply. I may later go into stories from the history of physics about the difficulties of playing this exciting game of science, keeping logic intact, when experimental evidence seems to deny it. However, the “game” of physics or any other science is not all of life; and, in fact, Aristotelian logic has been, as I’ve called it in earlier blogs, “the curse of Western Philosophy” and an impediment to a deeper understanding of realities outside of science.
There is more to say about the Mumonkan, but I will leave such to a later blog post. As to differences between Soto and Rinzai Zen I wonder how serious these really are. Koan 19 seems to embody the Rinzai idea of instantaneous enlightenment until one sees Mumon’s comment about another 30 years being required for Joshu to really get it. The Soto doctrine is of gradual enlightenment and a questioning of the very “reality” of the enlightenment concept. A metaphor for either view is the experience of trying to get above a foggy day in a place like Eugene, Oregon, where, when the winter rain finally stops, the clear weather is obscured by a pea-soup fog. One climbs to a height such as Mt. Pisgah or Spencer’s Butte and often finds that though the fog is thinner with hints of blue sky, it is still present. But then there is perhaps a partial break and one sees through a deep hole towards a clear area beyond the fog. This vision may be likened to an epiphany or even to the “Satori” of Rinzai Zen. If we imagine we could wait on our summit for years until, after many breaks, the fog completely clears away, that would be full enlightenment.
Leaving any further consideration of Koan 19, I will end this post on a personal note. If indeed I’ve had a deep enough epiphany to consider it as Satori, this breakthrough has helped reveal that I have a healthy ego, lots of “ego strength”, a concept that Dr. Carr, head of the physics department at Auburn, came up with. Experimental physicists, such as Dr. Carr, like to measure things. “Having a lot of ego strength” was his amusing term for people who are overly wrapped up in themselves. My possible Zen insights have not diminished my ego at all. Rather, they have helped to reveal it. I’ve learned not to be too exuberant about insights which, as a saying goes, “leave one feeling just as before about the ordinary world except for being two inches off the ground.” If I get too exuberant, I wake up the next day feeling “worthless”, in the grip of depression. This is a reaction to an unconscious childhood ego build-up in the face of very poor self-esteem. Part of spiritual growth is perhaps not losing one’s ego, but lessening the grip it has on one. I hope that further practice helps me in this regard. Perhaps some psychological considerations can be the subject of a later post. I will now, however, work on the foundations for such a post by attempting to clarify the “reality” status of scientific theories.
Funny Numbers
During the century between about 600 and 500 BCE, the first school of Greek philosophy flourished in Ionia. This, arguably, is the first historical record of philosophy as a reasoned attempt to explain things without recourse to the gods or out-and-out magic. But where on earth was Ionia? Wherever it was, it’s now long gone. Wikipedia, of course, supplies an answer. If one sails east from the body of Greece for around 150 miles, passing many islands in the Aegean Sea, one reaches the mainland of what is now Turkey. Along this coast, at about the same latitude as the north coast of the Peloponnesus (37.7 degrees N), one finds the island of Samos, a mile or so from the mainland; and just to the north is a long peninsula poking west which in ancient times held Ionian Greek city-states. This stretch of coast, with its islands, was Ionia. Wikipedia tells us that these city-states, along with many others along the coast nearby, formed the Ionian League, which in those days was an influential part of ancient Greece, allying with Athens and contributing heavily, later on, to the defeat of the Persians when they tried to conquer Greece. One can look at Google Earth and zoom in on these islands, and in particular on Samos, seeing what is now likely a tourist destination with beaches and an interesting, rocky, green interior. On the coast to the east and somewhat south of Samos was the large city of Miletus, home to Thales, Anaximander, and the rest of the Milesian philosophers (Heraclitus lived up the coast at Ephesus). Around 570 BCE, on the island of Samos, Pythagoras was born. Nothing Pythagoras possibly might have written has survived, but his life and influence became the stuff of conflicting myths interspersed with more plausible history. His father was supposedly a merchant and sailed around the Mediterranean. Legend has it that Pythagoras traveled to Egypt, was captured in a war with Babylonia, and while imprisoned there picked up much of the mathematical lore of Babylon, especially in its more mystical aspects. Later freed, he came home to Samos, but after a few years had some kind of falling out with its rulers and left, sailing past Greece to Croton on the foot of Italy, which in those days was part of a greater Greek hegemony. There he founded a cult whose secret mystic knowledge included some genuine mathematics, such as how musical harmony depended on the length of a plucked string and the proof of the Pythagorean theorem, a result apparently known to the Babylonians a thousand years previously, but possibly never before proved. Pythagoras was said to have magic powers, to be able to be in two places simultaneously, and to have a thigh of pure gold. This latter “fact” is mentioned in passing by Aristotle, who lived some 150 years later, and is celebrated in lines from the Yeats poem, Among School Children:
Plato thought nature but a spume that plays
Upon a ghostly paradigm of things;
Solider Aristotle played the taws
Upon the bottom of a king of kings;
World-famous golden-thighed Pythagoras
Fingered upon a fiddle-stick or strings
What a star sang and careless Muses heard:
Yeats finishes the stanza with one more line summing up the significance of these great thinkers: “Old clothes upon old sticks to scare a bird.” Although one may doubt the golden thigh, quite possibly Pythagoras did have a birthmark on his leg.
I became interested in Ionia, and then curious about its history and significance, because I recently wondered what kind of notation the Greeks had for numbers. Was their notation like Roman numerals or something else? I found an internet link which explained that the “Ionian” system displaced an earlier “Attic” notation throughout Greece, and then went on to explain the Ionian system. In the old days, when a classical education was part of every educated person’s knowledge, this would be completely clear as an explanation. Although I am old enough to have had inflicted upon me three years of Latin in high school, since then I had been exposed to no systematic knowledge of the classical world, so was entirely ignorant of Ionia, or at least of its location. I had heard of the Ionian philosophers and had dismissed their philosophy as being of no importance, as indeed is the case, EXCEPT for their invention of the whole idea of philosophy itself. And, of course, without the rationalism of philosophy, it is indeed arguable that there would never have been the scientific revolution of the seventeenth century in the West. (Perhaps that revolution was premature without similar advances in human governance and will yet lead to disaster beyond imagining in our remaining lifetimes. Yet we are now stuck with it and might as well celebrate.)
The Ionian numbering system uses Greek letters for the numerals 1 to 9, further letters for 10, 20, 30 through 90, and more letters yet for 100, 200, 300, and so on. The total number of symbols is 27, quite a brainful. The important point about this notation, along with the Egyptian, Attic, Roman and other ancient Western systems, is that the position of a symbol within a string of numerals carries no absolute meaning; at most, as with Roman numerals, relative position matters. Relative positioning helps by reducing the number of symbols a notation needs, but it is a dead end compared to giving position an absolute meaning, which we will go into below. The lack of meaning for position in a string of digits is similar to written words, where the pattern of letters within a word has significance but not the place of a letter within the word, except for things like capitalizing the first letter or putting a punctuation mark after the last. As an example of the Ionian system, consider the number 304, which would be τδ, τ being the symbol for 300 and δ being 4. There is no need for zero, and, in fact, these could be written in reverse order, δτ, and carry the same meaning. In thinking about this fact and about the significance of rational numbers in the Greek system, I came to understand some of the long history, with its sparks of genius, that led in India to OUR numbers. In comparison with the old systems ours is incredibly powerful, but with some complexity to it. I can see how, with unenlightened methods of teaching, trying to learn it by rote can lead to early math revulsion and anxiety rather than to an appreciation of its remarkable beauty, economy and power.
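To make the system concrete, here is a minimal sketch in Python of the standard 27-symbol assignment (units, tens, hundreds, including the archaic letters stigma ϛ = 6, koppa ϙ = 90 and sampi ϡ = 900); the mapping is the conventional one found in references on Greek numerals, not anything taken from the link mentioned above.

```python
# The 27 symbols of the Ionian (alphabetic Greek) system, including the
# archaic letters stigma (6), koppa (90) and sampi (900).
UNITS    = {1: 'α', 2: 'β', 3: 'γ', 4: 'δ', 5: 'ε', 6: 'ϛ', 7: 'ζ', 8: 'η', 9: 'θ'}
TENS     = {1: 'ι', 2: 'κ', 3: 'λ', 4: 'μ', 5: 'ν', 6: 'ξ', 7: 'ο', 8: 'π', 9: 'ϙ'}
HUNDREDS = {1: 'ρ', 2: 'σ', 3: 'τ', 4: 'υ', 5: 'φ', 6: 'χ', 7: 'ψ', 8: 'ω', 9: 'ϡ'}

def to_ionian(n: int) -> str:
    """Write 1 <= n <= 999 in Ionian notation; note that no zero is needed."""
    if not 1 <= n <= 999:
        raise ValueError("the basic 27-symbol system covers 1 to 999")
    h, rest = divmod(n, 100)
    t, u = divmod(rest, 10)
    return ((HUNDREDS[h] if h else '') +
            (TENS[t] if t else '') +      # an empty tens place simply vanishes
            (UNITS[u] if u else ''))

print(to_ionian(304))   # 'τδ', exactly the example in the text
```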
In the ancient Western systems there is no decimal point and nothing corresponding to the way we write decimal fractions to the right of the decimal point. What we call rational numbers (fractions) were, to Pythagoras and the Greeks, all there was. They were "numbers", period, and "obviously" any quantity whatever could be expressed using them. Pythagoras died around 495 BCE, but his cult lived on. Sometime during the next hundred years, one of his followers disproved the "obvious", showing that no "number" could express the square root of 2. This quantity, √2, by the Pythagorean theorem, is the hypotenuse of a right triangle whose legs are of length 1, so it certainly has a definite length, and is thus a quantity, but to the Greeks it was not a "number". Apparently this shocking fact about root 2 was kept secret by the Pythagoreans, but was supposedly betrayed by Hippasus, one of them. Or perhaps it was Hippasus who discovered the irrationality. Myth has it that he was drowned (either by accident or deliberately) for his impiety towards the gods. The proof of the irrationality of root 2 is quite simple nowadays, using easy algebra and Aristotelian logic. If a and b are integers, assume a/b = √2. We may further assume that a and b have no common factor, because we may remove any common factors first. Squaring and rearranging, we get a²/2 = b². Since b is an integer, a²/2 must also be an integer, so a² is even; and since the square of an odd number is odd, "a" itself must be divisible by 2. Substituting 2c for a in the last equation and then rearranging, we find that b is also divisible by 2. This contradicts our assumption that a and b shared no common factor. Now we apply Aristotelian logic, whose key property is the "law of the excluded middle": if a proposition is false, its contrary is necessarily true; there is no "weaseling" out. In this case, where √2 either is a fraction or isn't, Aristotelian logic applies, which proves that a/b can't be √2. The kind of proof we have used here is called "proof by contradiction": assume something and prove it false; then, by the law of the excluded middle, the contrary of what was assumed must be true. In the early twentieth century a small coterie of mathematicians, called "intuitionists", arose who distrusted proof by contradiction. Mathematics had become so complex during the nineteenth century that these folks suspected there might, after all, be a way of "weaseling" out of the excluded middle. In that case only direct proofs could be trusted. The intuitionist idea did not sit well with most mathematicians, who were quite happy with one of their favorite weapons.
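For readers who like the argument laid out in compact symbols, the same proof by contradiction can be set in a few lines of LaTeX (this restates the reasoning above, nothing more):

```latex
% Irrationality of sqrt(2), restating the argument in the text.
Suppose $\sqrt{2} = a/b$ with integers $a, b$ sharing no common factor.
Squaring gives $a^2 = 2b^2$, so $a^2$ is even; since odd squares are odd,
$a$ is even, say $a = 2c$. Substituting, $4c^2 = 2b^2$, i.e.\ $b^2 = 2c^2$,
so $b$ is even too. Then $a$ and $b$ share the factor $2$, contradicting
our assumption; by the excluded middle, no such fraction $a/b$ exists.
```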
Getting back to the Greeks and the fifth century BCE, one realizes that after discovering the puzzling character of √2 the Pythagoreans were relatively helpless, in part because of inadequacies in their number notation. I haven't tried to research when and how progress was made in resolving their conundrum during the roughly 25 centuries since Hippasus lived and died, but WE are not helpless, and with the help of our marvelous number system and a spreadsheet such as Excel, we can show how the Greeks could possibly have found some relief from their dilemma. The answer comes by way of what are called Pythagorean triplets: three integers, like 3,4,5, which satisfy the Pythagorean law. With 3,4,5 one has 3² + 4² = 5². Other triplets are 8,15,17 and 5,12,13. There is a simple way of finding these triplets. Consider two integers p and q where q is larger than p, where one of p and q is even and the other odd, and where p and q have no common factor. Then let f = q² + p², d = q² – p², and e = 2pq. One finds that d² + e² = f². Some examples: p = 1, q = 2 leads to 3,4,5; p = 2, q = 3 leads to 5,12,13. These triplets have a geometrical meaning in that there exist right triangles whose sides have lengths whose ratios are Pythagorean triplets. Now consider p = 2, q = 5, which leads to the triplet 20,21,29. If we consider a right triangle with these lengths, we notice that the sides 20 and 21 are pretty close to each other in length, so that the shape of the triangle is almost the same as one with sides 1, 1 and hypotenuse √2. We can infer that 29/21 should be less than √2 and 29/20 should be greater than √2. Furthermore, if we double the triangle to 40,42,58 and note that 41 lies halfway between 40 and 42, the ratio 58/41 should be pretty darn close to √2. We can check our suspicion about 58/41 by using a spreadsheet: 58/41 is 1.41463 to 5 places, while √2 to 5 places is 1.41421. The difference is 0.00042, so the approximation 58/41 is off by 42 parts in 100,000, or 0.042%. The ancient Greeks had no way of doing what we have just done; but they could have squared 58 and 41 to see if the square of 58 was about twice the square of 41. What they would have found is that 58² is 3364 while 2 × 41² is 3362, so the fraction 58/41 is indeed a darn good approximation. Would the Greeks have been satisfied? Almost certainly not. In those days Idealism reigned, as it still does in modern mathematics. What is demanded is an exact answer, not an approximation.
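The whole construction fits in a few lines of Python; this is a sketch of the recipe just described, with the (p, q) = (2, 5) example checked at the end (the function name is mine, not the author's):

```python
from math import gcd, sqrt

def triplet(p: int, q: int):
    """Pythagorean triplet from integers q > p >= 1 of opposite parity
    with no common factor, following the d, e, f recipe in the text."""
    assert q > p >= 1 and (p + q) % 2 == 1 and gcd(p, q) == 1
    d, e, f = q*q - p*p, 2*p*q, q*q + p*p
    assert d*d + e*e == f*f              # the Pythagorean law, by algebra
    return d, e, f

d, e, f = triplet(2, 5)                  # -> (21, 20, 29)
approx = 2*f / (d + e)                   # doubled triangle, averaged legs: 58/41
print(approx - sqrt(2))                  # ~4.2e-4, the 0.042% error in the text
```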
While there is no exact fraction equal to √2, we can find fractions that get closer, closer and forever closer. Start by noticing that a 3,4,5 triangle has legs 3, 4 which, though not as close in length as 20, 21, are only 1 apart. Double the 3,4,5 triangle to 6,8,10 and consider an "average" leg of 7 relative to the hypotenuse of 10. The fraction 10/7 = 1.428 to 3 places, while √2 = 1.414. So 10/7 is off by only about 1%, remarkably close. Furthermore, squaring 10 and 7 gives 100 and 49, and 100/49 is very nearly 100/50 = 2. The Pythagoreans could easily have found this approximation and might have been impressed, though certainly not satisfied.
I discovered these results about a month or so ago when I began to play with an Excel spreadsheet. Playing with numbers for me is relaxing and fun; it is a pure game whether or not I find anything of interest. I suspect that this kind of "playing" is how "real" mathematicians do find genuinely interesting results, and if lucky, may come up with something worthy of a Fields Medal, the equivalent in mathematics of a Nobel prize in other fields. While my playing is pretty much innocent of any significance, it is still fun, throws some light on the ancient Greek dilemma, and for those of you still reading, shows how a sophisticated idea from modern mathematics is simple enough to be easily understood.
With spreadsheet in hand, what I wondered was this: p,q = 1,2 and p,q = 2,5 lead to approximations of √2 via Pythagorean triplets. Are there other p,q's that lead to even better approximations? To find them I adopted the most powerful method in all of mathematics: trial and error. With a spreadsheet it is easy to try many p,q's, and I found that p = 5, q = 12 led to another, even better approximation, off by about 1 part in 100,000. With three p,q's in hand I could refine my guesswork and soon came up with p = 12, q = 29. I noticed that in the sequence 1,2,5,12,29,… successive pairs gave increasingly better p,q's. This was an "aha" moment and led to a question. Could I find a rule and extend this sequence indefinitely?
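The same trial-and-error search is easy to automate; here is a plausible sketch of the spreadsheet experiment, keeping only the almost-isosceles triangles (legs differing by 1) and ranking the candidates by error (the cutoff q < 200 is arbitrary):

```python
from math import gcd, sqrt

# Trial and error over p, q - the spreadsheet experiment in a few lines.
best = []
for q in range(2, 200):
    for p in range(1, q):
        if (p + q) % 2 == 0 or gcd(p, q) != 1:
            continue                      # opposite parity, no common factor
        d, e, f = q*q - p*p, 2*p*q, q*q + p*p
        if abs(d - e) != 1:
            continue                      # keep only almost-isosceles triangles
        err = abs(2*f / (d + e) - sqrt(2))
        best.append((err, p, q))

for err, p, q in sorted(best)[:4]:        # the sequence 5,12 / 12,29 / ... emerges
    print(f"p={p:3d} q={q:3d}  error={err:.2e}")
```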
In my life there is a long history of trying to find rules for sequences of numbers. In elementary school at Hanahauoli, a private school in the Makiki area of Honolulu, I learned elementary arithmetic fairly easily, but found it profoundly uninteresting if not quite boring. Seventh grade at Punahou was not much better, but was interrupted partway through the year by the Pearl Harbor attack of December 7, 1941. The Punahou campus was taken over by the Army Corps of Engineers and our class relocated to an open pavilion on the University of Hawaii campus in lower Manoa Valley. I mostly remember enjoying games in which everyone tried to tackle whoever could grab and run with a football, even though I was one of the smaller children in the class. Desks were brought in and we had classes in groups while the rain poured down outside the pavilion. Probably it was during this year that we began to learn how fractions could be expressed as decimals. In the eighth grade we moved into an actual building on the main part of the University campus and had Miss Hall as our math teacher. The math was still pretty boring, but Miss Hall was an inspiring teacher, one of those legendary types with a fierce aspect but a heart of gold. We learned how to extract square roots, a process I could actually enjoy, and Miss Hall told us about the fascinating things we would learn as we progressed in math. There would be two years of algebra, geometry, trigonometry and, if we progressed through all of these, the magic of "calculus". It was the first time I had heard the word and, of course, I had no idea what it might be about, but I began to find math interesting. In the ninth grade we moved back to the Punahou campus and our algebra teacher was Mr. Slade, the school principal, who had decided to get back to teaching for a year. At first we were all put off a bit by having the fearsome principal as a teacher, but we quickly learned that Mr. Slade was actually a gentle person and a gifted teacher. As we learned the manipulations of algebra and how to solve "word problems", Mr. Slade would, fairly often, write a list of numbers on the board and ask us to find a formula for the sequence. I thoroughly enjoyed this exercise and learned to take differences or even second differences of pairs in a sequence. If the second differences were all the same, the expression would be quadratic and could easily be found by trial and error. Mr. Slade also tried to make us appreciate the power of algebra by explaining what was meant by the word "abstraction". I recall that I didn't have the slightest understanding of what he was driving at, but my intuition could easily deal with an actual abstraction without understanding the general idea: that in place of concrete numbers we were using symbols which could stand for any number. Later, when I did move on to calculus, which involves another step up in abstraction, I at first had difficulty with the notation f(x), called a "function" of x, an abstract notation for any formula, or indeed a representation of a mapping that could occur without a formula. I soon got this idea straight and had little trouble later with the next step of abstraction, to the idea used in quantum mechanics of an abstract "operator" that changes one function into another.
Getting back to the sequence 1,2,5,12,29,… I quickly found that taking differences didn't work; the differences never seemed to get much smaller, because the sequence turns out to grow exponentially. I soon discovered, however, using the spreadsheet, that quotients worked: take 2/1, 5/2, 12/5, 29/12, all of which become more and more similar. Then, multiplying 29 by the last quotient, I got 70.08. Since 29 was odd, I needed an even number for the next q, so 70 looked good, and indeed I confirmed that the triplet resulting from 29, 70 was 4059, 4060, 5741, with an estimate for √2 that was off by only 1 part in 100 million. After 70 I found the next few members of the sequence: 169, 408, 985. The multiplier to try for the next member seemed to be closing in on 2.4142, or 1 + √2. At this point I stopped short of trying for a proof of that possibility, both because I am lazy and because the possible result seemed uninteresting. What is interesting is that the sequence of p,q's goes on forever and that approximations for √2 using the resulting triplets converge on √2 as a limit. The idea of a sequence converging to a limit was only rigorously defined in the 19th century. Possibly it might have provided satisfaction to the ancient Greeks. Instead, the idea of irrational numbers beyond the fractions became clear only with the invention in India of our place-based numerical notation and the number 0.
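The pattern noticed here can be written as the recurrence xₙ = 2xₙ₋₁ + xₙ₋₂, which reproduces 1, 2, 5, 12, 29, 70, 169, 408, 985, …; these are in fact the Pell numbers, whose successive quotients do converge to 1 + √2. A short sketch verifying the convergence, with the recurrence simply read off the listed terms:

```python
from math import sqrt

# The recurrence x[n] = 2*x[n-1] + x[n-2] matches 1, 2, 5, 12, 29, 70, ...
seq = [1, 2]
while len(seq) < 12:
    seq.append(2*seq[-1] + seq[-2])

for p, q in zip(seq, seq[1:]):
    d, e, f = q*q - p*p, 2*p*q, q*q + p*p
    approx = 2*f / (d + e)          # sqrt(2) estimate from the doubled triangle
    print(f"p={p:6d} q={q:6d}  approx={approx:.12f}  q/p={q/p:.7f}")

print(sqrt(2))                      # 1.414213562373... for comparison
```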
Place-based number notation was developed separately in several places: in ancient Babylon, in the Maya civilization of Central America, in China, and in India. A place-based system with a base of 10 is the one we now use. Somewhere in one's education one has learned about the 1's column just to the left of the decimal point, then the 10's column, the 100's column and so forth. When the ancient Hindus and the other civilizations began to develop the idea of a place-based system, there was no concept of zero. Presumably the thought was that symbols should stand for something. Why would one possibly need a symbol that stood for nothing? So, one would begin with symbols 1 through 9 and designate 10 by "1·". The dot "·" is called a "place holder". It has no meaning as a numeral, serving instead as a kind of punctuation mark which shows that one has "10", not 1. Using the place holder in the Ionian example above, τδ would be 3·4, the dot holding the 10's place open. The story with place holders is that the Babylonians and Mayans never went beyond them, but the Hindus gradually realized the dot could have a numerical meaning in its own right, and "0" was discovered (invented?). Recently, on September 13th or 14th, 2017, there was a flurry of reports that carbon dating of an ancient Indian document, the Bakhshali manuscript, revealed that some of its birch-bark pages were 500 years older than previously estimated, dating to a time between 224 and 383 AD. The place-holder symbol occurring ubiquitously in the manuscript was called shunya-bindu in the ancient Sanskrit, translated in the Wikipedia article about the manuscript as "the dot of the empty place". (Note that in Buddhism shunyata refers to the "great emptiness", a mystic concept which we might take as the profound absence of being logically prior to the "big bang".) According to the Wikipedia article, the Bakhshali manuscript is full of mathematics, including algebraic equations and negative numbers in the form of debts. As a habitual skeptic, I wondered when I first heard about the new dating whether Indian mathematicians, with their brilliant intuition, hadn't immediately realized the numerical meaning of their place holder. Probably they did not. An easy way to see the necessity of zero as a number is to consider negative numbers as they join to the positives. In thinking and teaching about math, I believe that using concrete examples is the best road to abstract understanding, and debts are a compelling example. At first one might consider one's debts as a list of positive numbers, amounts owed. One would also have another list of positive numbers, one's assets, amounts owned. The idea might then occur of putting the two lists together, using "−" signs in front of the debts. As income comes in, one's worth goes, for example, from −3 to −2 to −1. Then what? Before going positive, there is a time when one owes nothing and has nothing. The number 0 signifies this time, before the next increment of income sends one's worth to 1. The combined list would then be …, −3, −2, −1, 0, 1, 2, 3, … . Arithmetic with properly extended rules, combining various sources of debt and income, then becomes completely consistent, but only because 0 was included.
If the above seems as if I'm belaboring the obvious, let me then ask why, when considering dates, the next year after 1 BCE is not 0 but 1 AD. Our dating system was made up at an early time, before we had adopted "0" in the West. Historians have to subtract 1 when calculating intervals in years between BCE and AD dates, and centuries end in hundreds, not 99's. This example is a good one for showing that once one gets locked into a convention, it becomes difficult if not impossible to change. I was quietly amused at the outcry as Y2K, the year 2000, came along, with many insistent voices pointing out the ignorance of those of us who considered the 21st century to have begun. The idea of zero is not obvious, and I hope I've shown, in considering the Pythagoreans and their dilemma with square roots, just how crippled one is trying to get along without it.
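The historians' subtract-one rule is small enough to state as a two-line function (a toy illustration, not from the original post):

```python
def years_between(bce: int, ad: int) -> int:
    """Elapsed years from a BCE date to an AD date. There is no year 0:
    1 BCE is followed directly by AD 1, so one year must be subtracted."""
    return bce + ad - 1

print(years_between(1, 1))       # 1, not 2: from 1 BCE to AD 1 is one year
print(years_between(570, 2017))  # 2586 years from Pythagoras's birth to 2017
```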
In my last post, "Two Cultures", I wrote that "…one hopes for a creative amalgam of West and East." So far this blog has concentrated on Eastern, especially Buddhist ideas, particularly Zen, wondering if Western thought can be helpful in approaching the Zen experience. If I am indeed dedicated to going in the other direction, demonstrating that Zen intuition can contribute to Western philosophy, I now need to understand Western philosophy at a deeper level. In fact, it may well be that Eastern and Western approaches to ultimate understanding are immiscible, like oil and water, so that far from being helpful to one another their intersection becomes nothing more than a contradictory mess. My intuition says otherwise, but in order for me to specifically find and point out ways that each can help the other combine into a single broader and deeper approach to what it's all about, I need a more thorough appreciation of Western philosophy. That is, I need to understand Plato. I say Plato because I remembered and then found (in the book I'm about to consider) a quote: "The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato", from Process and Reality by Alfred North Whitehead. Besides the Whitehead quote, there is a general understanding that Western philosophy only came into full flower with Plato. Plato's works were the Urquell, the spring from which all flowed.
Of course, over the years I've been casually exposed to Plato. At Stanford, all freshmen at the time I was there were required to take the year-long History of Western Civilization course, which consisted of reading works deemed significant for Western thought, with lectures and discussions in class. The class was largely wasted on me because, as a freshman, besides being occupied with my interesting roommates, I was on the swimming team, not much interested in history, and bone lazy. I do remember reading Plato's Phaedo, impressed with the story though far from impressed with Socrates's reasons for not being afraid of death. Then, over the years, I ran many times into allusions to the story of "the cave." Then there are Platonic "ideals". None of this exposure really grabbed me. What did make a difference was running recently into a piece on the internet which discussed a philosophical issue with impressive clarity. Here was someone who could talk philosophy in a way that made sense. The author was a woman named Rebecca Goldstein. Googling her, I found that she is a rather unusual philosopher in that she writes novels as well as philosophy. I won't get into the interesting biographical details about her, because these can easily be found on the internet. After enjoying her novel, 36 Arguments for the Existence of God: A Work of Fiction, I looked in Amazon to see what else she had written and saw listed Plato at the Googleplex: Why Philosophy Won't Go Away. This was available in our library in eBook form, so I read it on my Kindle and then ordered a hard copy from Amazon. Below, in the interests of brevity, I will sometimes refer to Ms. Goldstein as RNG (for Rebecca Newberger Goldstein).
Understanding Plato via the writing of a gifted philosopher who writes with clarity seemed better than trying to find adequate translations of Plato's work or trying to learn classical Greek so that I could read him in the original. Of course, there would be the difficulty of really understanding Plato no matter what the approach. So I will consider Ms. Goldstein's book not as authoritative, but as a foundation for riffs off of what I conceive her to have said about Plato and Western philosophy. Of course, I agree with her thesis that philosophy is here to stay, and find her criticism of philosophy-jeerers, such as Lawrence Krauss, amusing and telling, though that is not what interests me in her book. Incidentally, I have read Krauss's A Universe from Nothing: Why There Is Something Rather than Nothing and found it fascinating. He is a great physics popularizer and, in my opinion, writes philosophically, so his wholesale condemnation of philosophy is not to be taken seriously. Possibly a critical review of his "Something" book by a philosopher intensified his antagonism toward philosophy to the point that he had to express his outrage. In that state one finds slings and arrows to hurl at philosophy, rather than relaxing one's ideological grip as suggested in my last post. A wholesale condemnation of philosophy is ridiculous. However, it seems to me that the situation is not "either/or", for part of the life blood of philosophy is criticism of philosophy. For example, if in getting at what really matters in philosophy one should consider "differences that make a difference" (Gregory Bateson's definition of "information"), I find that too often philosophers seem to haggle over differences that to me make no difference whatsoever. Perhaps I lack a critical component of what it takes to be a philosopher. Whether or not that is so, I find Ms. Goldstein's writing mostly clear and fascinating.
Before getting into what Ms. Goldstein has to say about Plato, I will mention one more thought about philosophy. In most disciplines, talking about or discussing the discipline is separate from practicing it. Writing about physics, chemistry, molecular biology, sociology, economics, or engineering, for example, is not doing research in or practicing those disciplines. If one writes about philosophy, however, one is actually doing philosophy, whether or not one is a professional, card-carrying philosopher. If one writes ignorantly, without sufficient thought or insight, one is doing "bad" philosophy, easily dismissed; but, nevertheless, one is doing philosophy. The only other subject I can think of offhand which perhaps possesses this characteristic is literature. A literary critic, writing about a literary work, can actually create a piece of literature. I don't think this claim works for history. A historian can do primary research and write up the story she or he finds (readable history always tells a story), but as soon as she talks in general or makes a judgement, she is doing philosophy of history, not history. Perhaps this last claim is merely a quibble, but certainly one reason philosophy will never go away is that thoughtful people will always continue to practice it, making judgements and seeking insights into whatever is on their mind. Whether university departments of philosophy offering degrees in the subject will wither away in the future is another question. It seems to me intuitively unlikely.
Turning to Plato, whether in classical Greece or in today's Googleplex, it is clear that, as a professional philosopher, RNG has read everything Plato wrote or might have written, probably in more than one translation, as well as what other philosophers have had to say about Plato, including inquiries into the meaning of words in classical Greek and into the ethos of the society that gave rise to Plato's philosophy. A fascinating observation (Googleplex, p. 4) is that it is difficult or impossible to discover what Plato himself personally thought about any of the far-flung positions expounded in his various dialogues. Positions there are aplenty, but none that Plato would unambiguously assent to. RNG remarks on the many disagreements philosophers have had about Plato's various positions and compares him to Shakespeare, as one whose personal views are unknowable. Further (on p. 40), quoting from Plato's Seventh Letter, RNG concludes that "he never committed his own philosophical views to writing," and further, "Plato didn't think the written word could do justice to what philosophy is supposed to do." This in spite of the fact that he wrote extensively. RNG considers that the dialogue form of Plato's writings suggests that his view of what philosophy is supposed to do is "Nothing less than to render violence to our sense of ourselves and our world, our sense of ourselves in the world." RNG quotes Plato, talking of philosophy, as saying, "… for there is no way of putting it in words like other studies. Acquaintance with it must come rather after a long period of attendance in instruction in the subject itself and of close companionship, when suddenly, like a blaze kindled by a leaping spark, it is generated in the soul and at once becomes self-sustaining." (Googleplex, p. 40, Seventh Letter quote.)
This last sounds suspiciously like the "enlightenment" that is supposed to come out of Buddhist meditation and training. What is different is the methodology. With Plato's philosophy one attains the transcendent state by intense thinking about the conundrums of philosophy, trying to gain insight through reason and rationality into deep questions, compelling but unanswerable, a pursuit which ultimately withdraws from one the "life support" of one's unquestioned certainties, leaving one "free" in an empty universe. Or am I reading too much into a specious resemblance between Plato and Buddhism? Certainly, besides bringing personal enlightenment, philosophy attempts to bring about insights which can be expressed in language. It seems, in fact, that over the stretch of time since the days of classical Greece, philosophy has concentrated on trying to bring clarity to its questions by using language in a precise way, rather than becoming a means of instilling an awareness beyond language. Western philosophy, it seems, has given up the quest for transcendence, relinquishing that pursuit to religions based on faith. It seems to me that Zen has a contribution to make here, in that the enlightenment it postulates is beyond language and therefore irrefutable via language. It is to be approached, according to what I've said earlier in this blog, via a path which totally rejects superstition, magic, or even belief in anything, as far as that is possible. Philosophy, it seems to me, is an excellent Western path for a "seeker" who is attracted in that direction. And if, as I assume, RNG is correct in what she has said about Plato's philosophy, such seeking would not be new to philosophy, but instead a turn of a spiral back towards Plato's original conception.
So much for this post. Later I hope to return to RNG, Plato at the Googleplex, and further ideas about a joining of East and West. For the immediate future, however, I would like to take into account the objection that philosophy as a spiritual path is intellectually elitist, as indeed it might seem if one accepts the idea that "elitism" itself is other than an elitist convention. Be that as it may, now that I've brought up the idea of a "seeker", it is worth pointing out that seeking can adopt paths that are physical or artistic in nature, though not necessarily anti-intellectual. So, on to the next post… |
bfeecf89c3f5920d |
Computation 2017, 5(4), 49; doi:10.3390/computation5040049
Challenges for Theory and Computation
Institute for Materials Chemistry, Vienna University of Technology, Getreidemarkt 9/165, A-1060 Vienna, Austria
Received: 21 November 2017 / Accepted: 1 December 2017 / Published: 4 December 2017
The routinely made assumptions for simulating solid materials are briefly summarized, since they need to be critically assessed when new aspects become important, such as excited states, finite temperature, or time dependence. Significantly higher computer power combined with improved experimental data opens new areas for interdisciplinary research, for which new ideas and concepts are needed.
quantum mechanics; density functional theory; approximations; software; WIEN2k
1. Introduction
Computations for systems at the atomic scale (molecules or solids) have become routine for many standard applications, such as the interpretation of experimental data or providing a fundamental understanding of properties. Such calculations will also be needed in the future, and thus will remain important. However, there are new challenges, which we want to address with a focus on solids, interfaces, and surfaces, whose computational aspects were presented in a recent book chapter [1]. At present, it becomes important to scrutinize or reconsider the basic assumptions and approximations used so far. They were needed to make computations feasible, but there are several cases that require new approaches, and thus one needs to go a step further. These cases concern the atomic structure, the quantum mechanics, the computational aspects (including software development), and the relation to experiments (and applications).
2. Presently Made Assumptions and Approximations
A series of assumptions and approximations are commonly made, simplifying and idealizing complex solid-state materials so that we can simulate them and provide an understanding of their properties (e.g., [2]). We summarize these aspects in the next four subsections.
2.1. Atomic Structure
The early stages of solid-state computations were concerned with the study of relatively simple systems, such as a metal (e.g., Al), an ionic solid (e.g., NaCl or CsCl), a semiconductor (e.g., Si), or a magnet (e.g., Fe or Co). For such systems, the assumption of a perfect crystal structure that can be represented by a unit cell was and is justified. This means that the unit cell—assuming periodic boundary conditions—is repeated to infinity in all three dimensions, and allows a representation in reciprocal space. These concepts are well known and made the computations of solids feasible. However, interest has recently shifted to more complex systems; for example, a crystal of finite size (nanocrystal), surfaces (e.g., in heterogeneous catalysis), impurities (doping), defects, or non-stoichiometry, as discussed for example in Ref. [1] (pp. 227–229). Additionally, the types of solids of interest have multiplied to include superconductors, magnets, ferroelectrics, molecular systems, and all the way to proteins. This also means that a larger variety of chemical bonding now occurs, including metallic, covalent, ionic, and also van der Waals interactions.
2.2. Quantum Mechanics
The properties of crystals at the atomic scale are mainly determined by the electronic structure, requiring a quantum mechanical treatment. In the traditional approach, one goes from the time-dependent to the time-independent Schrödinger equation and neglects, as a first step, relativistic effects (i.e., we use Schrödinger's equation instead of Dirac's equation). Since the nuclei are much heavier than the electrons, one makes the Born–Oppenheimer approximation, in which the motion of the electrons is not coupled to the motion of the nuclei. This leads to Schrödinger's equation for the electrons with the nuclei at rest, taken at T = 0 K. For studying vibrations, we can perform several such calculations (with displaced atomic positions) and derive the dynamical matrix, from which phonons can be computed. There are also alternative ways based on perturbation theory. Usually the electron–phonon coupling is not included.
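In standard lattice-dynamics notation (added here for orientation, not spelled out in the original text), the force constants Φ obtained from such displaced-atom calculations build the dynamical matrix, whose eigenvalues give the phonon frequencies:

```latex
D_{\alpha\beta}^{\kappa\kappa'}(\mathbf{q})
  = \frac{1}{\sqrt{M_\kappa M_{\kappa'}}}
    \sum_{l} \Phi_{\alpha\beta}(l\kappa, 0\kappa')\,
    e^{i\mathbf{q}\cdot\mathbf{R}_l},
\qquad
\det\bigl[\, D(\mathbf{q}) - \omega^2(\mathbf{q})\,\mathbb{1} \,\bigr] = 0 ,
```

where the M are atomic masses and Φ is the matrix of second derivatives of the total energy with respect to atomic displacements, estimated by finite differences of the forces.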
In the past, we had three different directions for a quantum mechanical treatment, with very little interaction between them. For molecular systems, the quantum chemists started with the Hartree–Fock (HF) method based on many-electron wave functions. In the HF scheme, exchange is treated exactly (by construction), but correlation effects are missing, because each electron sees only an average of all the other electrons. Correlation can be included in one of the post-HF schemes, like configuration interaction (CI) [3] or coupled cluster (CC) methods [4], both of which can reach almost exact results, but with the drawback of a high computational cost. This limits the approach to rather small systems. In quantum chemistry, such schemes are called "ab initio" methods.
In the solid-state community, density functional theory (DFT) was the preferred scheme, in which the electron density plays the key role. Walter Kohn received the Nobel Prize for the development of density functional theory, and thus it is appropriate to mention his significant contributions as summarized in an obituary for him [5], which also contains several relevant references. In contrast to the wave-function-based methods, the fundamental idea of DFT is to replace the complete many-electron wave function with the much simpler ground-state electron density as the main variable. This is an enormous simplification, because the density depends only on position (i.e., three variables). This opens the possibility of treating relatively large systems. However, the exact DFT functional is unknown, and thus approximations are needed—an active field of research. The simple DFT schemes are comparable to HF in terms of computer time, but only for localized basis sets (as used in chemistry), not for plane waves (as used in solids). In contrast to HF, DFT methods include correlation, but treat exchange only approximately.
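For reference, the working equations of this scheme are the Kohn–Sham equations, written here in atomic units (this standard form is added for the reader, it is not quoted from the article); all the many-body complexity is hidden in the approximate exchange-correlation potential:

```latex
\left[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\mathbf{r})
  + \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}'
  + v_{\mathrm{xc}}[\rho](\mathbf{r}) \right] \psi_i(\mathbf{r})
  = \varepsilon_i\, \psi_i(\mathbf{r}),
\qquad
\rho(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} |\psi_i(\mathbf{r})|^2 .
```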
The third category is many-body physics (MBP) [6], which can treat complex phenomena such as highly correlated electrons, fluctuations or effects like electron–phonon coupling. Such schemes can describe complex situations, but are often based on parameters that are not directly derived for a real system.
Fortunately, the experts from these three categories of solving quantum mechanical (QM) problems have started to collaborate, and they now benefit from each other. Each method has advantages and disadvantages. For example, one can include a fraction of HF exchange in DFT in so-called hybrid methods, or extract parameters from DFT for a many-body treatment. This combination of theories is one of the new challenges.
2.3. Computational Aspects and Software Development
For any calculation, one must as a first step define the atomic structure in an idealized form, with a unit cell or supercell that can represent defects, impurities, interfaces, and surfaces (including vacuum). Periodic boundary conditions are assumed, which means, for example, that an impurity atom has a periodic image in the neighboring supercells, representing an artificial order. The larger the supercell is chosen to be, the smaller the effect of interactions between these artificial periodic images. These schemes make it clear that the atomic structure is always an idealization. The input data can come from experiment or be chosen on purpose to study hypothetical cases. The advantage of theory is that the atomic structure—although idealized—is well-defined as input, in contrast to experiment, which can only approximately determine it.
The second step is choosing the quantum mechanical treatment as mentioned above, namely wave-function-based methods (HF and beyond) [3,4], DFT [1,2,7], or MBP [6]. In addition, the choice between all-electron schemes or valence electrons only (using pseudopotentials) must be made, and also how relativistic effects (including spin-orbit coupling) shall be treated (as discussed, for example, in chapter 4 of Ref. [1] (pp. 234–238)). Then, a computer code is selected (see for example [2]) for solving the corresponding equations with a proper basis set (for example, the WIEN2k program package that was developed in my group). The basis sets can consist of analytic functions (like Gaussian orbitals or plane waves), numerical functions, or a combination of them. Depending on the property of interest, the convergence of a calculation needs to be tested; for example, in terms of the number of k-points (in the Brillouin zone) or the basis-set size (e.g., the number of plane waves). For some properties (e.g., the magnetic anisotropy energy), a high numerical precision is needed, since the energy difference may occur in the 10th decimal of the total energy.
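Such a convergence test is usually scripted. The sketch below is purely illustrative: `run_scf` is a hypothetical wrapper around whatever electronic-structure code is used (it is not a WIEN2k or other real API), and the parameter ranges and tolerance are placeholders.

```python
# Hypothetical convergence study. run_scf stands in for driving any DFT code;
# it is NOT an actual WIEN2k (or other) API -- wrap your own code here.
def run_scf(kmesh: int, ecut_ry: float) -> float:
    """Return the total energy (Ry) for a given k-mesh and plane-wave cutoff."""
    raise NotImplementedError("call your electronic-structure code here")

def converge(values, fixed, tol_ry=1e-6):
    """Walk through parameter settings until the total energy changes
    by less than tol_ry between two successive runs."""
    last = None
    for v in values:
        energy = run_scf(**{**fixed, **v})
        if last is not None and abs(energy - last) < tol_ry:
            return v, energy
        last = energy
    raise RuntimeError("not converged within the tested range")

# Example: converge the k-mesh at a fixed cutoff, then the cutoff at that mesh.
# kbest, _ = converge([{"kmesh": n} for n in (4, 8, 12, 16)], {"ecut_ry": 20.0})
```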
It is very useful to have a large variety of computer codes in this field, since each of them puts the focus on different materials and properties. A certain code can be optimal for one case, but would not be a good choice for another. Sometimes we need very high accuracy to investigate fine details; in other cases, a cruder calculation is sufficient to answer an open question. For the accuracy, we need validation on well-chosen test cases. Reproducibility is important in this field: different computer codes should give the same results, provided they use the same structure and (first-principles) formalism (e.g., the same DFT functional) carried to full convergence. Recently, error estimates have been derived [7] for solid-state DFT calculations based on 40 different computer codes. These tests showed very good agreement between the accurate codes (mostly all-electron full-potential codes), while deviations occurred for others (e.g., pseudopotential codes), which may be better in terms of efficiency. New concepts (methodology) can improve the efficiency, but new algorithms (e.g., for parallelization) can also be helpful when properly chosen for the available hardware, ranging from laptops to supercomputers. With a more efficient computation, one can treat larger systems or explore more cases. The latter is needed, for example, in material design or optimization, where one can include all elements—irrespective of their abundance or environmental aspects—which are crucial for applications. Computations cannot find the optimal material, but they can "narrow the design space" so that only those materials need to be synthesized and investigated which are predicted to have the desired property. For each open problem, we must find a good balance between accuracy and efficiency. The validation tests have shown that the deviation between accurate codes is often significantly smaller than the typical difference between theory (based on different functionals) and experimental data.
There are different ways of distributing a code: open source, access to the source code, or only executables; the latter is preferred by software companies. From a scientific perspective, the WIEN2k group favors making the source code available to the registered users. This policy has helped to generate a “WIEN2k community” of researchers (about 3000 groups around the world). Many of them have contributed to the development of the code in several aspects, such as bug fixes, adding or suggesting new features, and improving the documentation. Other developers also followed this strategy.
There is another problem that all computer-code developers face—namely, user-friendliness. Based on the experience (from previous calculations) of the experts, one can provide many default options to make calculations easier—especially for novice users or experimentalists. However, there is also a drawback—namely, the danger of using the code as a black box: "push a button and receive the result". Previously, users had to think about how to run a calculation and thus look at details instead of ignoring them.
2.4. Comparing Theory with Experiment
The importance of theoretical simulations has increased over the years due to the significantly improved computer power. Calculations can provide a basic understanding of structure–property relations (see for example [2]). In this context, it is often helpful to decompose the result into contributions in order to find the driving force, something experiments cannot do. For example, one can compute artificial cases like an impurity atom in a solid with and without relaxation of the atomic positions of the neighbors. From a comparison, we can clarify whether or not the relaxation is crucial for the studied property. The increased complexity of systems often makes computer graphics tools essential for analyzing the many details of a computation. Take a unit cell with one thousand atoms and consider results such as the band structure, density of states, or electron density: the data are stored somewhere, but the analysis needs new tools.
When deviations between a theoretical simulation and experimental data occur, we must critically scrutinize their origin.
• Did we properly model (idealize) the atomic structure (as discussed in Section 2.1)?
• Is the chosen quantum mechanical treatment, e.g., by DFT, sufficient (Section 2.2)?
• Is full convergence reached in the calculation (as summarized in Section 2.3: in terms of k-points and basis sets)?
• Are there additional aspects which may affect the results, such as relativistic treatment, finite temperature, pressure, ground state, excited or metastable state?
• Are the assumptions or idealizations justified?
All five categories can cause an observed deviation between theory and experiment. Some aspects can be tested, for example by using a larger supercell, different DFT schemes, or repeating the calculation with more basis functions. With all the possibilities mentioned in the previous sections, it is often useful to combine different theories according to their advantages while keeping their disadvantages in mind. Even good agreement between theory and experiment may come from error compensation (e.g., a too-simple atomic structure combined with incomplete convergence). In some cases, parameters are used in a simulation, for example a Hubbard U in correlated systems, which may open a band gap. If one only adjusts U to fit the experimental band gap, the agreement is trivial. However, when several experimental data sets agree for one chosen U, one gets at least a consistent picture, which may be close to reality.
During the last decade, not only has computation improved significantly, but so have the experimental techniques; for example, through better resolution (in space and time) or better detectors. Recent developments (e.g., the use of short laser pulses) bring a new focus on time dependence instead of the time averaging assumed so far. We often need sophisticated experiments to ask the proper questions, which theory may be able to answer. Alternative explanations can be tested by simulations using predefined (partly artificial) test cases.
3. Discussion and New Challenges
Approximately 20 years ago, the fields of quantum chemistry, DFT, and many-body theory had hardly any cooperation among them, but this has fortunately changed. The strengths and weaknesses of the different approaches are recognized and mutually respected. One can solve complex problems only by close collaboration with the corresponding experts. Currently, computational chemistry and physics are often done within density functional theory, which comes in a large variety of approximations. These need to be explored and validated against even more sophisticated methods, which, however, may be limited to relatively small system sizes due to their computational effort. The interdisciplinary nature of these issues is obvious, since the open challenges span chemistry, physics, mathematics, computer science, and materials science. By a combined effort of experts from all these fields, substantial progress has been achieved.
Another challenge comes from the experimental side. Take the work of Gatti and Macchi [8], who have presented a comprehensive overview of charge-density-related research that is closely related to DFT. For example, the multipolar model allows calculation of the static deformation electron density, and thus avoids the thermal smearing effects due to atomic motion (the Debye–Waller factor). Such representations can be directly compared to DFT calculations, which correspond to T = 0 K. New questions arise in connection with time-resolved structural analysis based on the development of the X-ray free-electron laser, making time resolution of a few fs accessible [9]. This is a new challenge for theory, namely in terms of time dependence.
Let us illustrate what can already be done with a few examples. Time-independent DFT focuses on ground-state properties. Formally speaking, we should not interpret the Kohn–Sham (KS) energies (in the form of the band structure) as excitation energies, as discussed in [1,2]. However, this standard DFT single-particle picture quite often describes excitations rather well. A proper DFT treatment of excited states is time-dependent DFT (TDDFT) [10]. In TDDFT one must make severe approximations in the choice of the exchange-correlation kernel, limiting the accuracy of this scheme. A properly chosen scheme (such as a core-hole calculation) allows the study of core-excitation spectra. The band gap is an important quantity for semiconductors and insulators, and can be calculated using an adjusted DFT functional (such as mBJ [11]). Recently, a careful discussion of the band gaps of solids was presented, combining fundamental concepts (generalized Kohn–Sham theory) with applications to selected systems [12]. If DFT single-particle theory is not sufficient, one can use the DFT orbitals as input for many-body perturbation theories, such as the GW approximation [13] for better quasiparticle energies, in which the self-energy Σ is expanded in terms of the single-particle Green's function G and the screened Coulomb interaction W. Another scheme is the Bethe–Salpeter equation (BSE) approach [14,15] to account for excitonic effects. Recently, the combination of TDDFT and excitons was presented in [16]. For highly correlated systems, which need a good description of the localized 3d or 4f states, inclusion of a Hubbard U in a generalized gradient approximation (GGA) may be sufficient for a proper description of the electronic structure (called GGA+U). However, sometimes one needs to go beyond this and use schemes like dynamical mean field theory (DMFT) [17] to improve the agreement with experiments (mostly spectroscopy). Recently, even structure optimizations of correlated materials became possible by a combination of DFT (using WIEN2k) and embedded DMFT [18]. Last but not least, the weak but sometimes important van der Waals interactions are usually not well described by standard DFT, requiring more sophisticated schemes [19]. In layered structures, standard DFT schemes (the local density approximation (LDA), most GGAs, meta-GGAs, and hybrids, as discussed in [1,2]) fail badly, but nonlocal functionals demonstrate one way to proceed [20]. Such aspects become important in solid materials with a strongly quasi-two-dimensional regime [21].
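The lowest order of Hedin's coupled equations, quoted here for orientation (standard many-body notation, not reproduced from the article), is what gives the GW approximation its name: the self-energy is the product of the Green's function and the screened interaction,

```latex
\Sigma(1,2) = i\, G(1,2)\, W(1^{+},2),
\qquad
W = \varepsilon^{-1} v ,
```

where v is the bare Coulomb interaction, ε the dielectric screening, and the arguments 1, 2 bundle space, spin, and time variables.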
It shall be mentioned that there are further directions beyond the topics of this short review, such as molecular dynamics or thermodynamics. Another direction is the ability to make use of extensive data (from theory or experiment), which are the key to the application of machine learning in materials science research [22]. The focus of this presentation is on solids, which are complicated systems, but they can be treated by a variety of methods. Many details are needed to make progress: concepts, a realistic atomic structure, accuracy, efficiency, validation of schemes, success or failure, trends, and predictions. In the past, idealizations and simplifying assumptions were necessary to make computations feasible. With the significantly improved computer power, it makes sense to critically analyze these assumptions, explore new concepts, and implement them into new (or existing) computer codes. There is no universal scheme that works for everything.
Acknowledgments
I want to thank the members of my group and the many researchers who contributed to the development of the WIEN2k code.
Conflicts of Interest
The author declares no conflict of interest.
1. Schwarz, K.; Blaha, P. DFT calculations for real solids. In Handbook of Solid State Chemistry; Theoretical Description; Dronskowski, R., Kikkawa, S.H., Stein, A., Eds.; Wiley-VCH Verlag: Weinheim, Germany, 2017; Volume 5, pp. 227–259. ISBN 978-3-527-32587-0. [Google Scholar] [CrossRef]
2. Schwarz, K. Computation of material properties at the atomic scale. In Selected Topics in Application of Quantum Mechanics; Pahlavani, M.R., Ed.; InTech: Rijeka, Croatia, 2015; Chapter 10; pp. 275–310. ISBN 978-953-51-2126-8. [Google Scholar]
3. Werner, H.-J.; Knowles, P.J. An efficient internally contracted multiconfiguration-reference configuration interaction method. J. Chem. Phys. 1988, 89, 5803–5814. [Google Scholar] [CrossRef]
4. Bartlett, R.J.; Musial, M. Coupled-cluster theory in quantum chemistry. Rev. Mod. Phys. 2007, 79, 291. [Google Scholar] [CrossRef]
5. Schwarz, K.; Sham, L.J.; Mattsson, A.E.; Scheffler, M. Obituary for Walter Kohn (1923–2016). Computation 2016, 4, 40. [Google Scholar] [CrossRef]
6. Bloch, I.; Dalibard, J.; Zwerger, W. Many-body physics with ultracold gases. Rev. Mod. Phys. 2008, 80, 885. [Google Scholar] [CrossRef]
7. Lejaeghere, K.; Bihlmayer, G.; Björkman, T.; Blaha, P.; Blügel, S.; Blum, V.; Caliste, D.; Castelli, I.E.; Clark, S.J.; Dal Corso, A.; et al. Reproducibility in density-functional theory calculations of solids. Science 2016, 351. [Google Scholar] [CrossRef] [PubMed]
8. Gatti, C.; Macchi, P. (Eds.) Modern Charge-Density Analysis; Springer: Dordrecht, The Netherlands, 2012; ISBN 978-90-481-3836-4.
9. Schoenlein, R.W.; Chattopadhyay, S.; Chong, H.H.W.; Glover, T.E.; Heimann, P.A.; Shank, C.V.; Zholents, A.A.; Zolotorev, M.S. Generation of femtosecond pulses of synchrotron radiation. Science 2000, 287, 2237–2240. [Google Scholar] [CrossRef]
10. Runge, E.; Gross, E.K.U. Density-functional theory for time-dependent systems. Phys. Rev. Lett. 1984, 52, 997. [Google Scholar] [CrossRef]
11. Tran, F.; Blaha, P. Accurate band gaps of semiconductors and insulators with a semilocal exchange-correlation functional. Phys. Rev. Lett. 2009, 102, 226401. [Google Scholar] [CrossRef] [PubMed]
12. Perdew, J.P.; Yang, W.; Burke, K.; Yang, Z.; Gross, E.K.U.; Scheffler, M.; Scuseria, G.E.; Henderson, T.M.; Zhang, I.Y.; Ruzsinszky, A.; et al. Understanding band gaps of solids in generalized Kohn–Sham theory. Proc. Natl. Acad. Sci. USA 2017, 114, 2801–2806. [Google Scholar] [CrossRef] [PubMed]
13. Jiang, H.; Blaha, P. GW with linearized augmented plane waves extended by high-energy local orbitals. Phys. Rev. B 2016, 93, 115203. [Google Scholar] [CrossRef]
14. Hetaba, W.; Blaha, P.; Tran, F.; Schattschneider, P. Calculating energy loss spectra of NiO: Advantages of the modified Becke–Johnson potential. Phys. Rev. B 2012, 85, 205108. [Google Scholar] [CrossRef]
15. Laskowski, R.; Blaha, P. Understanding the L2,3 X-ray absorption spectra of early transition 3d elements. Phys. Rev. B 2010, 82, 205104. [Google Scholar] [CrossRef]
16. Turkowski, V.; Din, N.U.; Rahman, T.S. Time-dependent density-functional theory and excitons in bulk and two-dimensional semiconductors. Computation 2017, 5, 39. [Google Scholar] [CrossRef]
17. Held, K. Electronic structure calculations using dynamical mean field theory. Adv. Phys. 2007, 56, 829–926. [Google Scholar] [CrossRef]
18. Haule, K.; Pascut, G.L. Forces for structural optimizations on correlated materials within DFT+embedded DMFT functional approach. Phys. Rev. B 2016, 94, 195146. [Google Scholar] [CrossRef]
19. Mori-Sánchez, P.; Cohen, A.J.; Yang, W. Many-electron self-interaction error in approximate density functionals. J. Chem. Phys. 2006, 125, 201102. [Google Scholar] [CrossRef] [PubMed]
20. Rydberg, H.; Dion, M.; Jacobson, N.; Schröder, E.; Hyldgaard, P.; Simak, S.I.; Langreth, D.C.; Lundquist, B.I. Van der Waals density functional for layered structures. Phys. Rev. Lett. 2003, 91, 126402. [Google Scholar] [CrossRef] [PubMed]
21. Thiel, S.; Hammerl, G.; Schmehl, A.; Schneider, C.W.; Mannhart, J. Tunable quasi-two-dimensional electron gases in oxide heterostructures. Science 2006, 313, 1942–1945. [Google Scholar] [CrossRef] [PubMed]
22. Liu, Y.; Zhao, T.; Ju, W.; Shi, S. Materials discovery and design using machine learning. J. Materiom. 2017, 3, 159–177. [Google Scholar] [CrossRef]
|
fc7ff54a9de42e1c | Electronic friction is fundamental to understanding surface chemistry dynamics
Anyone who has studied even a little thermodynamics has encountered the word adiabatic very early on. This is because adiabatic processes are extremely useful for understanding the basics of the field. An adiabatic process is any process that occurs without heat (or matter) entering or leaving a system. In general, an adiabatic change involves a fall or rise in the temperature of the system.
But words may be misleading if we switch from the classical, macroscopic, phenomenological realm of thermodynamics to that of quantum mechanics, where adiabatic means something related but quite different. In quantum mechanics, an adiabatic approximation is one in which the time dependence of parameters, such as the internuclear distance between atoms in a molecule, is slow. This slow variation means that the solution of the Schrödinger equation that defines the system at one point in time goes over continuously into the solution at a later time. This kind of approximation was developed by Max Born and Vladimir Fock.
A very common adiabatic approximation used in molecular and condensed matter physics is the Born–Oppenheimer one. In it, the motion of the atomic nuclei is taken to be so much slower than the motion of the electrons that, when calculating the motions of the electrons, the nuclei can be taken to be in fixed positions. This approximation is very successful, but it is known not to be exactly true.
For example, what would happen if we had a gas interacting with a metal surface while a chemical reaction is taking place? We would have ions moving in the metal's electron gas for a while. Under these conditions, as Pedro Echenique and others demonstrated [1], we can expect a friction effect, an electronic friction: a stopping power of an electron gas for slow ions. But this electronic friction is also a source of electronic nonadiabaticity or, from another point of view, a channel for losing energy needed for the main chemical reaction.
The study of these energy-loss channels in surface reactions, especially catalytic ones, is terribly important for technological applications. But, quite paradoxically, to date the most accurate solutions of the full nuclear-electron wave function are restricted to systems of the complexity level of gas-phase H2+. For other systems the approximations are far less rigorous, using a combination of quantum and classical dynamics. The imposed computational burden nevertheless restricts their practical use to simple metals and subpicosecond time scales, to symmetric adsorbate trajectories, or to only qualitative accounts of the metal electronic structure.
An efficient way, in terms of computational demand, of producing calculations with predictive power and material-specific trajectories would be to correct the Born–Oppenheimer approximation with classical molecular dynamics that incorporates the concept of electronic friction. Some recent attempts have been made using yet another approximation, namely that the atoms are independent, so that the computational cost is further reduced. But is this approach successful?
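To see what "molecular dynamics with electronic friction" means in practice, here is a minimal sketch: a one-dimensional Langevin integrator in which the electron gas enters through a friction coefficient η and matching thermal noise (fluctuation–dissipation). The harmonic force and all parameter values are illustrative only, not taken from the paper discussed below.

```python
import numpy as np

def langevin(steps=20000, dt=0.05, m=1.0, eta=0.05, kT=0.025,
             force=lambda x: -x, x0=1.0, v0=0.0, seed=0):
    """1D Langevin dynamics, m dv/dt = F(x) - eta*v + R(t), with a simple
    Euler scheme; the white noise R obeys <R(t)R(t')> = 2*eta*kT*delta(t-t')."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * eta * kT / dt)   # discrete-time noise amplitude
    x, v = x0, v0
    traj = np.empty(steps)
    for i in range(steps):
        a = (force(x) - eta * v + sigma * rng.standard_normal()) / m
        v += a * dt
        x += v * dt
        traj[i] = x
    return traj

traj = langevin()
print("late-time mean position ~ 0:", traj[-5000:].mean())
```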
Now a group of researchers that includes Iñaki Juaristi, from the Materials Physics Center – UPV/EHU and DIPC, performs a substantiated assessment of the quality of this nonadiabatic description. They find 2 that the approximation is qualitatively good, though not perfect, and they propose a way to improve it further.
In the study, accurate experimental reference data are used, focusing primarily on the internal stretch mode of the two systems that have been studied most extensively and conclusively: CO adsorbed on Cu(100) and on Pt(111).
Figure 2. Vibrational lifetimes for CO on (a) Cu(100) and (b) Pt(111). Values as obtained within the independent-atom approximation (IAA) and the atoms-in-molecules (AIM) approach are contrasted to corresponding predicted lifetimes published by Forsblom and Persson (FP) and Krishna and Tully (KT). For comparison, experimental values as obtained from pump-probe spectroscopy by Morin et al. for CO on Cu(100) and Beckerle et al. for CO on Pt(111) are shown as a dotted line and a blue stripe further indicating the reported experimental uncertainty. | Credit: Rittmeyer et al (2015)
Rather than a missing explicit account of the surface band structure, the authors' analysis suggests that missing intramolecular contributions are the reason for the main differences, as one would expect from the independent-atom assumption and its neglect of intramolecular effects. They also find that approximately incorporating such contributions through an atoms-in-molecules (AIM) numerical calculation indeed yields consistent lifetimes for a range of diatomic adsorbate systems.
The presented AIM alternative accounts for energy dissipation approximately through a charge partitioning scheme. As it effectively treats the molecular electrons as part of the metallic substrate, the AIM friction concept is expected to generally overestimate nonadiabatic energy losses and to perform best for chemisorbed adsorbates at close distances to the surface.
The results consolidate the importance of approximations that incorporate electronic friction in the study of the technologically critical catalytic systems.
1. Echenique et al (1981) Density functional calculation of stopping power of an electron gas for slow ions Solid State Communications DOI: 10.1016/0038-1098(81)91173-X
2. Simon P. Rittmeyer, Jörg Meyer, J. Iñaki Juaristi, and Karsten Reuter (2015) Electronic Friction-Based Vibrational Lifetimes of Molecular Adsorbates: Beyond the Independent-Atom Approximation Phys. Rev. Lett. DOI: 10.1103/PhysRevLett.115.046102
|
256782bd18fdccca | From Wikipedia, the free encyclopedia
Atomic force microscopy (AFM) image of a PTCDA molecule, which contains clusters of five carbon rings.[1]
A scanning tunneling microscopy image of pentacene molecules, which consist of linear chains of five carbon rings.[2]
AFM image of 1,5,9-trioxo-13-azatriangulene and its chemical structure.[3]
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds.[4][5][6][7][8] Molecules are distinguished from ions by their lack of electrical charge. However, in quantum physics, organic chemistry, and biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions.
In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules as they are in fact monoatomic molecules.[9]
Molecular science
The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties; in practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in Bose–Einstein condensate.
History and etymology
According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles" or small unit of mass.
• Molecule (1794) – "extremely minute particle", from French molécule (1678), from New Latin molecula, diminutive of Latin moles "mass, barrier". A vague meaning at first; the vogue for the word (used until the late 18th century only in Latin form) can be traced to the philosophy of Descartes.[11][12]
The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties.[13] This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions, but are not made of discrete molecules.
Molecules are held together by either covalent bonding or ionic bonding. Several non-metal elements exist only as molecules in the environment; for example, hydrogen only exists as the hydrogen molecule. A molecule of a compound is made of two or more elements.[14]
A covalent bond forming H2 (right) where two hydrogen atoms share the two electrons
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding.[15]
Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions).[16] This transfer of electrons is termed electrovalence in contrast to covalence; in the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. In simpler words, an ionic bond is the transfer of electrons from a metal to a non-metal for both atoms to obtain a full valence shell.
Molecular size
Most molecules are far too small to be seen with the naked eye, but there are exceptions. DNA, a macromolecule, can reach macroscopic sizes, as can molecules of many polymers. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules.
The smallest molecule is the diatomic hydrogen (H2), with a bond length of 0.74 Å.[17]
Effective molecular radius is the size a molecule displays in solution;[18][19] the table of permselectivity for different substances contains examples.
Molecular formulas
Chemical formula types
The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and plus (+) and minus (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts.
A compound's empirical formula is a very simple type of chemical formula;[20] it is the simplest integer ratio of the chemical elements that constitute it.[21] For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethyl alcohol or ethanol is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely: dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Carbohydrates, for example, also have the same ratio (carbon:hydrogen:oxygen = 1:2:1), and thus the same empirical formula, but different total numbers of atoms in the molecule.
The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules; however, different isomers can have the same atomic composition while being different molecules.
The empirical formula is often the same as the molecular formula, but not always; for example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH.
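As a concrete illustration (a minimal sketch, not part of the original article), reducing a molecular formula to the empirical formula amounts to dividing the atom counts by their greatest common divisor:

```python
from functools import reduce
from math import gcd

def empirical_formula(counts):
    # counts maps element symbol -> number of atoms in the molecular formula;
    # dividing by the GCD of the counts gives the simplest integer ratio.
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

print(empirical_formula({"C": 2, "H": 2}))           # acetylene C2H2 -> {'C': 1, 'H': 1}, i.e. CH
print(empirical_formula({"C": 6, "H": 12, "O": 6}))  # glucose C6H12O6 -> CH2O
```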
The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations.
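The molecular mass then follows from the molecular formula as a weighted sum of atomic masses; a companion sketch, using rounded literature masses in atomic mass units:

```python
# Rounded atomic masses in unified atomic mass units (u).
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}

def molecular_mass(counts):
    # Sum of (number of atoms x atomic mass) over the chemical formula.
    return sum(n * ATOMIC_MASS[element] for element, n in counts.items())

print(round(molecular_mass({"H": 2, "O": 1}), 2))          # water H2O     -> 18.02 u
print(round(molecular_mass({"C": 2, "H": 6, "O": 1}), 2))  # ethanol C2H6O -> 46.07 u
```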
Structural formula
3D (left and center) and 2D (right) representations of the terpenoid molecule atisane
For molecules with a complicated 3-dimensional structure, especially involving atoms bonded to four different substituents, a simple molecular formula or even semi-structural chemical formula may not be enough to completely specify the molecule; in this case, a graphical type of formula called a structural formula may be needed. Structural formulas may in turn be represented with a one-dimensional chemical name, but such chemical nomenclature requires many words and terms which are not part of chemical formulas.
Molecular geometry
Structure and STM image of a "cyanostar" dendrimer molecule.[22]
Molecules have fixed equilibrium geometries (bond lengths and angles) about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities.
Molecular spectroscopy
Hydrogen can be removed from individual H2TPP molecules by applying excess voltage to the tip of a scanning tunneling microscope (STM, a); this removal alters the current-voltage (I-V) curves of TPP molecules, measured using the same STM tip, from diode-like (red curve in b) to resistor-like (green curve). Image (c) shows a row of TPP, H2TPP and TPP molecules. While scanning image (d), excess voltage was applied to H2TPP at the black dot, which instantly removed hydrogen, as shown in the bottom part of (d) and in the rescan image (e). Such manipulations can be used in single-molecule electronics.[23]
Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to Planck's formula). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission.[24] Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high energy X-rays interact with a regular arrangement of molecules (as in a crystal).
Microwave spectroscopy commonly measures changes in the rotation of molecules, and can be used to identify molecules in outer space. Infrared spectroscopy measures changes in vibration of molecules, including stretching, bending or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible or near infrared light, and result in colour. Nuclear magnetic resonance spectroscopy measures the environment of particular nuclei in the molecule, and can be used to characterise the numbers of atoms in different positions in a molecule.
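For a sense of scale, Planck's relation E = hν (equivalently E = hc/λ) converts a probe's wavelength into the energy quantum exchanged. A quick sketch; the 5.8 µm wavelength is an illustrative mid-infrared value chosen here, not a number from the article:

```python
# Planck's relation E = h*nu (equivalently E = h*c/lambda) links a probe's
# wavelength to the energy quantum a molecule can absorb or emit.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy(wavelength_m):
    return h * c / wavelength_m

# An illustrative mid-infrared wavelength, typical of a C=O stretch (~5.8 um):
E = photon_energy(5.8e-6)
print(E, "J")        # ~3.4e-20 J
print(E / eV, "eV")  # ~0.21 eV, the scale of one vibrational quantum
```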
Theoretical aspects
The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecular ion, H2+, and the simplest of all chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry.
When trying to define rigorously whether an arrangement of atoms is sufficiently stable to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state".[4] This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state[25] and is so loosely bound that it is only likely to be observed at very low temperatures.
Whether or not an arrangement of atoms is sufficiently stable to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe.
References
1. ^ Iwata, Kota; Yamazaki, Shiro; Mutombo, Pingo; Hapala, Prokop; Ondráček, Martin; Jelínek, Pavel; Sugimoto, Yoshiaki (2015). "Chemical structure imaging of a single molecule by atomic force microscopy at room temperature". Nature Communications. 6: 7766. Bibcode:2015NatCo...6E7766I. PMC 4518281. PMID 26178193. doi:10.1038/ncomms8766.
2. ^ Dinca, L. E.; De Marchi, F.; MacLeod, J. M.; Lipton-Duffin, J.; Gatti, R.; Ma, D.; Perepichka, D. F.; Rosei, F. (2015). "Pentacene on Ni(111): Room-temperature molecular packing and temperature-activated conversion to graphene". Nanoscale. 7 (7): 3263–9. Bibcode:2015Nanos...7.3263D. PMID 25619890. doi:10.1039/C4NR07057G.
3. ^ Hapala, Prokop; Švec, Martin; Stetsovych, Oleksandr; Van Der Heijden, Nadine J.; Ondráček, Martin; Van Der Lit, Joost; Mutombo, Pingo; Swart, Ingmar; Jelínek, Pavel (2016). "Mapping the electrostatic force field of single molecules from high-resolution scanning probe images". Nature Communications. 7: 11560. Bibcode:2016NatCo...711560H. PMC 4894979. PMID 27230940. doi:10.1038/ncomms11560.
5. ^ Ebbin, Darrell D. (1990). General Chemistry (3rd ed.). Boston: Houghton Mifflin Co. ISBN 0-395-43302-9.
6. ^ Brown, T.L.; Kenneth C. Kemp; Theodore L. Brown; Harold Eugene LeMay; Bruce Edward Bursten (2003). Chemistry – the Central Science (9th ed.). New Jersey: Prentice Hall. ISBN 0-13-066997-0.
8. ^ Zumdahl, Steven S. (1997). Chemistry (4th ed.). Boston: Houghton Mifflin. ISBN 0-669-41794-7.
9. ^ Chandra, Sulekh (2005). Comprehensive Inorganic Chemistry. New Age Publishers. ISBN 81-224-1512-1.
10. ^ "Molecule". Encyclopædia Britannica. 22 January 2016. Retrieved 23 February 2016.
11. ^ Harper, Douglas. "molecule". Online Etymology Dictionary. Retrieved 2016-02-22.
12. ^ "molecule". Merriam-Webster. Retrieved 22 February 2016.
13. ^ Molecule Definition (Frostburg State University)
14. ^ "The Hutchinson unabridged encyclopedia with atlas and weather guide". Oxford, England. Retrieved 28 February 2016.
15. ^ Campbell, Neil A.; Brad Williamson; Robin J. Heyden (2006). Biology: Exploring Life. Boston, Massachusetts: Pearson Prentice Hall. ISBN 0-13-250882-6. Retrieved 2012-02-05.
16. ^ Campbell, Flake C. (2008-01-01). Elements of Metallurgy and Engineering Alloys. ASM International. ISBN 9781615030583.
17. ^ Roger L. DeKock; Harry B. Gray; Harry B. Gray (1989). Chemical structure and bonding. University Science Books. p. 199. ISBN 0-935702-61-X.
18. ^ Chang RL; Deen WM; Robertson CR; Brenner BM (1975). "Permselectivity of the glomerular capillary wall: III. Restricted transport of polyanions". Kidney Int. 8 (4): 212–218. PMID 1202253. doi:10.1038/ki.1975.104.
19. ^ Chang RL; Ueki IF; Troy JL; Deen WM; Robertson CR; Brenner BM (1975). "Permselectivity of the glomerular capillary wall to macromolecules. II. Experimental studies in rats using neutral dextran". Biophys J. 15 (9): 887–906. Bibcode:1975BpJ....15..887C. PMC 1334749. PMID 1182263. doi:10.1016/S0006-3495(75)85863-2.
20. ^ Wink, Donald J.; Fetzer-Gislason, Sharon; McNicholas, Sheila (2003-03-01). The Practice of Chemistry. Macmillan. ISBN 9780716748717.
21. ^ "ChemTeam: Empirical Formula". Retrieved 2017-04-16.
22. ^ Hirsch, Brandon E.; Lee, Semin; Qiao, Bo; Chen, Chun-Hsing; McDonald, Kevin P.; Tait, Steven L.; Flood, Amar H. (2014). "Anion-induced dimerization of 5-fold symmetric cyanostars in 3D crystalline solids and 2D self-assembled crystals". Chemical Communications. 50 (69): 9827–30. PMID 25080328. doi:10.1039/C4CC03725A.
23. ^ Zoldan, V. C.; Faccio, R; Pasa, A. A. (2015). "N and p type character of single molecule diodes". Scientific Reports. 5: 8350. Bibcode:2015NatSR...5E8350Z. PMC 4322354. PMID 25666850. doi:10.1038/srep08350.
25. ^ Anderson JB (May 2004). "Comment on "An exact quantum Monte Carlo calculation of the helium-helium intermolecular potential" [J. Chem. Phys. 115, 4546 (2001)]". J Chem Phys. 120 (20): 9886–7. Bibcode:2004JChPh.120.9886A. PMID 15268005. doi:10.1063/1.1704638.
External links |
4a8e4f878a4d521a | The Search for Planet X : Astronomers get a first answer to what detonates a "supernova"
Posted by eye in the sky on Tue Mar 16, 2010 12:46 am
Q: (L) Okay, you said at that time that a Transdimensional Atomic Remolecularizer
was buried at Oak Island. Is that correct?
A: Yes.
Q: (L) Who buried it there?
A: Learn. You already have tools. We are trying to teach you to use
your most precious commodity.
A: You betcha!
Q: (L) What I read about Oak Island was that there were legends of lights
being seen there prior to 1703.
A: Yes.
Q: (L) Prior to 1703 would put the burial of whatever is there at least prior to that time, correct?
A: Yes.
Q: (L) Were those lights the lights of craft of other beings other than the natives of this planet?
A: Electromagnetic profile.
Q: (L) What was noticed when the kids arrived on the Island was that a limb was sawed off
of a tree over the depression and there were marks showing that rope and pulleys had been utilized.
(T) If something more advanced dug the pit, they wouldn’t have used chain hoists and pulleys.
(L) That is what I am getting at. So, if there was evidence of this kind of stuff on the tree,
it would seem to indicate that somebody had been doing something there who was a little more
human or limited in their technology, is that correct?
A: Yes.
Q: (L) Now, my thought is that, it is beyond human technology to have produced
that pit at that point in history?
A: Beyond known technology.
Q: (L) And yet humans may have been involved in that activity?
A: Bingo. Some humans have always communed with "higher" powers.
We are speaking of conscious communion in this and other instances.
Q: (L) When was the pit dug?
A: 1500s. Nationality is not issue. Access sect information. Now, who claimed communion,
Laura has in memory banks from absorption of mass reading practice.
Q: (F) Was there a sect from that era that claimed communion?
A: Yes.
Q: (L) I think that this may have had something to do with the people that later
became known as the Cajuns, a French religious sect that was living there...
They called it Arcadia.
A: Maybe.
Q: (L) Now, this article says that it would have taken a hundred men working
every day for six months to have built this pit...
A: No.
Q: (L) The article also says that it must have been dug in 1780...
A: No.
Q: (L) When at one point they drilled into the pit, some bits of gold came up and
a piece of parchment and maybe some other odds and ends. What were these?
A: Alchemy is your clue.
Q: (T) The remolecularizer made it.
(L) Why not? If these people were involved in doing this, why did they do it?
A: Instructed to do it.
Q: (L) They were instructed by the higher powers they were in contact with, correct?
A: Yes.
Q: (L) What did they intend to do with it once it was there?
Did someone intend to come back for it at some point in time?
A: No.
Q: (T) Is it buried there in that location for a specific reason?
A: Sure.
Q: (T) Does the location itself have something to do with the purpose of it?
A: Magnetic.
Q: (T) Are there other ones buried on the planet?
A: Yes.
Q: (T) Are they aligned to each other on the planet in some kind of geometric pattern?
A: Maybe.
Q: (T) Do they all work together?
A: Maybe.
Q: (J) Can you tell us where some of the other ones are?
A: Use mind, that is what it is there for.
Q: (T) We are using our minds. And, we are talking to you about this. We are friendly.
A: Shortcut city. It’s not nice to fool Mother Cassiopaea!
Q: [Laughter] (T) Mirth! If we were to follow the coordinates where this thing is buried,
would it lead us to others?
A: Try it and see. When L__ said he wanted to hunt for buried treasure,
do you think he had this in mind? All the clues are there for you to find
if you do your homework!
Q: (L) Okay. I want to get back to the function of this thing. You suggest it is buried
not to be dug up. It is actually buried to stay there? Is that correct?
A: Yes.
Q: (L) Then that explains a lot of things about the way it was buried.
It is claimed that there was found, at a certain level, a rock with carving on it.
It was destroyed through carelessness. I am curious as to what this said.
A: Measure marker.
Q: (J) Could it be possible that this device was somehow related to
the crystal pyramid principle of Atlantis?
A: In a small sense.
Q: (L) Is this device continuously operational?
A: No.
Q: (L) What stimulates it to go into operation? That is, assuming it does.
A: Magnetic anomalies.
Q: (J) Is it affected by earthquakes?
A: Can be.
Q: (L) Are these magnetic anomalies ones that occur naturally on the planet?
A: Both.
Q: (L) So, they can occur naturally on the planet or they can be generated or
stimulated by some other source?
A: Yes.
Q: (J) Is this device a doorway for entry into this dimension?
A: Can be used as such.
Q: (T) Is it a stand-alone machine or is it to be used in conjunction with others?
A: Either.
Q: (L) Next question: In reading about crop circles, I know that we have been told something
about the means by which they are made.
Is it like electromagnetic imprinting, or is it like a whirlwind?
A: Field transfer.
Q: (L) What kind of field?
A: Magnetic.
A: No.
such as a craft of some sort?
A: No.
A: We can give "clue." See Hoagland.
Q: (L) What does Hoagland say?
(T) He says that basically what we see in this density is a 3rd dimension reflection of 4th dimension
are not circular, they are hexagonal. Something like that, anyway.
A: Thoughts.
Q: (L) Who is thinking these thoughts?
A: Yours truly.
A: "Look" is not point. You need visual stimuli in order to remember.
Yours is a physical dependent existence. Your media resists, why? Suggest discussion.
(J) There wasn’t anything in Barnes and Noble either.
Why would the media resist crop circles?
(S) The same reason they resist everything else.
(F) But, they don’t resist everything else as much as crop circles.
(L) Here is something I got of the net recently. [reads]
represent the handiwork of extraterrestrial invaders or crafty tradesmen
generated ball lightening, numerous whirlwinds or some other peculiar
atmospheric phenomena. These scenarios apparently suffered a severe blow late
the last decade in southern England.
But this newspaper orchestrated, widely publicized admission didn’t settle
incorporate a number of ingenious, previously unknown, geometric theorems of
He concluded his letter as follows:
"The media did not give you credit for the unusual cleverness
behind the designs and the patterns."
in the musical scale corresponding to the keys on the piano."
What he discovered were geometric relationships which simply are not taught anymore
claimed that they did it could not possibly have done it.
(F) Well, the thing that is so strange to me is that since 1992 there hasn’t been any reporting
in the American media about this phenomenon at all.
(L to TF) Is there any way you could check that? [Tom is a reporter with a major newspaper.]
(TF) I already have.
(L) You have? What have you found?
(TF) There’s a lot.
(TF) I didn’t notice the dates. I didn’t notice if there was any turned out after 1992...
(F) There’s not...
[Indicates file thickness large to small] But I don’t know what years any of it is.
(TF) I know it hasn’t been in the news. I don’t remember seeing anything in the news
for several years.
(F) It hasn’t been here, but it has been in Britain.
(TF) Right!
there has been a television black-out on it here. The other thing is Linda Howe
then just drop the whole thing? If so, why?
(F) Because it’s too frightening. I remember in 1991 and 1992 this thing was heating up
and heating up.
(TF) That’s true.
"Oh! That’s it! Okay, forget about it." That was so strange because my impression of journalists
surface type explanations that don’t explain anything and which are not adequate, suddenly
Mary what’s-her-name stepped on the pedal. Oh, okay, no problem!"Obviously that didn’t happen!
This just didn’t make logical sense for those of us who had looked at the crop circles,
and even people who don’t follow this type of subject matter closely, who I have talked to,
people who brush off the subject of UFOs, have told me that this explanation just doesn’t add up!
brushed off?!
logistically speaking.
These guys would have to be working non-stop, 24 hours a day, flying all around the globe...
continue to do it each and every summer since that time. Wouldn’t somebody catch them by now?
Here is something that can be photographed.
(F) It doesn’t prove it...
(J) there’s something else going on...
(F) I don’t think it proves it, but it makes it very hard to ignore. As I have stated before,
were the words he used. This is a scientist!
Being very defensive because it stabbed into the heart of his whole life’s work.
(F) Exactly!
(L) It stabs into the heart of materialism.
but if you don’t pay too much attention to it you can brush it off...
They are there. You can see them.
(F) And in just short periods of time! It just doesn’t make sense. Just imagine,
Mr. F, it is your assignment to go out into the wheat fields of England,
in the dark and to make this intricate figure...
(F) Right!
(S) I don’t know if it was Sightings or Encounters, but one time they had a segment
(F) Yes, and it’s happening in Puerto Rico. And, the alleged report on this one was
that Army type vehicles came in and destroyed it so people couldn’t see it.
Which leads me to believe, with my suspicious mind, that somebody doesn’t want
this stuff going on, for whatever reason.
"Well, you’re in charge; what is this? What’s going on?" And, you can’t answer them.
You have lost credibility as the authority.
because they can’t explain it. It offends the scientific community because they can’t explain it.
(L) Yes, the church calls everything they can’t explain "The Work of the Devil."
(T) Which one?
A: Maybe.
A: No.
Q: (L) As you know, I have been studying the Sufi teachings, and I am discovering
so many similarities in these Sufi "unveilings" to what we have been receiving through
this source, that I am really quite amazed, to say the least. So, my question is: could
what we are doing here be considered an ongoing, incremental, "unveiling," as they call it?
A: Yes.
Q: (L) Now, from what I am reading, in the process of unveiling, at certain points,
when the knowledge base has been sufficiently expanded, some sort of inner unveilings
then begin to occur. Is this part of the present process?
A: Maybe.
a significant increase in knowledge, that it is sort of cyclical - I go through a depression
before I can assimilate - and it is like an inner transformation from one level to another.
this process in some way?
A: It is a natural process, let it be.
Q: (L) One of the things that Al-Arabi writes about is the ontological level of being.
Concentric circles, so to speak, of states of being. And, each state merely defines relationships.
At each higher level you are closer to a direct relationship with the core of existence,
and on the outer edges, you are in closer relationship with matter. This accurately explicates
the 7 densities you have described for us. He also talks about the "outraying" and
the "inward moving" toward knowledge. My thought was certain beings,
such as 4th density STS, and other STS beings of 3rd density, who think that they are
creating a situation where they will accrue power to themselves, may, in fact, be part of
the "outraying" or dispersion into matter. Is this a correct perception?
A: Close.
Q: (L) Al-Arabi says, and this echoes what you have said, that you can stay in
the illusion where you are, you can move downward or upward. Is this, in part,
whichever direction you choose, a function of your position on the cycle?
A: It is more complex than that.
Q: (L) Well, I am sure of that. Al-Arabi presents a very complex analysis and
he probably didn’t know it all either... Nevertheless, in many places it is almost
a word-for-word reflection of things that have been given directly to us through this source.
A: Now, learn, read, research all you can about unstable gravity waves.
Q: (L) Okay. Unstable gravity waves. I’ll see what I can find.
Is there something more about this?
A: Meditate too! We mean for you, Laura, to meditate about unstable gravity waves
as part of research.
Q: (L) Okay. Would it be alright to ask a few more questions about the Sufis?
A: Not unless you wish to get off the track.
Q: (L) That would be off the track from the way we are moving at present?
A: Not until you have memorized Sufi teachings to the extent that
you can cross reference with Bible and similar works.
Q: (L) Okay. So, we are onto something with the Sufi teachings. But, we don’t need to
get off the track. I guess that they did with the Koran what some other mystics have done
with the Bible. It is clear that there is something under the surface of it, but it is corrupted
and twisted. And, I was convinced by seeing this underlying pattern that it was possible
to penetrate the veil, and that gave me the impetus to push for a breakthrough.
A: Unstable gravity waves unlock as yet unknown secrets of quantum physics
to make the picture crystal clear
Q: (L) Can we free associate about these gravity waves since no bookstores
are open at this hour? Gravity seems to be a property of matter. Is that correct?
A: And....
Q: (L) And hmmmm....
A: And antimatter!
Q: (L) Is the gravity that is a property of antimatter "antigravity?"
Or, is it just gravity on the other side, so to speak?
A: Binder.
Q: (L) Okay. Gravity is the binder. Is gravity the binder of matter?
A: And... Gravity binds all that is physical with all that is ethereal through
unstable gravity waves!!!
Q: (L) Is antimatter ethereal existence?
A: Pathway to. Doorway to.
Q: (L) Are unstable gravity waves... do unstable gravity waves emanate from 7th density?
A: Throughout.
Q: (L) Do they emanate from any particular density?
A: That is just the point, there is no emanation point!
Q: (L) So, they are a property or attribute of the existence of matter,
and the binder of matter to ethereal ideation?
A: Sort of, but they are a property of anti-matter, too!
Q: (L) So, through unstable gravity waves, you can access other densities?
A: Everything.
Q: (L) Can you generate them mechanically?
A: Generation is really collecting and dispersing.
Q: (L) What is an astronomical twin phenomenon?
A: Many perfectly synchronous meanings. Duplicity of, as in "Alice through the looking glass."
A: Yes, and...
A: Yes, and... Astronomical.
an alternate universe composed of antimatter?
A: Yes, and....
or are manifested in our universe?
A: More like doorway or "conduit."
A: Think of it as the highway.
in order to bring about some sort of transition to a new universe?
A: No. Realm Border is traveling wave.
via the the impetus of the traveling wave, or realm border?
through which space/time can be bent
, or traveled through via this "bending."
IS the bending of space/time? Is that it?
A: Yes.
Q: (L) Unstable gravity waves... antimatter... destabilizing the gravity waves through
when they abduct people?
either 3rd or 4th density.
A: No. That is TransDimensional Atomic Remolecularization.
A: They wouldn’t.
Q: (L) Why?
A: No space; no time.
is possibly where the poor guys of flight 19 are stuck?
A: Yes.
A: Yes. And if you are in a time warp cocoon, you are hyperconscious, i.e.
is connected or closed, as in
"Philadelphia Experiment."
Q: (L) Is this Mars Rock in the news leading up to some definite, overt interaction with aliens?
(T) They told us, we know it, yes!
A: Gradually.
Q: (T) That’s what it’s all about. Now if they want to go to Mars to look for civilizations
and stuff, which they’re going to lead up to, and back to the moon here, and all this,
and they’re going to make Hoagland feel really good, because he’s right!
A: Notice how you heard nothing about the Mars Probes until the rock announcement?
The excavation robot spacecraft. One Probe is already on its way, another to follow.
No further explanation about "loss" of Mars Explorer.
Q: (L) What did happen to the Mars Explorer?
A: Blacked out. You see, ’Too risky.’ And too much too soon,
due to pressure from Hoagland and others.
Q: (T) My own opinion is that they’ve already been there, and they know what’s there.
A: No. Microbes are easier to swallow than humans in togas!
Q: (T) Cleopatra and Antony are not going to go over real big this week!
Especially with the Bible scholars.
(F) And the scientists! OK, you just mentioned that somebody from this planet
already launched a Mars Probe. A new Mars Probe, that no one in public knows about.
Because it’s never been talked about. So, it’s a secret probe. Who does it belong to?
A: Was secret US government.
Q: (J) When did it go up?
A: September of 1995.
Q: (T) Last September, a year ago. So, it’s gone for a year. It takes it a year,
two years to get there? Maybe not that long. So, it’s over half-way there at this point.
A: Yes. Next year.
Q: (T) Next year for the next probe?
A: Yes.
Q: (T) Is this going to be one of those public ones? A publicly announced one?
A: They both are.
Q: (T) What is the purpose of these probes?
A: Excavation to display living organisms.
Q: (T) Display?
(L) Yes, for public consumption. In other words, not only do we have a rock now,
that shows evidence that there was...
(T) Oh, display, as when they find it and dig it up, they’re going to show it on camera!
(L) Yes!
(T) Connie Couric will interview it!
(L) Right!
(F) First they said they found no evidence, then they said it was inconclusive...
Now, who the hell knows what they found! In revealing things, we’ll start with
fossilized life, and then move on...
(L) So, they’re going to display the discovery of living organisms on Mars to take
the next step to acclimate...
A: Yes.
Q: (L) So, in other words, this process is going to be something of an on-going thing,
and that all of these people who are cranking around about, you know, alien landings...
A: No faces, though.
Q: (L) There’s not going to be any ’Faces On Mars?’ They are not going to show us...
A: Won’t be revealed, what do you think happened with Mars Explorer?
Hoagland forced their hand.
Q: (T) What do we think happened to the Mars Explorer? I think they switched channels.
They just moved it from one communication post to another, and it’s doing exactly
what it’s supposed to be doing. And they did it in such a way, that the NASA people
really didn’t know what happened, so that when they were asked, they could say,
’We don’t know what happened to it!’ Because they really don’t know what happened!
(L) When we’re talking about this dealing with these Mars Explorers - is all this stuff,
or most of this stuff, coming from the 4th density manipulations of human minds, rather than...
A: Yes.
Q: (L)... rather than actual, physical entry and doing of deeds? Is that it?
A: Yes.
Q: (T) I have a question. They’re going to display live organisms, like, how did they put that
’Living organisms’? How big are these living organisms going to be? How advanced?
A: Teeny-tiny.
Q: (T) So, we’re still talking about microscopic organisms here?
A: Yes.
Q: (J) So, they won’t wave at us!
A: But these will be alive. Can’t you see the progression here?
"Don’t want to scare Grandma Sally Bible Thumper/Stockmarket Investor!"
Q: (L) All right, let’s get on to our questions here. Let me ask about the tetrahedron.
Terry, you ask it, because you know more about it.
(T) The Tetrahedron, triangle mathematics that Hoagland is working with in conjunction
with the Mars/Cydonia region where he supposedly discovered this...
A: Energy consolidator. EM Wave capturer.
Q: (T) Ok, so it’s an EM wave capturer. Does it also emit EM waves?
A: Close. Channels and enhances, when used properly, and in pristine conditions.
Q: (T) Hoagland is not talking about... whatever he’s talking about, as far as
the mathematics go, of the tetrahedral triangles within the sphere, which I’m assuming
this planet is calling the sacred geometries, but are physics-type things of different densities,
which may not actually be right. OK, this doesn’t apply just to Mars, this is, every sphere
has these same properties...
A: Yes.
Q: (T)... a golf ball, a base ball; I know they’re not perfect spheres, they have dimples;
all the way up to the sun, and so forth and so on, of any size, made out of any material,
as long as it’s a sphere, it will have the same properties.
A: No. Must be magnetized.
Q: (T) OK, it’s a magnetized sphere; something that has a magnetic field around it.
A: Yes.
Q: (L) Is the tetrahedral configuration a property of the magnetism?
A: No.
Q: (T) OK, my question is, the sphere has to be able to generate a magnetic field,
like the earth has a magnetic field, like Mars generates a magnetic field...
A: Or be magnetized by installation of internal magnetic generator.
Q: (L) OK, what’s the purpose of this? What’s the purpose of these tetrahedrons?
What are the...
A: Purpose is not proper term. It is a reflection of universal balance.
Q: (L) OK, well, this guy J__ says that they are designated by different monuments
on the planet’s surface...
A: Nonsense!!! Artificial constructed tetrahedrons are placed on strategic locations
on the planet’s surface in order to utilize magnetic fields properly.
Q: (L) Who places these artificially constructed tetrahedrons at these points?
A: The artificial constructors.
Q: (L) And who are they?
A: Whomever they may be. Nineteen degrees north and south.
Q: (T) Those are the numbers that Hoagland came up with, with his stuff. On most of
the planets, and our sun, we seem to have major events happening, or have happened...
A: Hawaii.
Q: (T) Yes, Hawaii, Puerto Rico... let’s see, 19 degrees north and south, the Philippines,
I think, is somewhere close, on the south side. Major volcanos...
(F) The Philippines is on the north side, that’s not in the Southern Hemisphere...
(T) I’d have to pull out a global map to see what the 19 degrees are.
On Mars, Cydonia resides at approximately 19 degrees, the Giant volcano,
the dead volcano on Mars is approximately 19 degrees, the stuff that they found on Venus,
the major things, are at approximately 19 degrees. The sunspots are approximately 19 degrees,
the red spot on Jupiter...
(L) Do the tetrahedrons spin within the sphere? Do these power points of the tetrahedron spin?
A: Energy fields flow in balance.
Q: (T) Is there... now, am I correct in the fact that there’s a direct relationship here
to the real Hebrew Star of David, to these tetrahedrals?
A: Yes.
Q: (T) Yes. So that that symbol is not a religious symbol, as such, but a very important...
(L)...power symbol.
A: Yes. So is pentagon.
Q: (T) So is the Pentagon? These are part of what humans describe as the sacred geometries.
A: Yes. You as Atlanteans knew this, and lived by it in many ways. For example,
the pyramid recharges by capturing exactly half the energy points, thus allowing
a positive imbalance buildup to be captured, then expended.
Posted by eye in the sky on Tue Mar 16, 2010 11:42 pm
"That's Impossible!" (2009)
by Noel Huntley
from NoelHuntley Website
Von Neumann
(mathematician). It appeared hardly likely though that human technology
and the object cannot be seen.
In 1943 the first successful test of invisibility was achieved, but much to the amazement of the military
and scientists, the ship completely disappeared. When it returned, the military insisted on using
a crew the next time in order to gain information on what happened when it disappeared.
It has been recorded that Tesla objected at this point and either resigned or even attempted
sabotage and was removed from the project. The ship disappeared and returned,
and the horrific effects of this have become fairly well known: insane crew with some of them
embedded in the hull of the ship.
the magnetic base tone of D4, creating a magnetic window.
The Guardian Alliance refer to them as the Futczhi. Zeta-Dracos are also involved as
go-betweens in relations between humans and the Futczhi. They motivated the government to create
the experiment in 1983.
The Zetas had succeeded in creating a wormhole (a vortex interconnecting
dimensions and times)
as part of their plan for expanding their implant grid system
In addition, the Zetas used this dimensional window to bring in a fleet of spacecrafts,
secretly, to position them for directing coded electromagnetic D1 pulses at the Sun.
This can be achieved via astral dimensions within our Earth into the D2 Earth body of
which its centre is connected to the Sun centre through inner-space geometry.
The purpose was to misalign the Sun/Earth energy relationship which they accomplished,
and further to misalign Earth and parallel Earth (Tara) where D1 (Earth) connects to D4 (Tara).
(Earth is D1, D2, D3; Tara is D4, D5, D6.)
This would create a repulsion zone between Earth and Tara and prevent ascension
at the closing of the planetary time period of 26,556 years. We would then have to continue
to reincarnate in repeated cycles.
The result of the pulses directed at the Sun was to reverse the magnetic polarity into electrical
at the D1 level, causing the D1 vortices of Earth and Sun to repel. They also reversed
the D4 frequencies so that Earth would repel Tara - and further this would cause
misalignment in the other dimensions D2, D3, D5, D6.
This violation of the harmonic balance between these planets and Sun caused
increasing Sun flares between 1943 and 1972.
Astronomers became very concerned
about the dangers to Earth. In fact they predicted an explosion on the Sun by
about 1972 which would continue, with the possibility of destroying all life on Earth.
Fortunately the Sirian Council of the Guardian Alliance intervened. From the ETs' point of view
in 1972 there would have been a red pulse released from the Sun, destroying all life on Earth.
The D1 frequency band corresponds to red in the spectrum; hence the name red pulse.
It is an intense expanding wave of ultralow frequency energy.
On August 7, 1972 scientists recorded the most intense flare ever but it was a puzzle as to
why it subsided. The Sirian Council had in fact intervened. The Sun's vortices were
out of balance but the correction was a major task and lengthy. A temporary solution was
to create a frequency fence around Earth to prevent resonance from the red pulse and
subsequent destruction of all life.
This procedure is described in the article on the so-called
Although the Zetas' plan of disrupting the alignment of the Sun, Earth and Tara, and preventing
ascension, was thwarted, they had nevertheless succeeded in creating their wormhole in 1943,
a key item for a series of three such endeavors;
the other two at magnetic peaks in 1983 and 2003. The
Montauk books detail the events of 1983
in which the secret government were again induced and deceived into creating a time machine
linking 1983 to 1943 expanding and strengthening the 1943 wormhole.
The third experiment must be conducted in 2003, the last magnetic peak before 2012
and the ascension period.
The government must again be deceived into performing
another experiment which would reinforce the Zetas' implant network
sufficiently to form an effective frequency fence for the whole population,
preventing the major aspects of the ascension from occurring.
The 1983 wormhole widened the rip of 1943 and linked the time periods successfully to their
Dracos-Zeta D4 time period - their base of operations. Note that these Zeta plans were
a hindrance to the competing alien faction
, the
Anunnaki, in their One World Order agenda
and they attempted to block the plan. The expanding wormhole network strengthened its
connection to the Phantom Matrix (also through their Falcon wormhole), and this so-called
Montauk vortex grid system can now radiate psychotronic mind control directives to the population.
The ET information in this article came from the Guardian Alliance material - A. Deane's Voyagers books
Mystery of 11:11
The Number 11
Some interesting observations
On 11 August 1999 at 11:11 am there was a total solar eclipse.
On 21 December 2012 at 11:11 am the Mayan calendar ends.
In the astrological sense with the precision of equinoxes & the 26,000 years cycle of Earth
wobbling through the 12 signs of zodiac we are moving in to the new age of Aquarius and
Aquarius is the 11th astrological sign.
Returning to pyramid numerology and the significance of the number 11:
1234321 = 1111 x 1111
121 countries = 11 x 11
11 fatalities
In binary, 11 stands for 3, which is the trinity. 11 x 3 = 33. 33 is the number of the 33rd degree Mason.
The first plane that hit the world trade centre was flight 11.
Total number of crew on flight 11 was 11.
New York State is the 11th state of the US constitution.
September 11, 2001 is 11 years from 2012.
The world trade centre commenced building in 1966 and finished in 1977. It took 11 years to build.
On September 11, 1990 at 9:09 pm (11 years prior to September 11, 2001)
Bush Sr. made the very first speech entitled “towards the New World Order” at the UN.
Going back 60 years in time, on 11 September 1941 soil was broken to lay
the foundation of Pentagon.
The American NASA project to the moon was Apollo 11.
On the 11th hour of the 11th day of the 11th month, Remembrance Day is celebrated in Britain.
The word crown was derived from Anglo French word caroon, which is derived from
the Latin caroona. The year the word crown was first established was 1111 AD.
The 1972 Munich Olympics provided the first world stage for global terrorism and
the inclusion of the significant number '11' was anything but a coincidence.
11 Israelis were killed by Black September (the 9th month 9/11), a group with ties to
Yasser Arafat’s Fatah organization.
The Olympics were host to 121 countries. 121 divided by 11 gives 11.
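For what it is worth, the bare arithmetic in the list above does check out, whatever one makes of its interpretation; a trivial verification sketch:

```python
# Verify the purely arithmetic identities cited in the 11:11 list above.
assert 1111 * 1111 == 1234321  # the palindromic square
assert 11 * 11 == 121          # 121 countries; 121 / 11 = 11
assert int("11", 2) == 3       # binary 11 is decimal 3, "the trinity"
assert 11 * 3 == 33            # the 33rd-degree Mason number
print("all identities hold")
```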
Zeta Reticuli Aliens/ Bob Lazar 1/4
Zeta Reticuli Aliens/ Bob Lazar 2/4
Zeta Reticuli Aliens/ Bob Lazar 3/4
Zeta Reticuli Aliens/ Bob Lazar 4/4
In the same blue-folder briefings which accurately represented the fields of study Lazar
and the others at S4 were involved with, were other briefings that involved the beings,
their motives and the historical involvement with this planet. A few of the briefings
dealt with claims the aliens made regarding their involvement with us. Lazar emphasized
that these were "simply words on paper" and even if truly documented,
they could have been lies on the part of the Reticulans.
With these disclaimers in mind, following is a list of those statements.
The Reticulans claimed to have genetically "externally corrected" our evolution up to
65 times over the last ten thousand years. Divided evenly, that would be one correction
every 150 years.
Humans were referred to as "containers". Unfortunately the aliens viewed us
simply as containers of genetic material. Literally, Genetic Cultures.
There was an uncomfortable amount of information on recombinant DNA methods,
and viral weaponry. It is speculated that viruses were used to genetically redirect
our evolution since viruses are the only organisms that could attach to the human
and impart a new genetic code.
According to the aliens, our religions were given to us, as they put it,
"to prevent the 'containers' from destroying themselves". There were various
references to religious belief systems that currently exist today.
The Reticulans can exert a form of mind control on humans. This form of control is
best started when the human is quiet and relaxed. Sleeping is preferable.
Stimulated states of mind render this form of mind control ineffective.
UFO base
The US government long denied the very existence of Area 51. What were they trying to conceal, or is it...
If you asked which area of the United States is the most mysterious, the answer would surely be Area 51.
UFOs (Unidentified Flying Objects) are seen flying over the area so often that many people suspect Area 51 must be a base of some kind.
Every working morning, at least 500 people pass through a boarding gate at McCarran Airport in Las Vegas. The owner of this restricted section is the company EG&G (Edgerton, Germeshausen, and Grier, Inc.).
These people must give the password "JANET" followed by a three-digit personal number before being allowed through to board a Boeing 737 that carries no markings identifying whose aircraft it is.
Janet (airline)
EG&G (Janet Airlines) - Details and Fleet History
Airline Full Name: EG&G Special Projects, Inc
Country: United States
Airline Founded: 1972
Fleet Size: 7 Aircraft
Average Fleet Age: 16.1 Years
This airline departs every half hour, with its destination at Groom Lake.
Area 51, also known by the name Groom Lake, lies about 90 miles north of Las Vegas.
In fact, Area 51 was once the site of a US military base, built in 1955 for the purpose of testing the U2 spy plane.
Since then, Area 51 has been used to test spy planes such as the Blackbird (SR71), the F117 Stealth Fighter and the B2 Stealth Bomber, and has also served as the research site of the secret Aurora Project.
These combat aircraft have their performance tested in the Groom Lake area. When they test the aircraft...
History of Area 51
In March 1955, Kelly Johnson, the designer of the U2 spy plane, was commissioned by the CIA to design the U2; he was also assigned to find a location where it could be tested.
Kelly sent Tony Levier, the pilot who would flight-test the U2, together with Dorsey Kammerer, to survey deserted stretches of desert in southern California, Nevada and Arizona. Two weeks later Tony came back with his report. Kelly compared the three candidate sites and decided on the area around Groom Lake, in Nevada.
Groom Lake has gone by many other names since the base was built. Kelly called it Paradise Ranch, but after the U2 spy plane tests of July 1955 it came to be called simply The Ranch. In fact, this (secret) base's official name was Watertown Strip, after Allen Dulles, the CIA director of the day.
Origin of the name Area 51
In June 1958, the United States Atomic Energy Commission (AEC) moved into the Groom Lake area jointly with the US military to conduct certain secret experiments. They called the facility the Nevada Test Site. The Commission divided the land into sections and numbered each one; the section containing the base received the number 51.
People still referred to Groom Lake, but they called it simply Area 51, following the Atomic Energy Commission's usage, even though the AEC's secret experiments had been finished since 1970.
In 1970 the US Air Force took over the area permanently, using it to test new generations of combat aircraft, as well as the MiG 21 and other modern Russian weapons that the US had captured in 1967.
In 1975, Area 51 was designated one of the aerial combat simulation zones under the code name RED FLAG exercise. Area 51 then gained the short new name "Red Square", though its semi-official name was "Dreamland". During the 1970s it also hosted space-related projects and the testing of the most advanced aircraft of the day, "Tacit Blue".
The Groom Lake base was expanded further during the 1980s. Additional runways were built alongside the many already there; communications equipment, radar and satellite dishes were installed; and many new buildings and warehouses were put up at Groom Lake. It is believed to have served as the headquarters of the military flight test centre known as Detachment 3.
The terrain around Groom Lake is mountainous, so in 1984 the US military extended the restricted zone in the hope of preventing anyone from seeing into the base. Two vantage points remained, however, about 12 miles south of Groom Lake: White Side Peak and Freedom Ridge. To stop people using these two spots as lookouts, in 1995 the military declared them restricted zones as well.
...because it can tell whether an intruder is human or animal. This anonymous patrol unit, the "Camo dudes", is supported from the air by Sikorsky HH-60G Pave Hawk helicopters.
The secret projects that used Area 51 as a test station gradually wound down: testing of the Tacit Blue spy plane was completed in 1985, the Advanced Cruise Missile was cancelled in 1992, and the Stand-off Attack Missile was cancelled in 1994.
Secrets of Area 51
It is possible that the Air Force still carries out other undisclosed missions there. In 1989, a Las Vegas television station broadcast an interview with Robert Scott Lazar, who claimed to have worked in Area 51. Robert said he had been assigned to study the engineering of extraterrestrial spacecraft. Within Area 51, nine disc-like, round-bodied spacecraft flew in and out of a restricted zone called S4, near Papoose Lake, about 10 miles southwest of Groom Lake.
Robert's story was very widely debated. It made the scattered accounts of mysterious flying objects that people had puzzled over begin to fit together. The disc-shaped craft could well have been tests of an anti-gravity system; astonishing technology of that kind would certainly have to be kept top secret. There were also said to be tests of a spy plane five times faster than sound, using new propulsion such as the Pulse Detonation Wave Engine, and tests of aircraft many times the speed of sound, the so-called High-Mach Vehicles, built as a hybrid of the A12 and D21 aircraft and known as the Super Valkarie, which many of the people who have gone snooping around Area 51...
จากคำกล่าวอ้างของโรเบิร์ท S4 เป็นสถานที่ใช้สำหรับศึกษา วิจัยวัตถุบินลึกลับภายใต้ชื่อ
โครงการ มูนดัสท์ (Moondust) บรรดาสิ่งก่อสร้างทั้งหลายถูกอำพรางอยู่ใต้พิ้นทราย
โรเบิร์ททำงานในห้องทดลองร่วมกับนักวิทยาศาสตร์อีกคนหนึ่งชื่อ แบร์รี่ คาสติลลิโอ (Barry Castillio)
นักวิจัยแต่ละกลุ่มจะถูกแยกทำงานในส่วนต่างๆ พวกเขาถูกจำกัดให้มีเพื่อนร่วมงานเพียงแค่ไม่กี่คน
แบร์รี่ เป็นเพื่อนร่วมงานเพียงคนเดียว ที่ช่วยโรเบิร์ท ศึกษาค้นคว้าเรื่องการขับเคลื่อนของยานอวกาศ
On Robert's first day at S4 he was taken to the infirmary for a skin test: several substances were dabbed at points along his arm, and the next day staff checked whether his skin had reacted. He was also ordered to drink a substance said to raise his body's immunity, protecting him from anything foreign he might pick up from handling material of extraterrestrial origin. The liquid smelled like pine, and that night, after drinking it, Robert got cramps in his lower abdomen, no doubt a side effect of the immune-boosting drink. He was later introduced to someone called Rene, though he never learned who Rene was or what Rene did at S4; only 22 staff worked in the S4 section.
Robert's supervisor was named Dennis Mariani. He had first met Dennis while interviewing for a job at EG&G, whose office was then at McCarran Airport in Las Vegas but has since moved to Nellis Air Force Base.
In the early days, staff took Robert to a small room holding only a desk, a chair and more than 100 document folders, all filled with information about aliens and alien technology. He spent half an hour a day studying them. The folders seemed to serve as a briefing for the scientists arriving at S4, making clear that their assigned work involved extraterrestrial life. Robert watched a test flight of a strangely shaped craft, and was even more shocked by what he read in the reports. The project he worked on was a sub-project of a larger program; he was responsible for propulsion research under the name Project Galileo.
In Robert's case, his work required knowledge from other disciplines, so he was granted partial access to other projects. Project Sidekick was one of the two he was allowed to learn about: research into a beam weapon to be mounted on combat aircraft, drawing on knowledge of gravity and of focusing light into a beam. Project Looking Glass was a study of the physics involved in creating simulated gravity. The experiments in Project Galileo were satisfyingly successful; Robert saw reports and evidence attesting to their validity, which suggested that the other projects at S4 were succeeding as well, though Robert declined to claim any credit for that success.
Whatever the experiments at Area 51 may be, plenty of curious people have tried to sneak close enough to photograph them, and most of those photographs serve as solid supporting evidence.
All that is necessary for evil to triumph is for good men to do nothing!!
Silence means APPROVAL!!
eye in the sky
Posted by hacksecret on Wed Mar 17, 2010 1:23 pm
"THEY are the gatekeepers.THEY are holding all the keys, THEY are guarding all the doors,"
Morpheus, in "The Matrix"
"The Cold War ethics is over"
(Al Gore, US Vice-President and candidate for the 2000 elections) (May, 2000)
1 - Introduction
2 - The Formation Of The Prison
3 - Analyzing And Listing The Keywords
4 - Deconstructing Their Speculation - Part I
5 - Quick Reminder - They Will Label This Page For Your Mind
6 - Deconstructing The Keywords
7 - Constructing Your Anti-Giving Up Safeguard
8 - Deconstructing Their Speculation - Part II:
8-A) There Is No Game
8-B) The Praises Are False
8-E) There Is No Gift To Receive - Only Tools To Work
The Keywords Exist Only In The Stage-World
"Delirium Trumans" - The Trumanization Of The World Around You
9 - Deconstructing Their "News"
10 - Quick Reminder - Practical Tips
11 - Quick Reminder - Customizing The Exercises For Your Needs
12 - Deconstructing Their Sick Key Ideas And Its Derived Speculations:
12-A) There Is No Payment - Deconstructing Their "Auric Sacrifice Doctrine"
12-C) Your Beloved Wife Is Not Your Mother (Or Your Beloved Husband Is Not Your Father)
Kundalini Is No Fire
There Is No "Going Back Home":
1. General Concepts
'Going Back Home' Versus Vampirism
Erasing Your Personal History
Your Beloved One As A Dentist - Generic Alleged Impediments
The Three Types Of Vampirism And The 'Going Back Home' Inducement:
5.I - The Vampirism Rooted In The Opposite Sex In General
5.II - The Teen-Rooted Vampirism (Especially Male Vampires Towards Teen Girls)
7. Accustoming Your Brain With The "Going Back Home" Premise
The Psychologist Approach - "Debunking" True Love In The World Of Thought Control
9. Complementary Optional Procedures - Reinforcing Your Position As The Owner Of Your Destiny
10. Creating Your Reality - Belief Generates Experience
11. Emptying Yourself - And Understanding The Foreign Lizard's Heart
12-F) Dismantling Distorted Nomenclature Over Some Of The Main Keywords
12-G) Going Over "Vampirism"
12-H) Comparisons - Looking Through The "Looking-Glass"
12-J) Reflected Thoughts, "Looking-Glass" And Subliminal Prejudice
Deconstructing Idolatry In The Stage-World
Alleged Character Restrictions To Form A Couple
13 - Quick Reminder - Using The Right Label ... Or Not
14 - Deconstructing Their Speculation - Part III:
14-A) The Reality Fishing Techniques
The Reality Cloning Techniques
The Behavior Inducement Technique
14-E) Some Of Their Rules Can Be Bent, Others Can Be Broken
The Dispersal Technique - Undermining Your Visualization
The Reductionist "Corral" Technique - Limiting Your Reality In Space Or Time
The Pretended Seriousness Technique - Manipulating A Task They Charge You In Your Stage World
The Invented Little Stories - Reinforcing Their Description Of Reality
Valuing Their Performance In The Stage World
The "Mathematized Feeling" Technique - Trying To Inculcate A New "Love" In Your Mind
15 - Quick Reminder - Suggested Archetypes For Generic Uses
16 - Deconstructing Their Speculation - Part IV:
16-A) Denying The Truth - The Pretended Naturalness Technique
16-B) Denying The Truth - The "Hurricane Is Not Over" Technique
(Denying The Passage Of The Hurricane Through Your City Years Ago)
16-C) Denying The Truth - The "Hurricane Has Never Existed" Technique - Trumanizing Your Past
16-E) Question Reality - Noticing Little Incongruities In The Very Structure Of The Stage World
17 - Deconstructing Your Nightly Dreams
18 - Deconstructing Their Speculation - Part V:
18-B) The Horse Movements
19 - Plan B - Using Antonyms
20 - Additional Information
The Allegory of The Alien
(If you're a neutral person or a character of Our Side, the link above is for you)
This is an illustrated allegory in sixteen chapters about the alien intervention on Earth; the aliens masquerading as humans are euphemistically called "the Confederates of Nirvana" and "the Consortium of Animals from Dragonia".
the surface of Earth.
an alien crystal or an alien implant inside your head.
Brief references to specific shows or movies like "V", "Star Trek - The Next Generation",
"The day the Earth stood still" and "Wag the dog" are merely illustrative...
Additional Information
• Alien presence on Earth - The Sun Microsystems Ad
• Opening the "Iron Curtain" that separates realities between the Stage and the Backstage
• Plato's "Allegory of The Cave"
(from Plato's "Republic", Book VII, 514a-c to 521a-e)
• Pre-Hurricane History - Gallery of Pictures of Neutral People
• Reinforcing your position as the owner of your destiny - Complementary optional procedures
• Security protocols - Suggested guidelines
• The Victor Tausk's "Influencing Machine" used by aliens on Earth
• What is the Matrix? - The spiritual Matrix
• Your beloved one as a dentist
Posted by hacksecret on Fri Mar 19, 2010 11:39 pm
by Niara Terela Isley
Denver Extraterrestrial Contact Examiner
August 10, 2009
from Examiner Website
Step 1: Abolish the National Security Act of 1947
Signed into law by President Harry S. Truman in the same month and year as the July 1947 UFO crash at Roswell, this legislation was established, ostensibly in part, to keep the public from knowing that the UFO/ET phenomenon was real and to allow that technology to be studied, researched and perhaps back-engineered. It allowed a great deal to be hidden from the public “FOR REASONS OF NATIONAL SECURITY”.
President Barack Obama: A possible disclosure administration?
The problem with the National Security Act of 1947 is that it quickly became carte blanche for ambitious people desiring personal power and wealth, letting them do a broad range of things behind a legal and nearly impenetrable curtain of secrecy that could not be touched or investigated, due to said “reasons of national security”.
This gave unprecedented power to people who, quickly corrupted by this total shield from public view, began abusing that power and using that curtain of secrecy to hide all manner of unsavory, even heinous, activities: the importation of German scientists after World War II through Project Paperclip, leading to mind control experimentation and abuses, and broad monitoring of the general population from childhood on, through standardized testing in schools, looking for individuals who could be used or exploited to serve the personal agendas of those protected behind National Security Act of 1947 secrecy.
The National Security Act of 1947 came to be less and less about “national security” and
far more serving the personal agendas of those individuals working in powerful positions
behind this legal wall.
From the now well-known experiments with LSD to create a super soldier who would follow any orders without question, to the mind control experimentation finding its way more and more into the mainstream via whistleblowers and select documentaries, including one that aired on the History Channel, information is leaking out through cracks in that wall.
Add to this the reality of extraterrestrial technology kept secret and sequestered in military and corporate black projects, whose release and implementation could perhaps have averted or mitigated the environmental crises now looming over the entire world, and we have a whole chain of events, begun with this 1947 legislation, that has put the interests of a few ahead of the well-being of the nation's and the world's populations.
In a country with a carefully rendered Constitution and Bill of Rights set in place to prevent undue gatherings of power in the hands of any one individual or group, there are excellent grounds to abolish the National Security Act of 1947 as unconstitutional – and as a stain on the honor of a country that has called itself “the land of the free, and the home of the brave.”
Step 2: Countering resistance
If President Obama were to encounter resistance or stonewalling from the military-industrial complex,
at that point he could call in the 400+ Disclosure Project Witnesses (video) in a nationally televised
and webcast disclosure conference, including representatives from countries around the world
who have been releasing their previously classified UFO files.
A variety of other former government insiders and whistle blowers in other areas could likely
be found and brought forward as well. This would take the information to the people,
where it now belongs, and perhaps always should have belonged.
Step 3: Amnesty in exchange for helping implement environmentally-healthy technologies
An offer of amnesty could be extended to all who are willing to come forward with
what they know about extraterrestrial or other suppressed technologies,
how they operate and how they can be implemented in an energy-conversion
transition to decisively end all use of oil and coal-based technologies
as soon as possible.
Serious crimes, such as murder and mind control abuse committed under
National Security Act of 1947 secrecy would need to be reviewed for amnesty on
a case by case basis. Individuals committing serious crimes and identified with
a proclivity to try to continue such abuses under some other guise of secrecy
need to be incarcerated for public safety.
Amnesty could not be granted in such cases.
Step 4: Exercise of “Eminent Domain” and transparency of development and
implementation at every level and stage
When the various technologies are revealed, exercise the government's prerogative of
“eminent domain” to bring such information and technology within guidelines and facilities
where it can be fully developed for the global public good. As was not done in the past,
enact full transparency of all aspects of this energy transition.
Put an end to “need to know” compartmentalization and allow all members working on
the project to work together synergistically and in concert so that the work can be done
in the fullest and most efficient way possible, so that the whole can be viewed by all
and understanding and insight applied to the project as a whole.
To mitigate any losses or perceived losses on the part of corporate interests, offer amnesty by dropping any criminal charges against them, including charges of knowing and wanton environmental destruction for personal gain, at the cost of millions of lives to date and potentially billions more deaths due to environmental catastrophes; said corporate owners, board members and CEOs knew full well the consequences of their actions while also suppressing and sequestering technologies that could possibly have averted such tragedies.
Step 5: End all use of nuclear weapons and energy plants
Stop all engagement with nuclear armaments and energy plants and open dialogues with
all countries with rudimentary or full nuclear capability to disarm and close down plants
with offers of new energy alternatives.
Step 6: Preparing the public for contact
All information, official and unofficial, regarding contact with extraterrestrial beings of any kind
could be addressed publicly in a subsequent press conference to the first one, helping to
prepare the public for the actuality of extraterrestrial reality and contact. Community groups
and liaisons could be created to help people discuss and adjust to the prospect of contact.
There have been perhaps hundreds of films made over all the years of movie-making
putting out myriad extraterrestrial scenarios, from friendly contact to invasions of
the worst kind. New documentaries made, sorting available truth from fiction
could be made and circulated to help people adjust to the reality of
extraterrestrials and reduce fear.
This preparation for contact would help the millions of people around the world who have
had abductions or contacts who have had to keep such encounters secret due to official denial
and ridicule and allow them to get real support and begin to understand their experiences
in a larger context.
Depending on their level of integration of their experiences, some of them might be able
to step into information disseminating roles with the public and be interviewed
for “contact” documentaries.
They could share their perceptions and insights about these beings from other worlds
from their own contact experiences.
Step 7: Extend an invitation for full, open contact with extraterrestrials
The cessation of nuclear activities could send a clear signal to extraterrestrial intelligences
engaging humanity at this time that we are becoming ready for full open contact. Through
a publicly televised press conference, invite contact and take a leadership role in initiating
a forum for developing friendly and beneficial relations with our extraterrestrial neighbors.
Since there are some extraterrestrials who look just like us,
there are sure to be some listening.
Posted by hacksecret on Sat Mar 20, 2010 12:45 am
by Ki’ Lia
February 2010
from 2012GoddessCosmos Website
My recruitment into a classified project involving Time & Quantum Access Technologies
and Gaia’s urgent call for Disclosure
This is a section for beginners to global conspiracy knowledge.
For those who are already informed, you can skip to the core Mars Colony
recruitment story here below.
Like many who are awakening in these End Times or Shift of Ages,
I know I am a spiritual being having a temporary human experience.
The challenges are rough, yet what’s on the other side is beyond imaginable.
Always unfathomable to me is how very little our current reality is based on true wisdom,
compassion and the laws of nature. How much longer are we going to endure
the lies of society… from media, politics, religion, money, science... to the lies within
all of us as human individuals? How much longer are we going to let our planet devolve?
Be overrun by the elite game players of war, famine, disease and negativity?
How much longer are truth-tellers, dissidents and revolutionaries going to be ridiculed,
shut down, silenced or even killed?
What if after dozens to hundreds to thousands of years of the cover-up of our real star
origins and neighbors, the truth comes out and the world finally knows that
we are not alone in the universe?
How could the truth of this magnitude be kept hidden for so long?
How and when could mass Disclosure happen? How will the world change?
How can we prepare?
Disclosure has been a major 'meme' accelerating in the alternative media.
It is the tangible form of the ‘Ascension’ or consciousness movement –
this expansion beyond our physical form and the returning to our true eternal selves
as originating from the stars.
Disclosure involves a highly complex set of topics: exopolitics, UFO sightings & crashes,
ET contact, free energy & quantum travels, alternative medicine, metaphysics, psychic abilities,
secret societies & Global Elite NWO agendas, Mayan Calendar & 2012 prophecies,
Atlantis & ancient civilizations, return of Divine Feminine & Sacred Union, common origin
myths of world religions, geomagnetic & solar system changes, sacred geometry,
the endless rabbit hole, etc...
I used to read books on conspiracies (like David Icke’s Biggest Secret),
which were very fascinating and resonated with me about what was going on with this crazy
planet in all its multi-layers. However, I didn’t fully grasp how deep the rabbit hole was
or how I could ever possibly be related to a global conspiracy until I encountered
this Mars colony project that I will be describing later here.
Despite being naive to how evil deliberately deceives and manipulates, I always was
trying to unlock what was happening. I knew I was being led into something full of
treachery, yet I remained conscious and alert.
For those just entering these topics, be excited, discerning and forewarned.
For quite some time, these things have been ridiculed and suppressed as the fringe,
loony and out-there by fierce, unethical government-sanctioned operations
(i.e., 1950's CIA Robertson Panel). A disinformation campaign is a cleverly packaged
message alternating between phrases of truths and lies, which leaves the receiver
confused, disoriented or indifferent.
Many times however, the ET/UFO disinformation operation is so embedded into
a culture (in mainstream as well as alternative media) that it becomes part of the esoteric
truth canon, and it’s hard to tell fact from fiction, except for the genuine and
investigative seer. These fields of study have been corrupted and distorted so much that
the gems of absolute truth are few and far between.
In a higher perspective, if these mass institutional lies are still happening, then we as
a collective society are still playing the patriarchal, divisive game of abuser and victim.
External forces are only responding to a polarity within us.
When we activate the forgotten Goddess out of her long silence or oppression,
then her harmony will balance the etheric realms and in turn a physical tipping point
or critical mass that will trigger Disclosure – this revealing of the shadows of the planet
and catapulting us into love.
How can we navigate and sift through this rapid influx of information coming from
all over the place, especially this World Wide Web? How do we reconcile all
the various prophecies about 2012 and beyond, and create and prepare for
our collective future?
Famous prophets throughout the ages have warned us about being on a timeline of
catastrophe. Famous prophets have given us grand visions of a timeline of enlightenment
and transcendence. Many prophets, not so famous, have been also apathetic and
cynical about any change happening.
Anybody who claims to know all the answers is false.
Anybody who claims they are the only one able to save humanity is false.
It’s a collective effort, and no one in a human body knows everything.
Trusted leaders will make their mark, but each person has her/his unique
and precious role.
Some of the popular prophets can be seen as time travelers who are accessing
different timelines when the things they are predicting for this current timeline
have already occurred. When their prediction is made here in this timeline,
the vibration of the words and message cause a ripple in the matrix or
‘Butterfly Effect’ that inevitably changes things.
Scientists who make theories and predictions based purely on 3D empirical evidence
without factoring in their interfacing consciousness, or the ‘observer effect’ in
quantum physics, are missing a whole lot of truth. While the hard data might be accurate,
a forecast about the future would be faulty without the consideration of the power of
people’s consciousness and intent to alter events.
The awakened collective and Mother Earth Gaia are orchestrating this shift.
She is the voice of true change, compassion, divine justice and forgiveness.
When we speak with her guidance and voice, we will dissolve the masculine dominance
that has been happening in the past few thousand years and create a sacred marriage in
the universal matrix of reality.
Out of this union, the divine child is born. We all will be reborn.
This is a glimpse into a personal experience of how a few years ago I, Ki’ Lia, got recruited into
an extremely dangerous mission to Mars, and my strange and profound encounter with
secret society agents and their use of time and quantum access technologies
to manipulate our collective evolution.
This is a revelation about how government agencies currently have been establishing Mars
as a survival colony
and how the widely-prophesied date of 2012 has been seen as diverging
into two major timelines, either catastrophe or transcendence.
While those not familiar with any of these concepts might find my story unbelievable
and shocking, my story is organically emerging and being normalized in an exopolitical context
as established by many courageous new and longtime whistleblowers and researchers of
classified trillion-dollar Black Budget projects. I’m not asking anyone to believe in any particular
ideology or philosophy, rather I’m sharing my genuine experiences and understandings in hopes
that others will explore and pursue the truths apart from the lies as fed to us from all angles in society.
I was recruited as an interdisciplinary designer and futurist who has been consulting
and collaborating with many renowned new paradigm leaders.
Primarily though, I have been a virtuoso artist in music, design, dance and writing,
who has exhibited numerous praised works. I am also a multidimensional guide
who has given hundreds of transformative readings and healings. I am also a human with
the spectrum of emotions and flaws, who is vulnerable to this negative world.
With an arts degree from Stanford University, I have been developing sacred song-dance rituals
and a holographic theater model in relative seclusion, but am now publicly emerging to stand with
other truth-tellers.
I am critically urging all world leaders and governments to disclose their engagement with UFOs
and extraterrestrial civilizations and their core cover-up of life on Mars. I believe this information is
unethically withheld beyond ordinary reasons of security. And time is running out for humanity
to awaken from our slumber.
We all have a universal right to know and to live prosperously, so I am asking all citizens to be
educated about our stellar history and destiny, as hidden in plain sight throughout all of society.
With the keys to unlock our highest knowledge and potential, I believe we can build together
an unprecedented, enlightened civilization.
I grew up as precocious and ‘psychic’ but didn’t have the metaphysical language to express
my worldview until my late teens when I was exposed to more of the radical culture of
my birthplace San Francisco. I knew I was very different, like the storied alien
who got dropped and abandoned on Earth.
Unusually fascinated with the intergalactic storylines of Star Wars and Star Trek, I had little idea
though that they contained variations of the truth of the multidimensional tales of our universe.
In the rapidly rising UFO/ET Disclosure movement, respected whistleblowers consistently state
that the genre of science fiction was created to hide classified secrets and perpetuate disinformation
about governments’ engagement with alien species and worlds (see interview with military
whistleblower Bob Dean). I had great visions of the future and/or memories of the ancient past of
advanced Space Age or Golden Age civilizations, and was driven to build holographic temple-theaters
for music, dance, art, storytelling and journeying into inner and outer cosmos. However, my throat felt
suffocated and my body was paralyzed, and I felt that nothing I could do or say could ever match
the perfection of Source or a Force of Nature that I knew so well deep within.
I could not understand how people were completely out of touch with their spirit and were not
making every moment dedicated to serving a Utopic vision of the future. I gained my degree in
the arts from Stanford University, as well as several certifications from leading-edge schools.
When I was in college, my psychic sensitivity was immense, and my womb and entire body felt
bombarded constantly by global doomsday thoughts.
I could barely watch the news, as the shock and brutality of war and poverty just
devastated my entire system.
Later, through much corroboration, I would realize that I probably sensitively was picking up
and being directly hit by heavy negative frequencies transmitted from underground military labs
as revealed by a scientific insider (Leuren Moret, see below video), which would relate to
an agenda of a scientist I would later encounter in my Mars recruitment.
To outsiders, I appeared to have high spiritual and physical health, while reputable holistic
physicians did confirm my longtime, hidden symptoms of PTSD and electromagnetic targeting;
and trusted psychics would mention my being under heavy etheric attack.
What I knew then was that what I was experiencing was the Earth’s wounds as my own,
and I had to go through the archetypal wheel of human experiences to grasp and transform
that pain. I would accelerate through a decade long series of nonstop tests and trials, including
mediating highly unusual tragedies and experiencing physical illnesses or emotional crises
as premonitions before major global events such as 9/11, Iraq War and natural disasters.
Receiving numerous supernatural signs and omens, such as angelic light, flashing stars and
UFO sightings, were critical in helping me to understand what was happening and the meaning
of my path. I acquired great strength in recovering quickly and finding the next, greater level
of my mission.
Frantically studying and researching for answers and cures, I was strategizing all sorts of
multidisciplinary projects based on the ‘Gaia’ theory and our vital spiritual, ecological and
societal interconnections – also called the Universal God/dess blueprint (Isis, Venus, Shakti
and her thousand names), not the cliché glamour or New Age goddess, but the ancient and
timeless Mother force, unified with the Father force, to govern the elements, cosmic spheres
and all life cycles through unconditional love and wisdom.
With huge ambitions, I was urgent to find others who wanted to end global tyranny and
to remember and rebuild ultimate Paradise on Earth.
Many missing puzzle pieces of my journey actually began to fall into place in my late 20’s,
when I encountered something extremely dangerous concerning a Mars colony, global security,
the fate of the human race and the famous date of 2012 – a potent marker embedded in
my spiritual DNA and the basis of my ongoing interdisciplinary research.
My account here of my experience with this Black Ops project is based on years of meditation,
various spiritual practices and attunement to my ‘Higher Self.’ It’s not derived from any sort of
implanted memories or hallucinations.
No one has ever doubted my integrity, however if anyone in the future does, I stand strong to
my truth, and many people in my life stand witness to me. As a creed in whistleblower lands,
the best and safest place to hide is out in the open, and so I am choosing to come forward.
Also, I have nothing more to lose. And everything to gain in helping humanity grow.
This is the part of my path I shared with my friend Laura Magdalene Eisenhower,
who is an incredible healer, guide and creative soul.
Right after 9/11 and in the midst of much global turmoil, we synchronistically met at
a psychic program, where we were recognized for our charismatic presence and accurate skills.
Encountering her vibrant and free-loving spirit and seeing her symbol-weaving ancient story
in her tattoos and talismans, I felt an instant recognition.
Realizing we had been experiencing parallel traumas, since childhood and in the program,
we also left the place at the same time.
Several years of friendship later, in the spring of 2006 in Washington DC, I met her and her
new romantic partner, who I will call Agent X. He claimed to know himself archetypally
as Joseph of Arimathea/Osiris/Orion and affiliated with different, interlinked secret societies, e.g.
Knights Templar and Freemasons.
He and Laura quickly formed an intimate relationship,
and I helped conduct a ‘Divine Union’ rites of passage for them.
Agent X revealed that his group had identified her through her bloodline, as the matrilineal
great-granddaughter of 34th U.S. President Eisenhower (and the Allied Commander who
defeated Hitler). As well, they knew her as a unique reincarnation of Magdalene/Sophia/Isis
(ever since she was young, many psychics have recognized her).
He also said his group was interested in her twin sons, who they knew as Romulus and Remus
(founders of Rome) and the hero twin archetypes in the Mayan prophecy.
They had a list of male partners, who she could be with in possible timelines, and he was
one of them. They targeted her (especially her heart) and these men through electromagnetic
or psychic weaponry, and indeed many men tried to destroy her throughout her life.
Agent X admitted that he (as well as his parents) was implanted with a chip and had a multiple
personality disorder, which involves very sudden robotic and abusive behaviors.
This is the typical profile of someone who was subjected to well-documented
MK-ULTRA experiments and multi-generational occult ritual abuse.
While Agent X didn’t reveal what he knew about me or how I could have been identified
in various timelines, Laura herself has confirmed to me my own inner knowledge of my very few
incarnations. Like her, I’ve met also a series of men who were variously Nazi manipulated and
ET abused since childhood and who created very uneasy environments for me.
Their data about Laura and her partners were seemingly gathered through a time viewing device
(which they had called ‘Looking Glass’) or ‘Orion’s Cube’ or possibly through remote viewing or
even time travel – all part of their cadre of top classified technologies (already disclosed in
increasing black projects literature).
Credible whistleblowers have proposed too that I could be a ‘person of interest.’
This includes lawyer Andrew D. Basiago who was recruited into DARPA’s Project Pegasus
in the 1960’s-70’s to teleport to Mars based on Tesla technologies, and who also attests
to the use of time devices known as ‘Chronovisors’ in order to collect data about future leaders,
as well as able to track past lives:
• http://www.projectpegasus.net
• http://www.projectmars.net
Also, another insider, time scientist and ex-Air Force employee David Lewis Anderson
(below video), recently has resurfaced in public to confirm the experiments of time travel and
Project Pegasus activities, and how the misuse of time technologies is an issue that can’t be
at all underestimated.
Meeting Agent X for the first time, he gave me much confidential and startling intel
about what he and his group knew:
• How the military was monitoring numerous ET races (around the ‘57’ number to which Sgt. Clifford Stone famously testifies), and the reality of the much speculated 1954 treaty system between Eisenhower and ETs.
• The dramatic energetic changes in the solar system and galaxy (I later researched
Richard Hoagland
and David Wilcock’s credible analysis on interplanetary climate change,
and on solar cycle 24 & coronal mass ejections, and global cataclysmic consequences).
• How a distinct threshold of white light (or blankness for the unimaginative) occurs in 2012, as seen through Looking Glass (a couple of years later, Wilcock releases a video about this).
• The revolutionary leaps happening on the photonic or quantum level.
• How the government eschatologically was aware of the role of key
ancient Egyptian archetypes/reincarnations, the Great Pyramid and
astrological alignments (see the corroborating work of researchers
Robert Bauval, and Graham Hancock).
In the context of these profound shifts in the universe, Agent X discussed his brilliant vision
for a new space initiative – an awe-inspiring plan to explore the next frontiers of space in service
to mankind’s unity and consciousness expansion.
His plan included:
• a colonization mission to Mars or Moon as a commercial-government-academic partnership
• a separate, firewalled academy to train new explorers in multidimensional living
and prepare the public for First Contact with ETs.
He designated me as the main fundraiser for the space initiative as well as the head of his proposed
academy. He intuitively saw that I was destined to do this and greatly encouraged my initiative.
This vision to terraform Mars and develop a 21st century Starfleet Academy activated
my childhood sparks to venture through space. Though to be given an opportunity to help
colonize off-worlds was something I would never have thought of in a million years.
Having never shied away from an adventure, I dove right in and started adapting my lifelong
temple-theater vision.
In congruence, I also just had started working with a futures nonprofit think-tank on
another quixotic plan – a world-class ecocity project that could exemplify, foster and
network emerging sustainable hubs around the planet, which are urgently necessary to
resettle rapid, mass migrant populations.
Moreover, in my typical over-committing style, while I was formally consulting for small leadership
trainings and web start-ups, I was developing a venture philanthropy model that could fund
many landmark projects with my collaborators, who were all leading pioneers in sustainable
building, integrative health, virtual entertainment and telecommunications.
We all had great individual drive and pragmatic solutions to help mankind, yet what we needed
was a solid, cooperative and financially rewarding infrastructure, which is what Agent X was
looking for as well.
My science and business background wasn’t extensive, but as a quick multidimensional learner,
I could immediately spot key alliances and best of breed technologies. I had made links to
significantly wealthy people for my various ventures, but I continuously questioned why the heck
with all their connections would they want me involved.
Well, Agent X recognized my leadership and visionary skills, and wanted funds coming
through his intimate allies, in order for him to achieve some equal footing with his senior advisors.
He assembled a core team headed by chief scientist Dr Harold E. (Hal) Puthoff, the well-documented
scientist of zero point field physics, HAARP, remote viewing and mind control technologies,
who was educated at Stanford and sponsored by the CIA, various government agencies
and private corporate interests.
Agent X proposed a board of directors involving the most renowned futurists, astronauts
and space entrepreneurs, who were all affiliated or already working with his circle.
He also was conversing with a state senator and suggesting that the Air Force could be shuffling
tens of thousands for this project. Under high confidentiality, I would receive many project emails
and occasional phone updates about team meetings and business plans.
They discussed the rush toward commercialization and privatization, the Space Race with
other major nations and the Disclosure playbook involving many secret societies.
I was sent for review many scientific documents about the key technology components:
• propellantless propulsion or faster-than-light warp drive
• plasma ion fusion
• ultraconductors
As well as regarding:
• vehicle design, land, air and aquatic robotic rovers
• artificial intelligence
• advanced communications and knowledge transfer
• architectural compositions and other capacities to terraform and replenish life on Mars
I also was asked to look into other aerospace academies, virtual reality, psychotronic weapons,
invisible shielding and a whole spectrum of exotic, quantum access technologies.
Needless to say, the information influx and research potentials completely overwhelmed
the bejeezus out of me. It was both my sci-fi dream and nightmare coming into reality.
At the time, I was not well-versed with the hidden space program as publicly leaked by
several whistleblowers in alternative media. From a psychic view, I knew the existence of ETs
and intergalactic travel, but I didn’t understand what the government was doing in classified
Black Budget projects.
So throughout my experience, I was trying to grasp the legitimacy of this recruitment,
as well as the reality of the mind-boggling complexity of scientific, political and cultural
aspects this project entailed.
For 6 months, in the swirl of my other projects, I was envisioning all intersecting project
possibilities, researching, strategizing, designing and locating allies. In constant fear of
asking for more information, I kept questioning what was being hidden and what was
being made visible to me, and why.
Throughout my work, I questioned whether this plan was either:
• A Cover Project
It could have been a way to distract the public from the military’s ‘real’ activities, i.e., their already existing spacefleet and colony on Mars (as whistleblowers testify). He did reveal that one of his adjacent companies was acting as a ‘cover.’ Agent X could have been teleporting to Mars already using remote viewing methods (as exopolitician Alfred Webre describes at http://peaceinspace.net), or through ‘stargates’ – aka wormholes that enable a travel shortcut between two points in vast space or time.
As well, stargate is a lost soul technology of ancient Egyptian, Sumerian and Holy Grail symbolism (see mythologist William Henry’s analysis of the Illuminati and Stargates). Agent X never mentioned using the publicly leaked ‘Jumproom’ teleportation method (like an elevator), but he very well could have been.
• A Real Project
Along with their zero point physics, remote viewing and stargate travel capacities, they could actually have been set on building new spacecraft that could transport large cargo for Mars terraforming purposes. Whistleblowers testify that travel through artificial stargates was allowed only for humans carrying no metal components (see whistleblower site).
The project also could have been a distorted version of NASA’s public Ares/Orion Mars mission and
affiliated with other private agendas (see here).
I suspected that Agent X was intersecting with a wide variety of factions – some that seemed benevolent but most were mixed and very dark.
So basically this appeared to be a very big convoluted and manipulative mess.
Underlying this space project was a massive amount of terror, survival and manipulation,
and the urgent need to escape deliberately planned (through wars and a global depopulation agenda)
or foreseen cataclysms (through Looking Glass) in 2012.
Agent X was convinced that Armageddon was taking place, with World War 3/4 on the horizon, and that there was nothing anybody could do about it. He told us we were part of the select few who would escape on ships called ‘Sophia’ and ‘Merlin,’ and that we would seed a new civilization on Mars (whereby the Moon was the initial publicized goal).
We realized that they wanted to control the transformative effect of Isis (Laura as
Magdalene-Isis aspect and I, Quan Yin-Isis), Osiris and the Holy Grail (i.e., royal, divine blood)
power (see William Henry's research), and escalate their dominance on and off planet.
Their plans later would become mainstream propaganda as revealed in Hollywood’s ‘2012’ movie
and its marketing site, which even describes a survival lottery and various escape havens.
Since my early 20’s, I intuitively knew about the 2012 period as a quantum leap to a new world
for everyone and for me personally, and read books about it. So when Agent X talked about
how this mission was tied to this date, a huge concrete puzzle piece landed.
My entire life experiences were preparing me for this scenario.
This group of men I was involved with was incredibly veiled and had a lot of misused power,
and I knew I had to climb their hierarchal ladder and try to steer the project away from any doom
and quickly toward building Paradise for everybody on Earth, Mars, wherever and
anywhere in the cosmos.
Meanwhile, Laura had been reluctantly preparing to be the main multidimensional teacher of
the proposed academy, and courageously was refusing to go along with Agent X’s belief of
a negative global outcome and being taken to Mars. She was fighting for her strong belief
in how we could regenerate life fully on Earth.
While we were on opposite coasts, I was there to hear about a disturbing series of events
involving Agent X’s erratic and paranoid mind controlled behaviors, especially after his
meetings with his team.
I would give her psychic readings about the situation, such as telling her phrases that
Agent X was programmed deliberately to repeat, and she would confirm my accuracies.
One time in our phone conversations, I suddenly became frightened about our line
being tapped, and about half an hour later, Agent X notified that he was circling her block
because agents indeed were spying. My intuitions have always alerted me to danger,
and so I feel very protected.
While I have felt internal chasms, I have never been afraid of being externally threatened,
and for better or worse, I don’t really take that many precautions.
By Winter 2006, after much personal doubt about my involvement in the project and
endless external challenges, as well as seeing the danger involved (including Agent X discovering
a murdered body), I naturally transitioned out of my role, and Laura and Agent X’s relationship
ended as well.
I discovered later that the project went public and entered the Google Lunar X Prize competition, while the underlying premise of possible 2012 catastrophes and a populated Mars remained hidden.
As of December 2009, Agent X is still publicly mentioning his research and development.
Protection of Agent X’s identity is necessary because he is a victim of brutal mind control
who needs great healing and transformation.
After leaving the situation, I realized that there was no way they could fulfill their dubious
plans successfully. They were not supposed to lose the game in their minds, but they were not
going to give up easily. We thought of the various schemes they could conceive to gain control
through timeline access, cloning and their whole bag of crazy, misused technologies.
However, the main battle for Laura and me was over.
Later, I came across a prominent whistleblower (who has since been heavily compromised
and hence publicly discredited) and his 2007 account of how the timeline
(http://projectcamelot.org/2009.html) has shifted into the positive one, as seen through
Looking Glass.
This resonated deeply.
Again, my story may sound way far out in space somewhere for those newly encountering
conspiracy facts (not just theories), but I ask that everyone do their own research on all my
discussed topics for yourselves. I hope my personal recruitment will make more sense later
to those who become better informed. I myself am on a continuing quest for more answers
to my own puzzle and to our macrocosmic one of course.
For corroborating evidence and resources, please see here.
These are the supporting reference topics:
• 2012: Ascension Timeline | Disaster Timeline | Venus/Mayan Calendar
| Pyramids | Crop Circles | New/Classified Science | Mass UFO Landings
• Archetypes: Goddess Gaia | Sacred Union | Reincarnation/Multidimensional Lives |
Reptilians/Marduk | ET Races& Nibiru
• Exopolitics: Media Programming | Mind Control/Psychic Weapons |
Time Travel Technology | Hidden Mars Colony | US Presidents UFO –
Eisenhower &Obama | HAARP Earth Control | Space Race
To learn more about 2012 and ways to empower yourself and your community, please see here.
Now, in the beginning of 2010, having regained my voice and livelihood that had been
hyperdimensionally suppressed since I was young (through cutting-edge, holistic voice analysis
methods, my vocal harmonics have been analyzed as being capable of reaching
the collective subconscious)… and having seen enough strong if not irrefutable corroborating
evidence and testimony that fill in the gaps of our recruitment experience, I am asking that
serious investigations be made into the real agenda and cover-up, of the currently
existing Mars colony as a survival plan for a planned or foreseen disastrous 2012 Earth timeline.
I truly want to see the END of the era of secrecy and the critical release of quantum leap
solutions and technologies for the benefit of all.
There are simply no more excuses for apathy, resistance, fear, ridicule, ignorance or dismissal…
or for following the status quo, i.e., the perpetuated, embedded lies of mainstream media and society.
The amount of ethical and independent news articles, shows and forums is rising daily to reveal
the shadow players hiding truths from the populace. The path toward truth is difficult indeed –
in separating the Truth from planted disinformation (in both mainstream AND alternative media),
or simple misinformation (such as lack of compassionate insight), but it is the ONLY path to take.
The trick is to develop the art of discernment and a relationship with your Higher Self – to determine
what resonates to your entire mind, body, heart and soul.
I am incredibly grateful for all the brave individuals who have stepped forward with
their truth and facing seemingly insurmountable challenges.
As we continue toward 2012, in this ‘time acceleration matrix’ of the Mayan Calendar
(see researchers Barbara Hand Clow and Carl Johan Calleman), I/We ask all those who have been
victims of Black Ops or time viewing and quantum access activities, and all those on a global
healing mission at whatever stage, to gather, find answers and implement solutions together.
We must come together and stand for all the children of Mother Gaia, those without voices,
women and men, young and elderly, disenfranchised and without adequate food or technology,
animals and trees – for all who want a world filled with everlasting peace, wellness and prosperity.
As a citizen of the United States, I/We urgently ask President Barack Obama to foster his pledge
to Open Government policy and truly stand for the best interests of the people of our nation.
I/We ask that he live up to his Nobel Peace Prize and stand for the entire global family.
So, I/We call for open congressional hearings and scrupulous investigations into the trillion-dollar
Black Budget Operations. In deep forgiveness, I would be there to testify as a witness.
I/We demand that the hidden controllers of our planet emerge from their shadows: those who
have been destroying the planet through war, earth disasters, pollution, pandemics,
food poisoning, media lies, mind control, subversion of the divine female/male and beyond.
They will be put on trial, not for retribution but for re-integration into society,
under amnesty as long as they stop their evil plans for the human race.
The cycle of violence MUST end, and compassionate and restorative justice will prevail.
I/We call every single person of every age, race, faith and background to stand up for the Truth...
of our loving and laughing eternal soul nature. Our internal reunion with the Divine Feminine
and Masculine. Our power of consciousness to heal, travel and manifest instantaneously.
And our magnificent galactic heritage and destiny.
We must rebuild our civilization and restore our lost Paradise, Atlantis, Shambhala,
or Heaven on Earth.
Every feeling, thought and act right now can change things immeasurably...
Please never underestimate your power. We each can make the Age of Peace and
Enlightenment happen in this timeless moment.
We are mere reflections of the infinite cosmic soul. Incredibly crazy, ridiculous,
enlightened and ever joyous... through it all... We are all One. We are ALL in this together.
Eternal Love and Wisdom to All,
Ki’ Lia
Posted by hacksecret on Sat Mar 20, 2010 1:35 am
by Dr. David Lewis Anderson
from AndersonInstitute Website
The comparison chart from the original article covers ten different time control technologies and methods. Key characteristics are identified for each and described below (a minimal code sketch of this encoding follows the list of methods). In the chart, a solid circle indicates a characteristic is present; an empty circle indicates it is not.
• "Time Control" indicates whether travel to future, past, or both are possible.
• "Matter Transport" is solid if both matter and information can be transported, empty if
only information can be transported.
• "Tech Viability" is solid if the technology or method is viable with present state-of-the-art
technology or within two generations.
• "Possible Without Exotic Materials" is solid if materials required are available
today or within two generations.
• "Relatively Low Input Power" is solid if time control is achievable within power generation
capabilities available today or within two generations.
The time control technologies and methods compared include the following:
• Quantum Tunneling
• Near-Lightspeed
• Alcubierre Warp Drive
• Faster-than-Light
• Time-warped Field
• Circulating Light Beams
• Wormholes
• Cosmic Strings
• Tipler Cylinder
• Casimir Effect
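To make the legend above concrete, here is a minimal, hypothetical Python sketch (not part of the original material) of how one row of the comparison chart could be encoded; the flag values in the example are placeholders, since the chart's actual circle markings are not reproduced in this post.

from dataclasses import dataclass

@dataclass
class TimeControlMethod:
    """One row of the comparison chart described by the legend above."""
    name: str
    time_control: str          # "future", "past", or "both"
    matter_transport: bool     # True: matter and information; False: information only
    tech_viability: bool       # viable now or within two generations
    no_exotic_materials: bool  # required materials available now or within two generations
    low_input_power: bool      # power needs within present or near-term capability

# Hypothetical example entry -- these flag values are placeholders,
# not the chart's actual markings.
example = TimeControlMethod(
    name="Quantum Tunneling",
    time_control="past",
    matter_transport=False,
    tech_viability=True,
    no_exotic_materials=True,
    low_input_power=True,
)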
Quantum Tunneling
The correct wavelength combined with the proper tunneling barrier makes it possible
to pass signals faster than light, backwards in time.
One widely cited experiment passed signals through a 10 centimeter chamber containing cesium vapor. Key characteristics of quantum tunneling as applied to time control were presented in a chart in the original article; more detail describing the phenomenon follows below.
The effect is intrinsically quantum-mechanical, because the behavior of particles is governed by Schrödinger's wave-equation. Wave coupling effects mathematically equivalent to those called "tunneling" in quantum mechanics also occur with other kinds of waves, for example in optics and acoustics: a wave that cannot propagate in a second medium ("medium type 2") nevertheless penetrates into it to some extent, as an evanescent wave. When a quantum particle likewise appears on the far side of a classically forbidden region, it is said that the particle "tunnels" through the barrier.
The scale on which these "tunneling-like phenomena" occur depends on the wavelength of the traveling wave. For electrons, the thickness of "medium type 2" (called in this context "the tunneling barrier") must typically be on the order of nanometers for tunneling to be appreciable.
With Schrödinger's wave-equation, the characteristic that defines the two media discussed above is the kinetic energy the particle would have if it were located at a point: positive in the allowed medium, negative in the barrier. There is no inconsistency in this, because quantum objects are never located at a point: they are always spread out ("delocalized") to some extent, and the kinetic energy of the delocalized object is always positive. It is merely sometimes mathematically convenient to treat particles as behaving like points, particularly in the context of Newton's Second Law and classical mechanics generally (just as it is sometimes convenient, for waves, to use the mathematics of moving points). Clearly, a hypothetical classical point particle analyzed according to Newton's Laws could not enter a region of negative kinetic energy; a delocalized quantum object can, if conditions are right.
An approach to tunneling that avoids mention of the concept of "negative kinetic energy"
is set out below in the section on "Schrödinger equation tunneling basics".
Reflection and tunneling of an electron wave packet directed at a potential barrier.
The bright spot moving to the left is the reflected part of the wave packet. A very
dim spot can be seen moving to the right of the barrier. This is the small fraction
of the wave packet that tunnels through the classically forbidden barrier.
Also notice the interference fringes between the incoming and reflected waves.
Part of the wave packet passing through the barrier can be seen in the animation above. It is sometimes said that tunneling is forbidden in classical physics but allowed in quantum mechanics; this statement is a bit of a linguistic conjuring trick. As indicated above, "tunneling-type" evanescent-wave phenomena occur with other kinds of waves too; it has simply been in quantum mechanics that evanescent wave coupling has been called "tunneling". (However, there is an increasing tendency to use the label "tunneling" in other contexts too, and the names "photon tunneling" and "acoustic tunneling" are now used in the research literature.)
"transmission coefficient").
mathematical physics, or in any other simple way.
Mathematicians and mathematical physicists have been working on this problem since
approximately. In physics these are known as "semi-classical" or "quasi-classical" methods.
the "JWKB approximation").
versions of the theory.
for which exact mathematical solutions to the Schrödinger equation exist.
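To give a feel for what a transmission-coefficient calculation looks like, here is a minimal Python sketch of the standard textbook result (not from the original article) for an electron tunneling through a rectangular barrier of height V0 and width L, with energy E < V0:

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def transmission(E_eV, V0_eV, L_nm):
    """Exact transmission coefficient for a rectangular barrier (E < V0)."""
    E, V0, L = E_eV * EV, V0_eV * EV, L_nm * 1e-9
    kappa = math.sqrt(2.0 * M_E * (V0 - E)) / HBAR  # decay constant inside the barrier
    return 1.0 / (1.0 + (V0 ** 2 * math.sinh(kappa * L) ** 2) / (4.0 * E * (V0 - E)))

# A 1 eV electron against a 2 eV barrier: transmission falls off steeply
# as the barrier widens beyond a few tenths of a nanometer.
for width in (0.1, 0.3, 0.5, 1.0):
    print("L = %.1f nm  ->  T = %.3e" % (width, transmission(1.0, 2.0, width)))

Note that nothing in this calculation transports a signal backwards in time; the claimed superluminal aspect of tunneling experiments concerns the apparent group delay of the transmitted pulse, which remains controversial.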
Tunneling through a realistic barrier is a reasonably basic physical phenomenon, yet for many physics students it is the first occasion on which they encounter the "semi-classical-method" mathematics needed to solve the Schrödinger equation approximately for such problems.
Not surprisingly, this mathematics is likely to be unfamiliar, and may feel "odd". Unfortunately,
it also comes in several different variants, which doesn't help.
a particle is "really" point-like, and just has wave-like behavior. There is very little experimental
"really" delocalized and wave-like, and always exhibits wave-like behavior, but that in some
This second viewpoint is used in this section.
beyond the scope of this article on tunneling.
Although the phenomenon under discussion here is usually called "quantum tunneling" or
"quantum-mechanical tunneling", it is the wave-like aspects of particle behavior that are important
in tunneling theory, rather than effects relating to the quantization of the particle's energy states.
For this reason, some writers prefer to call the phenomenon "wave-mechanical tunneling".
Historically, George Gamow used tunneling in 1928 to explain the alpha decay of atomic nuclei, and the concept has since become central to nuclear physics, chemistry and, perhaps most importantly, semiconductor and superconductor physics. Devices such as the tunnel diode and the scanning tunneling microscope depend directly on quantum tunneling. Tunneling is also a source of current leakage in very-large-scale integrated electronics, contributing to the power drain and heating effects that plague high-speed and mobile technology; conversely, the scanning tunneling microscope exploits it to overcome the limiting effects of conventional microscopes (optical aberrations, wavelength limitations). In chemistry and biology, tunneling governs reactions involving the transfer of electrons and light nuclei such as hydrogen and deuterium. It has even been shown, in the enzyme glucose oxidase, that oxygen nuclei can tunnel under physiological conditions.
Near-Lightspeed Travel
Key characteristics of its application to time control and time travel were presented in a chart in the original article. The underlying effect is the time dilation of special relativity: a traveler moving at close to the speed of light ages more slowly than observers left behind, effectively traveling into their future.
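As an illustration of the standard physics behind this method (not from the original article), the traveler's elapsed proper time is the Earth-frame time divided by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2); a minimal Python sketch:

import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Lorentz factor for speed v in m/s (v < c)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def traveler_years(earth_years, fraction_of_c):
    """Proper time experienced by a traveler moving at the given fraction of c."""
    return earth_years / lorentz_gamma(fraction_of_c * C)

# At 99.9% of c, ten Earth years pass while the traveler ages under six months.
print("%.2f traveler-years per 10 Earth years" % traveler_years(10.0, 0.999))

This is strictly one-way travel to the future; nothing in special-relativistic time dilation allows a return to the past.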
Alcubierre Warp Drive
The Alcubierre drive is a speculative idea, based on a solution of Einstein's field equations, in which a spacecraft achieves apparent faster-than-light travel by causing the space in front of the spacecraft to contract and the space behind it to expand. The resulting spacetime exhibits features reminiscent of the fictional "warp drive" from Star Trek, which can travel "faster than light" (although not in a local sense - see below). Key characteristics of its application to time control and time travel were presented in a chart in the original article; more detail describing the effect follows below.
Alcubierre Warp Drive Description
The proposal contracts space ahead of the ship and expands it behind. The ship rides inside a "warp bubble" that is "faster than light" in the sense that, thanks to the contraction of the space in front of it, the ship reaches its destination sooner than a light beam traveling through normal space would, even though it never locally overtakes light inside the warp bubble.
Alcubierre Metric
The Alcubierre Metric defines the so-called warp drive spacetime: a Lorentzian manifold in which a "warp bubble" can carry its contents to a destination faster than light would travel in normal space. The coordinates Alcubierre chose exhibit the desired "warp drive" effects clearly and simply.
Mathematics of the Alcubierre drive
Using the 3+1 (ADM) formalism of general relativity, the spacetime is described by a foliation of space-like hypersurfaces of constant coordinate time $t$. The general form of the metric is

$ds^2 = -(\alpha^2 - \beta_i \beta^i)\,dt^2 + 2\beta_i\,dx^i\,dt + \gamma_{ij}\,dx^i\,dx^j$

where $\alpha$ is the lapse function and $\beta^i$ the shift vector; $\gamma_{ij}$ is a positive definite metric on each of the hypersurfaces. The particular form that Alcubierre studied is defined by:

$\alpha = 1,\qquad \beta^x = -v_s(t)\,f\big(r_s(t)\big),\qquad \beta^y = \beta^z = 0,\qquad \gamma_{ij} = \delta_{ij},$

with the "top hat" shape function

$f(r_s) = \frac{\tanh\big(\sigma(r_s + R)\big) - \tanh\big(\sigma(r_s - R)\big)}{2\tanh(\sigma R)}$

with $R > 0$ and $\sigma > 0$ arbitrary parameters, where $v_s(t)$ is the speed of the bubble's center $x_s(t)$ and $r_s(t)$ the distance from that center. Alcubierre's specific form of the metric can thus be written:

$ds^2 = -dt^2 + \big(dx - v_s f(r_s)\,dt\big)^2 + dy^2 + dz^2.$
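As a quick numerical illustration (again, not from the original article), the shape function f above is close to 1 inside radius R and close to 0 outside it, which is what confines the distortion to a "bubble"; a minimal Python sketch:

import math

def shape_function(r_s, R=100.0, sigma=0.1):
    """Alcubierre 'top hat' shape function: ~1 inside radius R, ~0 outside,
    with the transition sharpness controlled by sigma."""
    return (math.tanh(sigma * (r_s + R)) - math.tanh(sigma * (r_s - R))) / (
        2.0 * math.tanh(sigma * R)
    )

# f drops from ~1 to ~0 across the bubble wall near r_s = R = 100.
for r in (0.0, 50.0, 100.0, 150.0, 300.0):
    print("f(%6.1f) = %.6f" % (r, shape_function(r)))

Here R and sigma are the arbitrary parameters from the metric above; the numerical values chosen are illustrative only.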
The existence of exotic matter is not theoretically ruled out, the Casimir effect and the accelerating
universe both lending support to the proposed existence of such matter.
However, generating enough exotic matter to support warp travel (and also to keep open the 'throat' of a wormhole) is thought to be impractical, and some have argued that a warp drive is impossible in the absence of exotic matter; the question has not been settled once and for all.
Physics of the Alcubierre drive
Significantly, the Alcubierre metric has some apparently peculiar aspects. A ship at the center of the bubble is in free fall, and tidal forces can be made very small within the volume occupied by the ship. Alcubierre interpreted his "warp bubble" in terms of a contraction of "space" ahead of the bubble and an expansion behind it, as measured by the so-called ADM observers, though others have argued that this expansion and contraction is not essential to his metric.
This practice means that the solution can violate various energy conditions and
require exotic matter.
a way to distribute the matter in an initial spacetime which lacks a "warp bubble" in such a way
spacetimes violate various energy conditions.
could perhaps be physically realized by clever engineering taking advantage of such
quantum effects.
Counterarguments to these apparent problems have been offered, but not everyone
is convinced they can be overcome.
By contracting the 3+1 dimensional surface area of the 'bubble' being transported by the drive, the energy required can be greatly reduced. Even so, a ship using an Alcubierre drive would not be able to go dashing around the galaxy at will: it could travel only along routes that had already been equipped with the necessary infrastructure.
Miguel Alcubierre
Because the interior of the bubble is causally disconnected from its leading edge, a crew inside cannot steer the bubble or switch it off by any action outside the bubble. Since the required arrangements cannot be made while "in transit", the bubble cannot be used for the first trip to a distant star. These objections suggest that, even if the metric is physically meaningful, serious practical obstacles would remain to constructing an Alcubierre Drive.
Back to Contents
Faster-than-Light Travel
Note that special relativity does not forbid the existence of particles that travel faster than light at all times. General relativity also admits distortions of spacetime through which widely separated points can be connected (with the object moving subluminally through the distorted region); however, the physical plausibility of these solutions is uncertain. The key characteristics of the application of faster-than-light travel for time control and time travel are presented in the picture below. This is followed by more detail describing the effect below.
Outside of mainstream physics, others have speculated on mechanisms that might allow faster-than-light travel, though such proposals have found little support in the physics research community.
Fictional depictions of superluminal travel and the mechanisms of achieving it are
also a staple of the science fiction genre.
The speed of light in a vacuum, c, is exactly 299,792,458 meters per second, or about 186,282 miles per second.
Light travels more slowly in a material medium, and charged particles can exceed the speed of light in that medium (while still slower than c), leading to Cherenkov radiation. Neither the slowing of light in a medium nor Cherenkov radiation involves exceeding c, and thus neither qualifies as FTL as described here.
In special relativity, the coordinates used by different inertial observers are related by Poincaré transformations. These transformations have important implications:
• The relativistic momentum of a massive particle increases with speed in such a way that an object travelling at the speed of light would have infinite momentum (see the numerical sketch after this list).
• Accelerating an object of non-zero rest mass to c would require either infinite time with any finite acceleration, or infinite acceleration for a finite amount of time.
• Either way, such acceleration requires infinite energy. Going beyond the speed of light in a homogeneous space would hence require more than infinite energy, which is not generally regarded as a sensible notion.
• Some observers with sub-light relative motion will disagree about which occurs first of any two events that are separated by a spacelike interval; an object travelling faster than light in one frame would therefore appear, to some observers, to travel backward in time.
• These difficulties have motivated the speculative hypothesis of possible Lorentz violations at a presently unobserved scale (for instance the Planck scale); some such approaches take Lorentz invariance to be an emergent, approximate symmetry, somewhat as thermodynamical statistical laws emerge from microscopic physics.
• While special and general relativity do not allow superluminal motion through space, they place no such restriction on regions that move with space rather than moving through space (as in the expansion of the universe).
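A short numerical sketch of the first point above (added for illustration; the 1 kg rest mass is a hypothetical example value), showing how the Lorentz factor, momentum, and kinetic energy blow up as v approaches c:

```python
import math

c = 299_792_458.0     # speed of light, m/s
m = 1.0               # rest mass, kg (hypothetical example value)

def gamma(v):
    """Lorentz factor 1/sqrt(1 - v^2/c^2); diverges as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / c)**2)

for frac in (0.5, 0.9, 0.99, 0.999, 0.999999):
    v = frac * c
    p = gamma(v) * m * v                 # relativistic momentum
    K = (gamma(v) - 1.0) * m * c**2      # relativistic kinetic energy
    print(f"v = {frac:>9}c  gamma = {gamma(v):10.2f}  p = {p:.3e} kg m/s  K = {K:.3e} J")
```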
Despite the established conclusion that relativity precludes FTL travel,
some have proposed ways to justify FTL behavior:
Radically Curve Spacetime Using Slip String Drive
One proposal claimed not to violate relativity is Andrew L. Bender's "Slip String Drive". Bender proposes traveling by completely isolating a region of spacetime from the rest of our universe using Einstein's gravity waves. These compression waves of spacetime would, in his scheme, carry the enclosed region along at an effective speed no longer limited by relativity.
Time passes normally within the isolated region, eliminating the possibility
of paradox or time travel.
Ignore special relativity
Experimental evidence strongly supports Einstein's theory of special relativity as the correct description of high-speed motion, with Newtonian mechanics holding only as an approximation at conventional (much less than c) speeds. Similarly, general relativity is an overwhelmingly supported and experimentally verified theory of gravitation. Within both theories, conventional acceleration from subluminal to superluminal speeds is not possible.
Faster light (Casimir vacuum and quantum tunneling)
Einstein's equations of special relativity postulate that the speed of light in
a vacuum is invariant in inertial frames.
Casimir Vacuum Force
The experimental determination of c has been made in ordinary vacuum, but the vacuum between two closely spaced conducting plates (a Casimir vacuum) is subtly different, and light is predicted to travel through it very slightly faster. This is known as the Scharnhorst effect. The predicted change is tiny - about one part in 10^36 for plates separated by about a micrometer. It has been objected that the plates would constitute a "preferred frame" for FTL signaling, threatening causality violations, and some have invoked Hawking's speculative chronology protection conjecture, which suggests that feedback loops of virtual particles would create "uncontrollable" instabilities. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-c signals, was incomplete, and that no usable superluminal signal can in fact be sent this way.
Separately, physicist Günter Nimtz and colleagues claim to have violated relativity experimentally by transmitting photons faster than light through a tunneling barrier.
Nimtz told New Scientist magazine:
However, other physicists say that this phenomenon does not allow information
to be transmitted faster than light.
Give up causality
Another approach is to accept apparent FTL mechanisms such as wormholes, which connect two points without going through the intervening space. Such schemes generically permit closed timelike curves (i.e., time travel) and causality violations. Causality is not strictly required by the known laws of physics, and some argue that time travel need not generate paradoxes; this is the Novikov self-consistency principle.
reasonable choice of cosmological coordinates.
passing through the region.
Give up (absolute) relativity
Because special relativity is so well tested, any modification of it - for instance, one that introduces a preferred frame - must necessarily be quite subtle and difficult to measure. One family of proposals along these lines is doubly special relativity, associated with Giovanni Amelino-Camelia and João Magueijo. In some such models certain signals or particles might be able to exceed c, and some frame might be preferred by conventional measurements of natural law.
Non-physical realms
Science-fictional "hyperspace" travel relies on the idea that points distant in ordinary space might be points that are close together in hyperspace. No such non-physical realm has been proposed by mainstream science.
Space-time distortion
Because of the universal expansion, sufficiently distant galaxies have a recession velocity which is faster than light. Distortions of spacetime - warp bubbles and wormholes - offer, in principle, ways of effectively traveling faster than light, potentially connecting arbitrarily distant points in space. A traveler taking such a shortcut would beat a beam of light traveling outside the wormhole, and might even reach regions that the expansion carries out of the reachable part of our universe as time moves on.
Heim theory
A paper proposing a propulsion scheme based on Heim theory received some media attention in January 2006, but the theory remains outside the mainstream and there have been few serious attempts to conduct further experiments.
Quantized space and time
As given by the Planck length, there is arguably a minimum amount of 'space' that can exist in this universe (1.616×10^-35 meters). If space and time come in such discrete units, some speculate that motion might proceed in discrete hops - could an object then effectively outpace a light beam while using only a finite amount of energy and acceleration?
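The Planck length quoted above follows directly from three fundamental constants; the quick check below (added for illustration) reproduces it:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J s
G    = 6.674_30e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 299_792_458.0       # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length
t_p = l_p / c                      # Planck time
print(f"Planck length: {l_p:.4e} m")   # ~1.616e-35 m, matching the figure above
print(f"Planck time:   {t_p:.4e} s")   # ~5.39e-44 s
```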
Special relativity does, however, permit a class of particles to exist which always moves faster than light.
The hypothetical elementary particles that have this property are called tachyons.
Physicists sometimes regard the appearance of mathematical structures similar to tachyons in a theory as a sign that the theory needs further refining.
General relativity
General relativity allows spacetime itself to be distorted and to carry an object along with it. Proposals of this type, such as the Alcubierre drive and traversable wormholes, require hypothetical exotic matter or negative energy, and they generically give rise to closed timelike curves.
FTL phenomena
In these examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light.
Daily motion of the Heavens
To an observer using coordinates that rotate with the Earth, the heavens sweep overhead once a day; on such a geostationary view Alpha Centauri has an apparent speed many times greater than "c". This is an artifact of the rotating coordinate system, not real motion through space.
Light spots and shadows
If a laser beam is swept rapidly across a sufficiently distant object, the spot of light (or a shadow) can move across the surface faster than c. However, nothing physical travels along the spot's path, and no information is conveyed from one point on that path to another.
Closing speeds
The rate at which two objects approach each other in a single frame of reference is called the mutual or closing speed, and it may exceed c: two beams of particles each moving at 0.8c toward one another close at 1.6c in the frame of the accelerator. However, the speed of one beam as measured in the rest frame of the other must be obtained from the relativistic velocity-addition formula, according to the principle of special relativity, and comes out to about 0.98c, which is less than the speed of light.
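A minimal sketch of the relativistic velocity-addition rule invoked above (the 0.8c figures are illustrative):

```python
def add_velocities(u, v):
    """Relativistic velocity addition w = (u + v)/(1 + u*v), in units where c = 1."""
    return (u + v) / (1.0 + u * v)

print(add_velocities(0.8, 0.8))    # ~0.976: closing speed is 1.6c, relative speed < c
print(add_velocities(0.99, 0.99))  # still below 1, however close the inputs get to c
```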
Proper speeds
If a spaceship travels to a planet one light year away (as measured in the Earth's rest frame) at high speed, the time taken to get there can be less than one year as measured by the traveler's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as measured in the Earth's frame, by the time taken, measured by the traveler's clock, is known as a proper speed or proper velocity. It can exceed c, but there is no contradiction, because it does not represent a speed measured in a single inertial frame. A light beam sent along the same route would still arrive at the destination before the traveler.
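The following sketch (added for illustration) computes the proper speed w = γv just described; it exceeds c at high v even though no inertial observer ever measures a speed above c:

```python
import math

def proper_speed(v):
    """Proper speed w = gamma * v (coordinate distance per unit of the
    traveler's proper time), in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * v

for v in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {v}c  ->  proper speed = {proper_speed(v):.2f}c")
```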
Phase velocities above c
The phase velocity of an electromagnetic wave in a medium - for example, an X-ray traveling through glass, or a microwave mode in a hollow waveguide - can routinely exceed c, the vacuum velocity of light. The phase velocity describes the motion of a point of constant phase on an idealized, infinitely long wave; such a wave carries no modulation and so cannot convey any information.
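A small sketch of a standard case, the hollow waveguide (the 10 GHz cutoff is a hypothetical example value): the phase velocity exceeds c near cutoff, while the group velocity, which carries the signal, stays below c (for the ideal guide, v_phase · v_group = c²):

```python
import math

c   = 299_792_458.0   # m/s
f_c = 10e9            # waveguide cutoff frequency, Hz (hypothetical example)

def v_phase(f):
    """Phase velocity of a waveguide mode above cutoff: c / sqrt(1 - (f_c/f)^2) > c."""
    return c / math.sqrt(1.0 - (f_c / f)**2)

for f in (10.5e9, 15e9, 30e9):
    vp = v_phase(f)
    vg = c**2 / vp     # group velocity; always below c
    print(f"f = {f/1e9:4.1f} GHz:  v_phase = {vp/c:.2f}c,  v_group = {vg/c:.2f}c")
```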
Group velocities above c
The group velocity of a wave can also exceed c under certain conditions, for instance in media with anomalous dispersion. In such cases the leading edge of a pulse is reshaped so that the pulse peak appears to advance faster than c; all the information in the pulse, however, can be obtained before the pulse maximum arrives, and no information is delivered any sooner than it would be without this effect.
Universal expansion
According to Hubble's law, the recession speeds of distant galaxies grow in proportion to their distance, and sufficiently distant galaxies recede faster than light. These speeds are defined in comoving coordinates, which are often described in terms of the "expansion of space" between galaxies rather than motion through space. During the inflationary epoch, the universe is thought to have expanded by a factor of around 10^20 – 10^30.
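A one-line estimate (added for illustration; a Hubble constant of roughly 70 km/s/Mpc is assumed) of the distance beyond which Hubble-law recession exceeds the speed of light:

```python
c  = 299_792.458   # speed of light, km/s
H0 = 70.0          # Hubble constant, km/s per Mpc (approximate present-day value)

d = c / H0         # distance at which recession speed v = H0 * d reaches c
print(f"v = c at ~{d:.0f} Mpc  (~{d * 3.262 / 1000:.1f} billion light years)")
```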
Astronomical observations
Apparent superluminal motion is observed in many radio galaxies,
blazars, quasars and recently also in microquasars.
The effect is an illusion caused by the emitting object having a component of motion toward the observer when the speed calculations assume it does not; nothing actually exceeds c, though the emitting material is moving at close to the speed of light. Relativistic jets are believed to accelerate particles to such speeds.
Quantum mechanics
Certain phenomena in quantum mechanics, such as quantum entanglement,
appear to transmit information faster than light.
According to the No-communication theorem these phenomena do not allow true faster-than-light communication: the statistics observed at one location depend only on the local measurement and on the quantum state of the shared system and all of its environment, not on anything a distant experimenter chooses to do. Since the underlying behavior doesn't violate local causality or allow FTL signaling, it follows that neither do the phenomena built upon it.
The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than c, even in vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction.
To quote Richard Feynman:
…there is also an amplitude for light to go faster (or slower) than the conventional speed of light. You found out in the last lecture that light doesn't go only in straight lines; now, you find out that it doesn't go only at the speed of light! It may surprise you that there is an amplitude for a photon to go at speeds faster or slower than the conventional speed, c.
– Richard Feynman
These amplitudes cancel out over macroscopic distances, so photons are observed to travel at c on average.
There have been various reports of superluminal transmission in optics - most often in the context of a kind of quantum tunneling phenomenon. Usually, such reports concern a superluminal phase velocity or group velocity rather than superluminal transmission of information.
There has sometimes been confusion concerning the latter point.
Quantum teleportation transmits quantum information at whatever speed is used to send the accompanying classical information - at best, the speed of light.
the same being true for the other pairs. Right there you have a nibble's worth of
the entangled x bits are updated.
– SkewsMe.com
Hartman effect
The Hartman Effect
The Hartman effect is the tunneling effect through a barrier where
the tunneling time tends to a constant for large barriers.
For large gaps between the prisms the tunneling time approaches a constant, and thus the photons appear to have crossed with a superluminal speed. However, it has been argued that no signal can be sent faster than c this way, because the tunneling time should not be linked to a velocity: evanescent waves do not propagate.
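The saturation can be seen numerically. The sketch below (an illustration, not from the original text) uses the standard quantum-mechanical rectangular barrier in natural units: from the textbook transmission amplitude, the transmitted wave's extra phase relative to free propagation is φ = −arctan[((κ²−k²)/2kκ) tanh(κa)], and the phase (group-delay) time is ħ dφ/dE. As the barrier width a grows, the delay stops growing - the Hartman effect:

```python
import numpy as np

hbar, m = 1.0, 1.0     # natural units
V0, E   = 2.0, 0.5     # barrier height and particle energy (E < V0), illustrative

def phase_time(a, dE=1e-6):
    """hbar * d(phi)/dE for a rectangular barrier of width a, where phi is the
    transmitted wave's phase relative to free propagation across the barrier."""
    def phi(E):
        k     = np.sqrt(2*m*E) / hbar
        kappa = np.sqrt(2*m*(V0 - E)) / hbar
        return -np.arctan(((kappa**2 - k**2) / (2*k*kappa)) * np.tanh(kappa*a))
    return hbar * (phi(E + dE) - phi(E - dE)) / (2*dE)

for a in (1, 2, 5, 10, 20, 40):
    print(f"barrier width a = {a:>3}:  phase time = {phase_time(a):.6f}")
```

A free particle would need a time proportional to a to cross the same region, so a constant delay implies an ever larger apparent speed - which is exactly why the delay should not be read as a propagation velocity.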
Casimir effect
In the Casimir effect, the modification of the vacuum by two closely spaced uncharged plates produces a measurable force between the objects. Related analyses (such as the Scharnhorst effect) predict that light crossing such a region could travel very slightly faster than c, the extra push appearing as if it came from the vacuum. Even if real, however, the effect is far too small and too constrained to send signals faster than c or violate causality.
EPR Paradox
The EPR paradox refers to a famous thought experiment of Einstein, Podolsky and Rosen whose quantum predictions were first convincingly confirmed by Alain Aspect in 1981 and 1982 in the Aspect experiment. It underlies the mechanism of quantum teleportation. Later experiments demonstrated nonlocal quantum correlations between particles separated by over 10 kilometers. But these correlations cannot be used to transmit information - see the no-communication theorem for further information - so nothing usable travels faster than the speed of light.
Delayed choice quantum eraser
Variable speed of light
In variable speed of light (VSL) cosmologies, the speed of light is postulated to have been different in the early universe. The interpretation of this statement is as follows: because c is a dimensional quantity, it has been argued, including by VSL proponent João Magueijo, that a variation of c as such cannot be measured - only variations of dimensionless constants are physically meaningful.
The fundamental quantities of measurement include length, mass, time, electric current, temperature, amount of substance, and luminous intensity; only dimensionless quantities, expressed in terms of ratios between the quantities being measured, are independent of one another. A natural system of units can be defined in which the fundamental constants drop out of measurements, known as Planck units. While it may be mathematically possible to construct such a system, whether a "varying c" has any operational meaning within it remains debated.
Back to Contents
Time-warped Fields
Time-warped field theory claims that controlled fields of closed timelike curves can move matter and information forward or backward in time.
David Lewis Anderson, USAF officer and scientist, founder of time-warped field theory.
As general relativity predicts, rotating bodies drag spacetime around themselves in
a phenomenon referred to as frame-dragging.
This alters the orbits of nearby bodies compared to the predictions of Newtonian physics. The predicted effect is small -
about one part in a few trillion.
The key characteristics of the application of time-warped fields for
time control and time travel are presented in the picture below.
This is followed by more detail describing the science below.
Frame Dragging Effect Basics
The Anderson Time Reactor operates by accessing the high energy
potential and effects, existing across two regions of twisted spacetime,
to create containable and controllable fields of closed-timelike curves.
Inertial frame-dragging arises in the vicinity of rotating massive objects. It has been investigated experimentally, partly thanks to the Gravity Probe B experiment.
Static mass increase is another effect.
Mathematical Derivation of Frame Dragging
Frame-dragging is most easily illustrated with the Kerr metric, which describes the geometry of spacetime around a mass M rotating with angular momentum J. In Boyer-Lindquist coordinates,

$$c^2 d\tau^2 = \left(1 - \frac{r_s r}{\rho^2}\right) c^2 dt^2 - \frac{\rho^2}{\Delta}\,dr^2 - \rho^2\,d\theta^2 - \left(r^2 + \alpha^2 + \frac{r_s r \alpha^2}{\rho^2}\sin^2\theta\right)\sin^2\theta\,d\phi^2 + \frac{2 r_s r \alpha c \sin^2\theta}{\rho^2}\,d\phi\,dt,$$

where rs is the Schwarzschild radius,

$$r_s = \frac{2GM}{c^2},$$

and where the following shorthand variables have been introduced for brevity:

$$\alpha = \frac{J}{Mc}, \qquad \rho^2 = r^2 + \alpha^2\cos^2\theta, \qquad \Delta = r^2 - r_s r + \alpha^2.$$

In the limit of vanishing mass (rs → 0) the Kerr metric becomes the orthogonal metric for the oblate spheroidal coordinates. We may re-write the Kerr metric in the following form

$$c^2 d\tau^2 = \left(g_{tt} - \frac{g_{t\phi}^2}{g_{\phi\phi}}\right) dt^2 - \frac{\rho^2}{\Delta}\,dr^2 - \rho^2\,d\theta^2 + g_{\phi\phi}\left(d\phi + \frac{g_{t\phi}}{g_{\phi\phi}}\,dt\right)^2,$$

which is equivalent to a co-rotating reference frame rotating with an angular speed Ω that depends on both the radius r and the colatitude θ:

$$\Omega = -\frac{g_{t\phi}}{g_{\phi\phi}} = \frac{r_s \alpha r c}{\rho^2\left(r^2 + \alpha^2\right) + r_s \alpha^2 r \sin^2\theta}.$$

In the plane of the equator this simplifies to:

$$\Omega = \frac{r_s \alpha c}{r^3 + \alpha^2 r + r_s \alpha^2}.$$

Thus an object falling freely inward acquires an angular motion in the sense of the latter's rotation; this is frame-dragging. Frame-dragging occurs about every rotating mass and at every radius r and colatitude θ.
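Plugging rough values for the Earth into the equatorial formula (an illustrative estimate, not from the original text) shows just how weak the effect is:

```python
import math

# Equatorial frame-dragging rate Omega = r_s*alpha*c / (r^3 + alpha^2*r + r_s*alpha^2),
# evaluated with approximate figures for the Earth.
G, c = 6.674e-11, 2.998e8
M = 5.972e24          # Earth's mass, kg
J = 7.07e33           # Earth's spin angular momentum, kg m^2/s (approximate)
r = 6.371e6           # Earth's mean radius, m

r_s   = 2 * G * M / c**2    # Schwarzschild radius of the Earth (~9 mm)
alpha = J / (M * c)         # spin parameter (~4 m)

Omega = r_s * alpha * c / (r**3 + alpha**2 * r + r_s * alpha**2)
years = 2 * math.pi / Omega / 3.156e7
print(f"Omega ~ {Omega:.1e} rad/s; one full drag revolution every ~{years:.1e} yr")
```

The result, around 4×10⁻¹⁴ rad/s, means spacetime at the Earth's surface is dragged around once every few million years - consistent with the "one part in a few trillion" smallness quoted above.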
The Anderson Time Reactor
Twisted spacetime around the earth, or
any rotating body, contains enormous
levels of potential energy. This is due
to the tension in the fabric of spacetime
caused by inertial frame-dragging.
Anderson calls the device claimed to tap this energy a Time Reactor or spacetime battery. It is said to access that stored energy and, through the coupling process, create dense fields of Closed Timelike Curves (CTCs).
The process is claimed to operate wherever inertial frame dragging effects are present, twisting spacetime between two regions of space.
David Lewis Anderson
created by inertial frame-dragging.
the localized point in spacetime.
accessing the significant potential energy that exists around any rotating body anywhere in spacetime.
Spacetime-Motive Force
Spectral image of energy pattern
near time reactor emitter and power
collector array showing coupling
and discharge of spacetime-motive
force including energy drift in the
direction of inertial frame dragging
of the Earth. New Mexico, USA, 2008
The coupling of these two points accesses what Dr. Anderson labeled a "spacetime-motive force", which is claimed to permit the creation and controlling of fields of closed-timelike curves. The power levels claimed exceed the world's combined power generation capabilities today.
The amount of spacetime motive force depends on several factors.
A coupling must operate between the two regions to open a "discharge path."
can be controlled in several ways through phasing and other characteristics of
the emitter and power collector array.
A Practical Approach to Achieving Time Control
Practical time control and time travel requires significantly large energy levels,
from some source, to operate effectively.
Time-warped field theory claims the ability to produce more than all of the world's combined power generation capabilities today.
Time-warped field theory demonstrates
a practical way to generate the
necessary concentrated CTCs and
high power levels, without high input
power, for practical time control and time travel.
The rotating mass creates a twist in the fabric of spacetime whose natural state of tension, the theory claims, stores energy that can be accessed.
This is said to generate very concentrated fields of closed-timelike curves near the Time Reactor's emitter and power collector array, enabling both forward and backwards time control.
Back to Contents
Circulating Light Beams
Using circulating beams of laser light, physicist Ronald Mallett proposes that one could theoretically walk through time as you walk through space. Closed timelike curves are already known to arise in certain rotating distributions of matter in Einstein's general theory of relativity; examples include rotating dust cylinders, and the rotating universe of Gödel.
The key characteristics of the application of circulating light beams for time control and time travel are presented in the picture below. This is followed by more detail describing the approach below. Mallett's calculations showed that the weak gravitational field of a circulating beam of laser light exhibited inertial frame dragging.
Ronald L Mallett
Mallett argues that time travel could, in theory, be possible. A circulating beam of light causes space inside the circle to twist, causing a gravitational force; at greater intensities it also twists time, and it is this twisting of space and time that Mallett believes will make time travel possible.
are seeking National Science Foundation funding for experiments that they hope
will support their theories.
A first experiment would look for the twisting of space by observing its effect on the spin of a neutron inside the circle. If the neutron's behavior is altered by the circulating light, Mallett's theories of time travel would be supported.
Where the experiments will go from there is unclear.
Time travel raises philosophical issues as well as physical ones. Consider the "Grandparent Paradox", in which a time traveler goes back and kills her grandparents, thereby preventing her own birth. If she were never born, then she couldn't go back in time in the first place to kill her grandparents.
"All of these things have their root in philosophy," says Mallett.
"All of these things would be philosophy without experimentation," he says.
Heisenberg's Uncertainty Principle says that we cannot simultaneously know both the position of an electron and its momentum at any given moment. Without this principle, "the universe should have collapsed immediately after it was formed," says Mallett: electrons and nuclei would be attracted to each other, collide, and destroy the atom.
Sun distorting spacetime.
Quantum mechanics also implies that we can't make a definite prediction about anything that will happen next. Therefore the parallel-universe theory works well: what will happen next can't be predicted because, in fact, everything happens next.
Back to Contents
Since the 1930s, physicists have speculated about the existence of "wormholes" in the fabric of space. Wormholes are hypothetical areas of warped spacetime with great energy that can form tunnels through spacetime - though a traveler entering one unprotected would never survive to emerge at the other end. For years, scientists believed that the transit was physically impossible; later work suggested it might in principle be done using exotic materials capable of withstanding the immense forces involved. Even then, a wormhole time machine could not be used to return to a time before the wormhole was created.
Using wormhole technology would also require a society so technologically advanced
that it could master and exploit the energy within black holes.
Hermann Weyl
Spacetime can be viewed as a two-dimensional surface which, when 'folded' over, allows the formation of a wormhole bridge.
If the wormhole is traversable, then matter can 'travel' from one mouth to the other
by passing through the throat.
While there is no observational evidence for wormholes, spacetime containing wormholes
are known to be valid solutions in general relativity.
John Archibald Wheeler
The term wormhole was coined by the American theoretical physicist
John Archibald Wheeler in 1957.
Weyl, however, had earlier considered similar ideas in connection with the mass analysis of electromagnetic field energy.
The key characteristics of the application of wormholes for time control and time travel are presented in the picture below. This is followed by more detail describing the science below.
Formalizing this idea leads to definitions such as the following, taken from Matt Visser's
Lorentzian Wormholes.
One intermediate picture is a 'baby' universe connected to its 'parent' by a narrow 'umbilicus'. One might like to regard the umbilicus as the throat of a wormhole.
Schwarzschild wormholes
Diagram of a Schwarzschild Wormhole
Schwarzschild wormholes, or Einstein-Rosen bridges, are obtained by combining models of a black hole and a white hole. While Schwarzschild wormholes are not traversable, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the 'throat' of a Schwarzschild wormhole open with exotic matter (material with negative mass or energy).
Wormholes would act as shortcuts
connecting distant regions of space-time.
By going through a wormhole, it might
be possible to travel between the two
regions faster than a beam of light
through normal space-time.
The possibility of traversable wormholes in general relativity was first demonstrated by Kip Thorne and his graduate student Mike Morris in a 1988 paper. It has also been suggested that microscopic traversable wormholes could have been naturally created in the early universe. A wormhole time machine could not reach a time before it was made, but this is disputed.
Faster-than-light travel
Special relativity only applies locally. Wormholes allow effective superluminal travel while ensuring that, locally, only subluminal (slower-than-light) speeds are used. If two points are connected by a wormhole, the time taken to traverse it could be less than the time a light beam would need if it took a path through the space outside the wormhole. As an analogy, running over a mountain may take longer than walking through a tunnel crossing it. You can walk slowly while reaching your destination more quickly because the distance is smaller.
Time travel
A wormhole could allow time travel.
This could be accomplished by accelerating one end of the wormhole to a high velocity relative to the other, and then, sometime later, bringing it back: relativistic time dilation would leave the accelerated mouth younger than the stationary one as seen by an external observer. However, synchronized clocks at each mouth will remain synchronized to someone traveling through the wormhole itself, no matter how the mouths move around. Anything which entered the accelerated wormhole mouth would therefore exit the stationary one at a point in time prior to its entry. Suppose, for example, the accelerated mouth is returned to the same region as the stationary mouth with the accelerated mouth's clock reading 2005 while the stationary mouth's clock read 2010. A traveler who entered the accelerated mouth at that moment would exit the stationary mouth in the same region, but now five years in the past.
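The five-year offset in the example is just special-relativistic time dilation. A minimal sketch, assuming (hypothetically) that the moving mouth cruises at 0.866c, for which γ = 2:

```python
import math

v = 0.866                          # cruise speed of the accelerated mouth, units of c
gamma = 1.0 / math.sqrt(1.0 - v**2)

t_stationary = 10.0                # years elapsed at the stationary mouth
t_moving = t_stationary / gamma    # proper time elapsed at the traveling mouth

print(f"gamma = {gamma:.2f}")
print(f"stationary mouth: {t_stationary:.0f} yr; traveling mouth: {t_moving:.0f} yr")
print(f"offset between the mouths: {t_stationary - t_moving:.0f} yr")   # the '5 years'
```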
Such a configuration of wormholes would allow for a particle's world line to form a closed loop in spacetime - a closed timelike curve - in apparent conflict with Hawking's chronology protection conjecture. One proposed protection mechanism is that radiation circulating between the mouths would build up without limit and destroy the wormhole; it has been argued in response that the radiation would disperse after traveling through the wormhole, therefore preventing infinite accumulation. Feedback might still be a danger in configurations involving more than one wormhole, but it is unclear whether the semi-classical approach is reliable in this case.
Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. One example of a (traversable) wormhole metric, in a common form, is the following:

$$ds^2 = -c^2\,dt^2 + dl^2 + \left(k^2 + l^2\right)\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right),$$

where l runs from −∞ to +∞ and the throat of the wormhole sits at l = 0, with radius k.
In fiction
Wing Commander ships are configured
with jump drives to propel a spacecraft
between two connecting stellar systems.
Wormholes are common features of science fiction as they allow interstellar (and sometimes inter-universal) travel within human timescales. Related fiction (such as the Wing Commander games) often uses a "jump drive" to propel a spacecraft between two fixed "jump points" connecting stellar systems. Connecting systems in a network like this results in a fixed "terrain" with choke points that can shape a story; the jump points used by Larry Niven and Jerry Pournelle in The Mote in God's Eye and related novels are an example, although the mechanism does not seem to describe actual wormhole physics.
Wormholes likewise enable interstellar travel in Lois McMaster Bujold's Vorkosigan Saga. They are also used to create
an Interstellar Commonwealth in Peter F. Hamilton's Commonwealth Saga.
In Jack L. Chalker's The Rings of the Master series, interstellar class spaceships are
capable of calculating complex equations and punching Wormholes in the fabric of
the Universe in order to enable rapid travel.
by Werner Herzog.
Mass Relay Map in the Video Game Mass Effect
The Mass Effect games feature a network of "mass relays" that allow for near instantaneous, "faster-than-light" travel from one end to the other.
The Massively Multiplayer Online Game EVE Online utilizes wormholes extensively
in the game world.
Wormholes also appear throughout the Star Trek universe.
Bajoran Wormhole in Star Trek
There, wormhole travel is possible though limited, allowing connections between regions that would be otherwise unreachable within conventional timelines. A stable wormhole is central to the Deep Space Nine series. In 1979's Star Trek: The Motion Picture the USS Enterprise was caught in a wormhole created by an imbalance in the ship's warp engines when it first achieved faster-than-light speed. Other phenomena in the franchise, while not described as "wormholes", appear to share several traits in common with them.
The 1979 Disney film The Black Hole's plot centers around a massive black hole, with only desultory mention of an Einstein-Rosen bridge.
Wormhole Transporter in the movie Contact.
In Carl Sagan's novel Contact and subsequent 1997 film starring Jodie Foster and
Matthew McConaughey, Foster's character Ellie travels 26 light years through
a series of wormholes to the star Vega.
Contact, written by Carl Sagan, is also notable for featuring no physical ETs and for carrying a spiritual message.
Sagan, it should be pointed out, was a major figure in the scientific community and
a major driving force behind the whole SETI movement, including the golden disk
carried by the Voyager spacecraft and the Arecibo message transmission, i.e.
the two SETI messages "answered" by the recent 2001 and 2002
'alien' crop formations.
Therefore, indications are that Contact may potentially
be viewed as a secondary primer.
This is an important development - because,
as those who have seen the movie should easily recall, the very first coded message, i.e.
First Contact, received by the radio telescopes in Contact was a video of Hitler
making his energetic speech at the Olympic Games.
In other words, the momentous First Contact event was somehow overlaid with
the dark imagery of Nazi Germany, just as the 2002 alien crop formation seems to
relate to this regime of terror!
This is obviously a strong confirmatory clue that
urges us to continue this line of investigation.
Another interesting aspect of Contact is the hyperdimensional transporter - a 'stargate' -
constructed from the blueprint found encoded in the extraterrestrial radio signal.
It was suggested that it was through hyperdimensional technology that mankind makes
the next evolutionary jump. Mastering the (meta-) physics of hyperspace - the realm
beyond our normal 3 spatial and 1 temporal dimensions - was therefore portrayed
as the coming gateway through which the next New World opens up for the human race.
Is it, then, just a coincidence that the Third Reich has been found heavily entangled
with antigravity and 'flying saucer' technology? The modern UFO phenomenon, in fact,
started during the time of World War II with the mysterious 'foo-fighters' and such.
Even the legendary Roswell UFO crash incident took place in 1947, just two years
after the war (when a lot of Nazi technology had been brought over to the U.S.).
Could Nazi Germany have possibly developed antigravity/hyperdimensional technology?
Could they have possessed such "Ultra-Tech" knowledge?
In the television series Farscape, wormholes account both for John Crichton's presence in the far reaches of our own galaxy and for the alien knowledge implanted in his mind for analyzing the physics of wormholes; and in the Stargate series, the stargates create artificial wormholes between planets. In the pilot episode it was referred to as an "Einstein-Rosen-Podolsky bridge".
Wormhole in movie Donnie Darko
The time-travel mechanism in the movie Donnie Darko could be perceived as wormhole technology.
to convey infantry and vehicles behind enemy lines.
In the Invader Zim episode, "A Room with a Moose" Zim utilizes a wormhole to send his
the science club found at their school.
In an episode called "Wormhole" in the 13th season of the long-running American series Power Rangers (Power Rangers S.P.D.), Emperor Grumm goes through one.
In the video game "Supreme Commander" the UEF faction utilizes aether-gates for
long distance military strikes.
Black hole in video game Spore
In the video game "Spore", the player can travel through various black holes, which act as two-way wormholes linking regions of the galaxy (such as those called the "Kali Region" or "Galileo Region") to arrive at exo-solar destinations.
This idea is abandoned after the second episode.
away from Earth use wormholes to travel to Earth.
Back to Contents
Cosmic Strings
Cosmic strings are hypothetical defects whose surrounding spacetime can, under the right conditions, contain closed time-like curves permitting backwards time travel. Some scientists have suggested using "cosmic strings" to construct a time machine: by maneuvering two cosmic strings close together - or possibly just one string plus a black hole - it is theoretically possible to create a whole array of "closed time-like curves." A craft that looped around them at the right moment might be able to emerge anywhere, anytime!
Cosmic strings would be extremely thin, extraordinarily dense, one-dimensional defects in space and time. If they exist, they may explain strange effects seen in distant galaxies, and they would be relics of the earliest moments of the universe's evolution. A straight cosmic string exerts no net gravitational pull on static surrounding matter, yet strings could still have helped seed the clumping of matter into galactic superclusters. A cosmic string's vibrations, which would oscillate near the speed of light, can cause part of the string to pinch off into loops, which then decay via gravitational radiation.
Observational evidence
Measurements of the cosmic microwave background are consistent with purely random, Gaussian fluctuations, which limits how much cosmic strings could have contributed to cosmic structure. Lensing by a cosmic string would nevertheless have a distinctive signature: two identical, undistorted images of the galaxy behind it. A candidate signal of this kind was reported in observations of the "double quasar" called Q0957+561A,B, whose light is lensed by an intervening galaxy. The gravity of this intermediate galaxy bends the quasar's light so that it follows two paths of different lengths, so brightness changes normally appear in one image before the other (about 417.1 days later). Yet brightness fluctuations in the two images occurred simultaneously on four separate occasions, which was tentatively attributed to a cosmic string crossing the line of sight, oscillating with a period of about 100 days. Decaying string loops would also be sources of gravitational waves.
String theory and cosmic strings
Early on, the "fundamental" strings of superstring theory and "cosmic" strings were regarded as entirely different objects. It was later realized that the expanding Universe could have stretched a "fundamental" string (the sort which superstring theory considers) until it was of intergalactic size. Such a stretched string would exhibit many of the properties of the old "cosmic" string variety, and so would other extended objects, such as one-dimensional D-branes (known as "D-strings").
As theorist Tom Kibble remarks,
"string theory cosmologists have discovered cosmic strings lurking everywhere in
the undergrowth".
Scientists at the LIGO Livingston Observatory in
Louisiana are searching for evidence of gravitational waves.
As yet, no confirmed detection has been performed.
Back to Contents
Tipler Cylinder
A Tipler cylinder is a massive, infinitely long cylinder spinning around its longitudinal axis, arranged in a way to achieve subluminal time travel to the past. The recipe sounds simple: take a dense cylinder, spin it up to a few billion revolutions per minute and see what happens. A spacecraft following a carefully plotted spiral course around the cylinder would immediately find itself on a "closed, time-like curve." There are hazards - the tidal forces near such an object would be ferocious, and you need to steer well clear of them in your timeship - but provided you keep your distance and stay near the middle of the cylinder, you should survive the trip!
The Tipler cylinder is considered to be a potential mode of time travel - an approach that is conceivably functional within humanity's current understanding of physics, specifically the theory of general relativity - although later results indicate it could only function if its length were infinite.
Frank Tipler
Tipler showed in his 1974 paper, "Rotating Cylinders and the Possibility of Global Causality
Violation" that in a spacetime containing a massive, infinitely long cylinder which was spinning
in the cylinder's proximity become tilted, so that part of the light cone then points backwards
along the time axis on a space time diagram.
Therefore a spacecraft accelerating sufficiently in the appropriate direction can travel backwards
through time along a closed timelike curve or CTC.
Closed timelike curve formation
using rotating cylinder model
CTC's are associated, in Lorentzian manifolds which are interpreted physically as spacetimes, with the possibility of causal anomalies such as meeting one's own past; some argue that paradoxes would be prevented by the Novikov self-consistency principle. Tipler modeled the cylinder as an infinitely long, rotating column of ordinary matter (a pressureless fluid or dust); notably, the region contains no exotic matter with negative energy.
Tipler's original solution involved a cylinder of infinite length, which is easier to analyze mathematically than a finite cylinder.
A spirallohedron of 6 hyperstrings
from 6 parallel universes
But Hawking argues that, because of his chronology protection conjecture, an infinite cylinder would be required: "if you want to build a finite time machine, you need negative energy."
Hawking's proof appears in his 1992 paper on the chronology protection conjecture, where he examines "the case that the causality violations appear in a finite region of spacetime without curvature singularities" and proves that "there will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon."
Back to Contents
Casimir Effect
It has been proposed that the negative energy density associated with the Casimir effect could stabilize a wormhole to allow faster than light travel. In physics, the Casimir effect is a physical force arising from a quantized field. Closely spaced uncharged bodies do affect the virtual photons which constitute the field, and generate a net force - either an attraction or a repulsion, depending on the arrangement. The key characteristics of the application of the Casimir effect for time control and time travel are presented in the picture below. This is followed by more detail describing the effect below. The force can be understood as arising from the change in the zero-point energy of a quantized field in the intervening space between the objects - that is, from the vacuum energy or virtual particles of quantum fields.
Hendrik Casimir
Dutch physicists Hendrik Casimir and Dirk Polder proposed the effect in 1948 while working at Philips Research Labs. Experiments have since measured the force, agreeing within 15% of the value predicted by the theory. The force only becomes significant when the distance between the objects is extremely small: on a submicron scale, the Casimir effect becomes the dominant force between uncharged conductors. In fact, at separations of 10 nm - about 100 times the typical size of an atom - the Casimir effect produces the equivalent of 1 atmosphere of pressure (101.3 kPa), the precise value depending on surface geometry and other factors. The effect is therefore increasingly relevant to microtechnologies and nanotechnologies.
Vacuum energy
In quantum field theory the vacuum is modeled as a collection of harmonic oscillators, one for each possible mode of the particular field in question. All physical predictions must be made in relation to this model of the vacuum; the summed energy of all the modes is called the vacuum energy or the vacuum expectation value of the energy. According to quantum mechanics, the minimum energy or zero-point energy that such an oscillator may have is

$$E = \tfrac{1}{2}\hbar\omega.$$

Summing over all modes formally gives an infinite vacuum energy; making proper sense of it is sometimes said to await a Theory of Everything. In the meantime the infinity is subtracted away by renormalization, though the apparently enormous residual large value causes trouble in cosmology (the cosmological constant problem).
The Casimir Effect
Simulation of Casimir Force
Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metal plates, must obey the same boundary conditions as the classical field. In a cavity this restricts the allowed mode frequencies ω_n, and the energy of the electromagnetic field in the cavity is then

$$\langle E \rangle = \frac{\hbar}{2}\sum_n \omega_n$$

- it is the same 1/2 as appears in the zero-point energy equation above. The sum diverges, and regularization schemes must be used to create finite expressions. What is physically meaningful is how the energy depends on the geometry: the Casimir force is obtained by differentiating the energy level, and, once the unconstrained vacuum contribution is subtracted, this value is finite in many practical calculations.
Casimir's calculation
In the original calculation done by Casimir, he considered the space
between a pair of conducting metal plates at distance a apart.
the standing waves are
where ψ stands for the electric component of the electromagnetic field, and, for brevity,
the polarization and the magnetic components are ignored here. Here,
of the energy per unit-area of the plate is
In the end, the limit
Expanding this, one gets
where polar coordinates
The Casimir force per unit area,
(hbar, ħ) is the reduced Planck constant, c is the speed of light,
a is the distance between the two plates.
quantum-mechanical origin.
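As a consistency check on the figure quoted earlier (about one atmosphere of pressure at a 10 nm gap), the ideal-plate formula can be evaluated directly; this snippet is added for illustration:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J s
c    = 299_792_458.0       # speed of light, m/s
a    = 10e-9               # plate separation, m

pressure = math.pi**2 * hbar * c / (240 * a**4)   # |F_c/A| for ideal plates
print(f"|F/A| at a = 10 nm: {pressure:.3e} Pa  (~{pressure/101_325:.2f} atm)")
```

The result is about 1.3×10⁵ Pa, i.e. roughly 1.3 atmospheres, consistent with the claim above.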
More recent theory
Concept of zero-point energy module
using the Casimir Effect
Casimir's idealized result was later generalized to plates of arbitrary real materials: the force can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. For plates composed of ideal metals in vacuum, the results reduce to Casimir's. Measurement of the force was first attempted by Marcus Sparnaay in 1958, with results not in contradiction with the Casimir theory, but with large experimental errors. Far more precise measurements were made from 1997 onward, notably by Steve Lamoreaux and by Umar Mohideen's group at the University of California at Riverside.
The divergent mode sums above can be tamed with various regulators. The heat kernel or exponentially regulated sum is

$$E(t) = \frac{\hbar}{2}\sum_n \omega_n\, e^{-t\omega_n},$$

where the limit t → 0⁺ is taken in the end; the divergence appears as a pole in t, together with a finite part which is shape-dependent. The Gaussian regulator

$$E(t) = \frac{\hbar}{2}\sum_n \omega_n\, e^{-t^2\omega_n^2}$$

may be used as well. The zeta function regulator

$$E(s) = \frac{\hbar}{2}\sum_n \omega_n\, |\omega_n|^{-s}$$

is analytic for large s, and on continuation to small s the divergence shows up as a pole, at s=4. The regulators mimic real physics: at high frequencies metals become transparent to photons (such as x-rays), and dielectrics show a frequency-dependent cutoff as well, so the mode sums are physically truncated.
experimental setup for the conversion of
vacuum energy into mechanical-energy.
Many physical systems exhibit variations on the theme of the Casimir effect. In nuclear physics, for example, a Casimir-type vacuum energy appears in models of the nucleon, where it depends on the topological winding number of the pion field surrounding the nucleon.
Casimir effect and wormholes
Exotic matter with negative energy density, of the kind associated with the Casimir effect, could be used to stabilize a wormhole to allow faster than light travel. This concept has been used extensively in Science Fiction.
Repulsive forces
In a few special configurations the Casimir effect can give rise to repulsive forces between uncharged objects, and certain proposed "metamaterials" would enhance such repulsion, though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium. An experimental demonstration of Casimir repulsion would be significant for silicon integrated circuit technology based micro- and nanoelectromechanical systems, and so-called Casimir oscillators.
Classical 'Critical' Casimir Effect
Until recently, evidence for the classical, thermal analogue of the Casimir effect had been indirect. The "critical Casimir" force arises in a liquid mixture near its critical point, where confinement of thermal concentration fluctuations produces a force on small immersed bodies that exhibit random Brownian motion. In one direct measurement, a micrometer-scale ball is held near a surface in such a mixture, and total internal reflection microscopy is used to detect displacements of the ball. Although the critical Casimir force might likewise be exploited in micro- and nanoelectromechanical systems, its dependence upon a very specific temperature presently limits its usefulness.
Back to Contents
by William Henry
William Henry’s book
The Stargate of Atlantis
will be published in Spring 2006. From the William Henry website.
The British historian and novelist H. G. Wells put it best when he once observed,
“There is magic in names and the mightiest among these words of magic is Atlantis…
it is as if this vision of a lost culture touched the most hidden thought of our soul.”
Of course, by far the most illustrious of all the voices in the Atlantis choir was Plato (c. 427-347 BC)
who, repeating the story of his cousin’s excursion into Egypt, reintroduced the epic story of Atlantis
to the collective human imagination. He is the father of ‘Atlantology’.
According to Manly P. Hall, Plato, whose real name was Aristocles, was initiated in the mysteries
in Egypt at the age of 49. His tale of Atlantis appears in Timaeus, in which Critias tells Socrates how, while visiting the Egyptian capital, Plato's ancestor Solon (c. 640 BC) was told by a priest:
“Oh, Solon, Solon, you Greeks are all children, and there is no such thing as an old Greek. …
hoary with age. … In our temples we have preserved from earliest times a written record of
whether it occurred in your part of the world or here or anywhere else; whereas with you
what happened in our part of the world or in yours in early times. So these genealogies
of your own people which you were just recounting are little more than children's stories. …
The age of our institutions is given in our sacred records as eight thousand years …..."
from 'Prolegomenon To Amenemope'
Plato, who is considered one of the world’s greatest scholars, left little room to doubt that
he subscribed wholeheartedly to the historicity of Atlantis and repeated cataclysms.
Nine thousand years before Plato’s conversation was recorded (c. 400 B.C.) a war took place
between an ancient pristine Athens and Atlantis. At that time
Atlantis was an island ‘larger than Libya and Asia put together’ that was overcome by earthquakes.
It is the source, says Plato, of the impenetrable mud, which prevents passage beyond the Pillars
of Heracles and across the Atlantic.
Plato’s description of Atlantis came shortly after the Jews were in exile in
Babylon (c.600 B.C.) and were taking history lessons from Sumerian texts
that contained the missing pre-history to the Hebrew Book of Genesis.
These texts speak of a massive cataclysm that destroyed an advanced race. They tell
how the Sumerian gods Enki and Ninharsag intervened in the evolution of humanity
and created an advanced civilization that was destroyed and how they assisted in
the long march to renewing civilization. These beings were the Shining Ones of Eden and
early biblical times. In Plato’s Atlantis story Enki became Poseidon, the ruler of the Atlantis.
For more than three thousand years, people have been magnetically attracted and bedazzled
by Plato’s story of Enki/Poseidon’s island Empire of Atlantis and have either dismissed it
as mere legend or have transformed this story into true hidden history.
Many feel that Atlantis is purely fable or a metaphor and that the ‘water’ that destroyed
it is simply a symbol for a new wisdom that replaced the old.
Those who dismiss the tale of Atlantis are of Aristotle’s school. He compared his teacher's story
with that of Homer’s narrative of the wall which the Greeks were said to have constructed to
protect mythical Troy, but which was destroyed by divine intervention. Aristotle’s belief was
that both Homer’s tale of Troy and Plato’s Atlantis were inventions of storytellers seeking to
embellish their story lines.
Aristotle claimed that Plato sank the island so that it could never be found. With Homer’s Iliad
as his guide, Heinrich Schliemann went hunting for ancient Troy in 1870. When he found
it new life was breathed into the belief that Atlantis was also an actual place.
Balancing Aristotle's view on Atlantis was Crantor (c. 300 B.C.), the first editor of Plato's Timaeus.
To him Plato’s story was literally and historically accurate. According to some sources,
he even sent investigators to Egypt to verify the sources. Allegedly, Egyptian priests claimed
records found on still standing ‘pillars’ verified the story of Atlantis.
Egypt is certainly the land of pillars. The stout columns of Karnak are unforgettable.
Truly awe-inspiring are those three mysterious ancient pillars we call the pyramids of Giza,
clumped together on the plateau of the gods. They represent a high science and industry
capable of creating a nearly indestructible edifice. Are these the pillars of record?
Despite the fact that nearly two thousand books have been written about Atlantis in
the twentieth century -- many written about the Atlantean origin of the Egyptian, Sumerian,
Indo-Aryan, and native South American civilizations -- we may never be able to prove to
some that Atlantis existed. Still, Atlantis reminds us of all that was once great about
the human race, and can be great again. It is a state of mind, guided by the gods, glued together
by far-flung ideas and a large measure of hope. Here’s the essential story of Atlantis as told by Plato.
“Once upon a time,” Plato begins in Critias, “ the gods divided up the Earth between them.”
Each took a territory and having done so populated it with humans, "their creatures and children."
The gods looked after human kind as shepherds look after their flocks, he notes, using mental
telepathy to guide and persuade the mortal creatures in their care.
Poseidon’s share of the god’s earthly spoils was Atlantis and he settled the children
born to him by a mortal woman in a particular district of it. At the center of the island,
near the sea, on the most beautiful plain was a hill. Here there lived one of the original earth
born inhabitants
called Evenor, and his wife Leucippe. They had an only child, a daughter
named Cleito. She was just of marriageable age when her parents died, and Poseidon was attracted
by her and had intercourse with her. He fortified the hill where she was living by enclosing it in
concentric rings of sea and land, making the place inaccessible to other humans. He equipped
the central island with godlike lavishness.
Poseidon begot five pairs of male twins, brought them up and divided the island of Atlantis
into ten parts, which he distributed between them. His oldest son, Atlas, was given his mother’s
home district. Atlantis is named for Atlas. In the center was a shrine to Poseidon and Cleito,
surrounded by a golden wall through which entry was forbidden.
For many generations, Plato tells us, a ‘divine element’ in the nature of the hybrid children of
Atlantis survived. They retained a certain greatness of mind and enjoyed a high standard of
living and lives of impeccable character.
But then, the divine element in them became weakened by frequent admixture with mortal stock
and their human traits became predominant. They ceased to be able to carry their prosperity with
moderation, says Plato. The degenerative strain began to covet power and unbridled ambition.
The god of gods, Zeus, whose eye can see such things, became aware of the wretched state of this
admirable stock. He decided to punish them and reduce them to order by discipline.
Zeus accordingly summoned all the gods to his own most glorious abode, which stands at the center of the universe and looks out over the whole realm of change, and when they had assembled addressed them as follows.
Here, Plato’s dialog cuts off. From this brief synopsis we have learned that the gods came to earth,
mated with humans, created a new race of hybrid god-men, and built a protective enclosure for them
at the center of Atlantis. After this race achieved a high degree of civilization it began to degenerate
because of a dilution of ‘divine essence’.
Zeus, living in the center of the universe, destroys Atlantis.
I have investigated this ‘divine essence’ in my book Oracle of the Illuminati. It was also the primary
subject of Gnostic text known as the Hypostasis of the Archons or The Reality of the Rulers.
In short, the Gnostics believed humans possess a divine particle that is jealously coveted by
a class of beings called archons or rulers. We have it. They don’t.
They want it.
The magical name ‘Atlantis’, I will contend, refers to more than just a vanished land (bridge)
between Europe and America or a global kingdom that may soon arise. It is our constant craving,
an irrepressible ideal; a word-symbol that conjures visions of ancient glory – a divine element --
that was lost.
Atlantis is meant to be the guiding myth of human civilization. It is the great Phoenix-bird of myths --
immolating and reconstituting over and over again. Like a psychological angel or demon relegated
to the deepest recesses of the subconscious, it will rise again.
The question is when.
As evidenced by their use of stellar symbolism in their religious art the initiates of ancient times
knew of the precession of the equinoxes, a time-keeping system which divides a 26,000 year
‘Great Year’ into 12 astrological ‘new ages’ of approximately 2,150 years each. They predicted
that humanity would make a quantum leap to a new rung of evolution’s
golden spiral during the Age of Pisces, which commenced during the time of Jesus
and another in the second millennium AD at the beginning of the Age of Aquarius.
Time reveals everything.
Our ability to grasp the importance of the Atlantis myth to current events is matched only by
our ability to absorb the astounding. Atlantis is the missing piece in the puzzle, it is the beginning
and ending of all that is, the complement to the biblical Eden and book of Revelation’s New
Jerusalem, the 12-gated city that will descend from the sky.
The theory of Atlantis has traditionally been studied almost entirely as a topic of archaeology,
geology and history. This, however, is not the whole story. Atlantis is a creation myth ala
the Old Testament book of Genesis. This means it is susceptible to many levels of interpretation,
including the historical, but also the metaphorical and allegorical.
To use a modern word, Atlantis is a meme, a “catch all” phrase or code-symbol. Like ‘America’
it is a place name, but also a mental program, a matrix or realm of possibility.
It is a spiritual goal.
Intriguingly, the word meme is composed of ME (pronounced ‘may’),
the name of the lost tablets of creation of Sumerian myth.
The Anunnaki lord Enki (Poseidon in Atlantis) guarded these tablets at this temple at Eridu.
When Plato repeated this story he unleashed a mind-altering idea. Passing through the millennia
and winding down road after road, through culture after culture like a river, this tune has gained
resonance. As a river seeks to find the ocean, I believe the Atlantis story is itself a stargate
leading us to the cosmic ocean.
If Atlantis only exists as a brain pattern or frequency nabbed out of thin air by billions of antennas
embedded in central nervous systems and replicated billions of times throughout history so be it.
Some claim ‘Jesus Christ’ is also a meme. Look at the impact such ‘vibes’ have on humanity.
Curiously, both Jesus and Atlantis are symbolized by a cross.
As is true of the meme of the Cross the symbolic representation of and substitute for Jesus,
the story of Atlantis has been passed along vertically throughout generations and horizontally
among groups of like-minded individuals, the great initiates. Both stories symbolize the triumph
of enlightenment over ignorance. Both teachings have been in a continuous state of alteration and vigorous promotion. Both deserve their acclaimed moniker of "the greatest story ever told."
Transmitted along with the meme for Atlantis is its symbol or icon known as
the Cross of Atlantis. As with the ‘Living Cross’, the Cross of Atlantis should be
viewed as a living structure, both metaphorically and technologically.
The capital of Atlantis was a maritime city with an enormous port, having alternating zones of
land and sea, divided into three zones. In the innermost ring was a sacred mountain, possibly
a volcano, where the original race of Atlas arose. The Atlanteans built a royal palace atop this hill
and it became a ‘marvel to behold for its size and beauty’. In the middle of the citadel was a temple
dedicated to Poseidon and Cleiton.
This description of two great circles of land around an island, and three great circles of water
around the land reveals the symbol for Atlantis known as the Cross of Atlantis.
The Cross of Atlantis
A symbol is a sign. As the great symbolist Jordan Maxwell notes in the introduction to
Stellar Theology, a symbol indicates direction or it informs one of ownership. A sign is
a logo that says this item belongs to this group. This Cross of Atlantis sign says
“this place belongs to the gods. Humans keep out.”
Schliemann’s City of the Golden Gates with Atlantis logo (1906).
Its main characteristic is the shape of the water canals and
the land zones together
with the bridges forming the “Cross of Atlantis”.
The Center represents the capital of Atlantis.
Although Plato does not mention this center’s name other traditions do record
the name of the capital of Atlantis.
Meru is the name of this center.
The 4th century B.C. Greek historian Theopompus tells us
that one of the names of the people
who inhabited Atlantis
were the Meropes, the people of Merou.
The Atlantis logo is composed of a sun disk or Cross of Light
embedded in concentric rings. According to Laurence Gardner, writing in
The Magdalene Legacy
, a cross within a circle is called a Rosi-Crucis –
the Dew Cup – and is the original symbol for the Holy Grail.
The concentric rings.
The concentric rings primarily symbolize vibration. In addition, they can represent a vortex.
As in the example from NASA below, the concentric rings also may represent a
two-dimensional expression of a three-dimensional experience: that of traveling through
interstellar passageways called stargates or wormholes.
This fact offers extraordinary possibilities when interpreting the Atlantis meme and is one that
I’d like to go into here.
This hypothetical spacecraft with a "negative energy" induction ring was inspired by theories describing how warped space might permit hyperfast transport to reach distant star systems. In the 1990s, NASA Glenn Research Center led the Breakthrough Propulsion Physics Project, NASA's primary effort to produce near-term, credible, and measurable progress toward the technology breakthroughs needed to revolutionize space travel and enable interstellar voyages.
The 2-D rings symbolize a 3-D vortex.
In the past few years the scientifically rooted concept of wormholes and star gates, also called
the Einstein-Rosen bridge, have become popular topics of such television shows as Star Trek:
The Next Generation
and Sliders and movies such as Stargate and Contact
which feature ancient stargate technology for opening wormholes in space/time.
When the concentric rings of Atlantis are interpreted as a vibration -- a ring or a stargate --
it suggests that as a creation tale one of the most important concepts that the theory of
Atlantis embodies is the stargate.
A modern depiction of a wormhole.
Theoretically, physicists view wormholes as time machines that may
open gateways to parallel dimensions.
They are the subjects of intense
scientific research in America and Europe.
If we apply the three-dimensional approach to the two-dimensional concentric rings
of Atlantis, we may hypothesize that it symbolizes a vortex. The Great Cross of Atlantis
may be thought of as a Great Crossing place, a place of passage, to pass through one realm
to another. Perhaps even to pass over the stars or pass through the galaxy.
As mentioned, the symbol of Atlantis is derived from the description of the enclosure
constructed by the gods to protect the original sacred hill of Atlantis. As noted by Plato,
the gods built this enclosure to keep humans out. Therefore, the sign of Atlantis could read
"Atlantis: Property of the gods. Humans keep out."
In this capacity the Cross of Atlantis is identical to the Gate of Eden. Because Adam and Eve
were disobedient to Yahweh, the god of Eden sent them out of the Garden of Eden (Genesis 3:23).
He placed at the east of Eden Cherubims, and a flaming sword which turned every way (it rotated gyroscopically), to keep the way of the tree of life.
Significantly, the Tree of Life is equated with the Cross. Therefore, the description of
the mysterious Gate of Eden is of a rotating gate. The description of the rotating gate of Eden
brings to mind a gyroscope and also the alien Stargate Machine featured in
the Warner Brothers movie Contact (1996).
A gyroscope.
The gyroscopic Stargate Machine from the movie Contact.
Two Nubian figures stand beside a pillar in the middle of the Egyptian symbol for gate.
The pillar at the center of the image is the one featured in
the drawings on the left and right.
It belonged to Osiris. It was called the Stairway to Heaven.
For more on this pillar please see my article
The Ark’s Missing Piece.
The Sumerian sun god entered Earth
through a gateway with a tree
that resembles Osiris’ pillar beside it.
I can hear the reader thinking are you bleeping kidding me? The average person, let alone
a credentialed archaeologist, when discussing such a phenomenon as Atlantis can barely
accept the idea of vanished civilization. Adding to this mix the possibility that the inhabitants
were advanced, possibly even vastly advanced from our hyper-technological civilization is
nonsense, babbling (‘Babel’ in Hebrew, a word which originally meant ‘gate’).
This is as unique as it is revolutionary. It is also quite hopeful as the concept of Atlantis rising
is the jewel of many prophecy hunters. As we will see, wormhole symbolism is rampant in
the story of Atlantis. It suggests that the rising of Atlantis has to do with the rise of
theoretical physics.
For instance, in his discourse Phaedo, Plato records the astonishing revelation of
Socrates in the last moments before his execution that,
“the true Earth itself looks from above, if you could see it,
like those twelve-patched leather balls”.
A twelve-patched leather ball describes a dodecahedron. The dodecahedron with twelve five-
sided faces was used as a teaching tool to instruct the initiate to know him or herself as
an energy system like the Earth. Incredibly, a new study of only recently available scientific data
hints that the universe is roughly shaped like a soccer ball, a dodecahedron.
Remarkably, Plato is describing Earth as a three-dimensional pentagonal web into which
the soul incarnates. The ancient Greeks likely learned from the Egyptians
that the human body is ideally structured geometrically to interface with
the dodecahedron and its pentagonal grid.
Plato was not alone in his understanding of the 12-sided Earth. The Cherokee held
a complementary belief. In addition, in the second century AD, a group of Christian Gnostics
described the sphere of Earth being surrounded by a 12-angled pyramid.
These 12 angles are described as “eyes”, “pipes,” and even more fascinating
to our investigation, as “holes” or “halls” in the Earth! These, it appears,
are the gates of the New Jerusalem.
Today, most researchers into sacred Earth energies concur that the Planetary Grid
has 12 primary vortex zones, “halls” or “holes” with the Great Pyramid at the apex control.
A network of temple sites throughout the Earth marks the Grid. These sites roughly include:
• the Great Pyramid of Giza and Heliopolis (“the City of the Sun”)
• the 1,000 foot high pyramids of southern China, 90 degrees opposite of Giza.
• Easter Island, 180 degrees opposite the Great Pyramid.
• the City of the Sun at prehistoric Cuzco, Peru.
• Teotihuacan in Mexico.
• the vortices of the Four Corners area of the USA.
• the Mississippi valley of the southeastern USA.
• Oak Island, Nova Scotia.
• the South of France, centered at Rennes-le-Chateau, France.
The 12-fold form is found in numerous mythic structures including these:
Meru, the center of Atlantis,
enfolded within a 12-petaled lotus.
Arthur’s rather Atlantean Roundtable.
The Mount of Salvation at the center of a zodiac.
The activation of this 12-fold Planetary Grid is called the ‘quickening’ of the Earth,
an increase in the vibratory levels of humanity and Earth which is thought to spark
higher intelligence and increased linkage between minds. The Quickening of our civilization
is a result of the “quick-beams” or “quick-rays” emanating from the core of our galaxy
and likely radiating through the Meru or Center of Atlantis.
Meru is called the World Mountain. Turning to Buddhist imagery we find
a remarkable image of Meru.
The logo for Atlantis (left). Mandala of Meru, the center of Atlantis,
from the top down perspective (Samuel Beal, London, 1882).
Meru appears as a concentric ring.
One wonders if this is the ring at the top of the Meru symbol .
The hourglass (or wormhole-shaped) World Mountain, Meru,
rises from the great ocean of space, (Japan, 1678).
The wormhole shape suggests Meru was a gateway to another dimension.
The Buddha’s death caused the collapse of the world-pillar Meru.
In the lower right it shatters into curious cubes that suggest crystals
or even other dimensions.
The collapse of a wormhole.
Given the puns we have explored, could the world mountain Meru actually be a whirled,
whirling or spinning, mountain? Interestingly, the Meru is believed to have been
the model for the Tower of Babel, located in Su-Meru or Sumer. Further, as spinning is
the same as turning (from tour), does this indicate the Tower of Babel was a spinning
or turning pillar?
It is said that once Buddha achieved enlightenment he placed a victory banner on
the summit of Mt. Meru, symbolizing his victory over the entire universe. Again,
Mount Meru
here is believed to be the central axis supporting the world.
The flag of victory also denotes Buddha's triumph over Mara, who personifies the hindrances on the path to enlightenment.
These are:
1). The Mara of Emotional Defilement
2). Mara of Passion
3). Mara of the Fear of Death
4). Mara of Pride and Lust!
By overcoming these hindrances, one can triumph over ignorance, and achieve nirvana.
Here, it is worth a brief diversion to note that in
calling Jesus ‘the Way’ the early Christians were following in the footsteps of the Buddhists.
Just as converts to Christianity living long after the fact altered the images of Jesus,
a similar artistic make-over was applied to Buddha. Early Buddhist art consisted of
symbolic representations of Buddha.
the Anunnaki symbol of kingship.
The Dharma Wheel symbol of
the 8-fold Path of Buddha or the Universal Law.
Early Christian 8 rayed wheels.
The 8-rayed symbol of the Nibiru (gate).
Wherever archaeologists discovered remains of the early Sumerian civilizations the symbol of the cross was found. As noted earlier, the Sumerian god Enki became Poseidon in the story of Atlantis.
Thus, dingir E.A indicated the ‘Shining Lord of Waters’. Sitchin assigns the Babylonian
version of the symbol of the Cross of Light to Enki’s home world the Nibiru, which means
‘crossing place’ or ‘gate’.
The particle accelerator of Atlantis
He told me he was asleep through most of my presentation… until I “put up that blueprint.”
“Blueprint? What blueprint?” I asked.
In my talk I had presented it as an example of a possible ‘Grailtuner’ and a candidate for the rod
shown in Jesus’ hand in early Christian art. (For more please see my article
Jesus, FDR and the Meru Superantenna.)
"That blueprint,” he responded.
He then told me what he designs for a living. Laser cannons. Light sabers.
“Your Meru drawing is a blueprint for a particle accelerator and
a particle beam weapon,” he said knowingly.
The device appears designed to rotate on the circular platform. The ‘horns’ modulate the pulse emanating from the weapon.
Late in 2004 digital artist Jack Andrews rendered the Meru drawing in 3-D on a computer.
A cyclotron consists of two large dipole magnets designed to produce a semi-circular region of uniform magnetic field, pointing uniformly downward. The cyclotron was a particle accelerator designed by Ernest O. Lawrence in 1929; its descendants were instrumental in the development of the A-bomb.
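The key fact behind the cyclotron's design is that, non-relativistically, a charged particle in a uniform field circles at a frequency f = qB/(2πm) that is independent of its speed and orbit radius, so a fixed-frequency voltage can keep accelerating it. A small illustrative calculation (the 1.5 T field strength is a hypothetical example value):

```python
import math

q = 1.602_176_634e-19    # proton charge, C
m = 1.672_621_924e-27    # proton mass, kg
B = 1.5                  # magnetic field, T (hypothetical example value)

f = q * B / (2 * math.pi * m)    # cyclotron frequency, independent of speed/radius
print(f"cyclotron frequency for a proton in {B} T: {f/1e6:.1f} MHz")   # ~22.9 MHz
```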
Glancing at the drawings it is easy to see why the scientist saw a similarity
between the two designs. The question is:
What in the world is a blueprint for a particle accelerator doing in a 2nd century
Chinese manuscript?!
Victoria LePage, citing Lama Anagarika Govinda, says Meru is,
This astonishing drawing was among the famed Dunhuang Manuscripts discovered in
the secret repository the Cave of the Thousand Buddhas early in the 20th century by
Hungarian-born explorer Marc Aurel Stein. According to the Order of Nazorean Essenes
(Essenes.net), numerous 3rd century Manichaean-Christian manuscripts were part of
the Dunhuang ‘hall of records’.
Historians claim monks in hidden monasteries in Central Asia trained Issa (“Jesus”) during
his 18 ‘missing’ silent years between 12 and 30. He traveled the mountain passes and taught
widely in the monasteries and markets. On the next page, we see a comparison of the apex of
the Meru pillar with the cone featured in a scene etched onto a golden plate and presented
to King Tutankhamun.
Author Z. Sitchin proposes this is an Anunnaki spacecraft on a silo.
Top right: A drawing etched onto a gold plate presented
to King Tutankhamun, made by Nubian goldsmiths.
Z. Sitchin proposes it is an Anunnaki craft on a silo.
The stargate model proposes it is a warp craft instead.
Note the pillar at the bottom center of the drawing.
Detail of the base of the Meru pillar rendered in 3-D by Jack Andrews.
It creates a ring or a vortex. Note, the arrangement of
the Central (or Whirling) Mountain
surrounded by four subsidiary peaks.
The Cross of Atlantis with the Meru pillar in the center by Dana Augustine (left).
The apex of the Meru pillar, the ‘warp craft’, by Jack Andrews (right).
My working hypothesis is that this diagram, schematic or blueprint is a mandala whose symbols
point to the function of the actual thing… an instrument of some kind capable of conducting
the spiritual energies of the planet.
Mandalas are “read” or memorized for visualization during meditation. They are blueprints of
the three-dimensional palace of the deity. The purpose of the mandala is two-fold: to acquaint
the student with the symbols of the deity (i.e. Buddha, Christ) and to allow the student to
“enter into the mandala”; that is, to enter the frequency or vibration in which the deity lives.
This is how the mystics of Shambhala were raised from a man to a god-man.
They mastered the Meru symbolism. This raised the vibration of their mind and body,
and they entered the mysterious realm of Shambhala.
In the diagram of the base of the Meru pillar we notice it has four sides. The arrangement of
the Central (or Whirling) Mountain surrounded by four subsidiary peaks, found in the Navajo
sand-painted mandalas, also corresponds to that of Mt. Meru, flanked by its Four Guardians
and their corresponding four subsidiary mountain peaks. In India, as with the Navajos,
the Four Guardians are also represented as either gods (Lokapalas), mountains,
snakes (Nagas) or birds (Garudas), which are frequent animal shapes assumed by them.
Moreover, each of the Four Hindu Guardians is also associated with a heraldic color,
in exact correspondence to the Navajo ones: White, Yellow, Black and Red. These, of course,
are the four races of humanity, which originated in Eden or Atlantis, according to legend.
The Four Guardians are sometimes represented by the Four Trees or by Winds, Suns, Moons,
Archangels, Bats (Vampires), Thunderbolts, as well as by Coiled or by Standing Serpents
resembling zigzag lightning and ball lightning.
The four guardians. Egypt (left).
Meru (right).
Each of these four drawings matches what we would see viewing the four ‘mountains’
and the Meru particle accelerator from the top down.
Solomon’s Temple (left). The INRI cross of Christ (right).
The Meru pillar (left). Rose-Apple Land (Right).
According to a conversation I had with a Tibetan White Sand Mandala designer, the concentric rings
at the base of the Meru pillar represent water. This is confirmed by Philip Rawson in his book
The Art of Tantra
in which he presents a painting (above right) of Rose-apple land. I am most intrigued
by this place name ‘Rose-apple land’. When we apply this place name to the perceived function of
the Meru antenna or accelerator, it suggests that it created a force field.
A particle accelerator is an electrical device that generates charged particles, such as electrons,
protons and ions, at high energy. So-called “nuclear accelerators” are used to split the atom.
They are also used to manufacture a myriad of products, including semiconductors.
One of the early particle accelerators responsible for the development of the atomic bomb,
built in 1937 by Philips of Eindhoven, currently resides
in the National Science Museum in London, England.
Particle accelerator.
The Large Hadron Collider (LHC) is being built at CERN, the European laboratory for
nuclear research. Costing $3 billion and expected to be completed in 2007, it will be the largest
particle accelerator in the world for nuclear research. While circling its 17-mile ring at nearly
the speed of light, protons will be made to collide into other particles 10 million times per second.
The main purpose is to find the Higgs boson, considered by some scientists to be the fundamental
element of matter and often called the “God Particle.”
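To make “nearly the speed of light” concrete, here is a hedged back-of-the-envelope sketch. It assumes the LHC’s published 7 TeV design beam energy and roughly 26.7 km circumference, figures not stated in the article above.

```python
# Back-of-the-envelope sketch: how close to the speed of light is an LHC
# proton, and how many laps of the ring does it complete per second?
# The 7 TeV beam energy and ~26.7 km circumference are assumed design
# figures, not taken from the article above.
PROTON_REST_ENERGY_GEV = 0.938272      # proton rest energy, GeV
BEAM_ENERGY_GEV = 7000.0               # assumed design beam energy, GeV
RING_CIRCUMFERENCE_M = 26_659.0        # assumed ring circumference, metres
C = 299_792_458.0                      # speed of light, m/s

gamma = BEAM_ENERGY_GEV / PROTON_REST_ENERGY_GEV   # Lorentz factor E / mc^2
beta = (1.0 - 1.0 / gamma**2) ** 0.5               # speed as a fraction of c
laps_per_second = beta * C / RING_CIRCUMFERENCE_M

print(f"gamma = {gamma:.0f}, v/c = {beta:.9f}")
print(f"laps of the ring per second: {laps_per_second:,.0f}")
```

Under these assumptions the protons run at better than 0.99999999 of light speed, circling the ring on the order of 11,000 times a second.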
In addition, physicists hope to use the collider to answer the following questions:
• What is mass? (We know how to measure it - but what is it?)
• What’s the origin of mass of particles? (In particular, does the Higgs Boson exist?)
• Why do elementary particles have different masses? (I.e., do particles interact with a Higgs field?)
• We know that 95% of the universe‘s mass is not made of matter as we know it. What is it?
(I.e. what is dark matter, dark energy?)
• Do supersymmetric (SUSY) particles exist?
• Are there extra dimensions, as predicted by various models inspired by string theory, and
can we “see” them?
• Are there additional violations of the symmetry between matter and antimatter?
As of June, 2005 no experiment has definitively detected the existence of the Higgs boson.
The search for the hypothetical elementary ‘God Particle’ is reminiscent of ancient metaphysics.
According to the Zohar, manna, the food of the gods, is made (detected/produced) by
a Manna machine strikingly reminiscent of the Meru pillar/particle accelerator. It starts its
journey as dew and falls to the ‘field of holy apples’.
The people who preserved these secrets called themselves the Reapers of the Holy Field.
The fact that manna was called ‘blessing’ or ‘mercy’ (meru-cy) tells us the ‘holy field’
is a beneficial vibration. The reapers are the ones who know how to harvest the (blue) apples
from this field. They use particle accelerators. (Later, we will compare the human body with
these devices.)
It is CERN’s search for extra dimensions that is most intriguing to us.
What will they find?
In addition, physicists anticipate the possibility of black hole production at the LHC in
the next few decades, if certain theories are accurate (Scientific American, May 2005).
A black hole is an area of space-time with a gravitational field so strong that nothing,
not even light, can escape it.
As we can see from the comparison with the spinning black hole presented here, there is
a striking similarity with the symbol of Atlantis.
I’ll leave you to contemplate the similarity.
This computer rendering shows matter whirling into a spinning black hole,
whose shape is distorted and not spherical.
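As a point of scale behind the black hole talk above, the size of a non-rotating black hole’s event horizon follows from a single formula, the Schwarzschild radius r_s = 2GM/c². A small sketch follows; the masses are chosen for illustration (the Sgr A* figure is the commonly quoted ~4 million solar masses, not a number from the article).

```python
# Sketch: Schwarzschild radius r_s = 2*G*M / c**2, the event-horizon radius
# of a non-rotating black hole. Masses below are illustrative.
G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
M_SUN = 1.989e30       # one solar mass, kg

def schwarzschild_radius_m(mass_kg):
    """Event-horizon radius in metres for a non-rotating black hole."""
    return 2.0 * G * mass_kg / C**2

for label, mass in [("1 solar mass", M_SUN),
                    ("Sgr A* (~4 million solar masses)", 4.0e6 * M_SUN)]:
    print(f"{label}: r_s = {schwarzschild_radius_m(mass):,.0f} m")
```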
Posted by hacksecret on Sun Mar 21, 2010 4:32 pm
by William Henry
from HyddenMysteries Website
Reconstruction: The Hanging Gardens of Babylon (Iraq)
A lot of wild speculation in metaphysical circles these days concerns the possible return of
Planet X, a mysterious roaming planet that swings to the far side of our solar system,
and is expected to return to our part of the solar system soon. In Sumerian mythology,
Planet X is called “The Lord” and is the home of a group of beings that the bible calls
Shining Ones.
According to my interpretation of Sumerian myths presented in my book
Ark of the Christos: The Mythology, Symbolism and Prophecy of the Return of Planet X
and the Age of Terror, these wise beings, the ‘angels’ of the Old Testament,
wielded enormously advanced technology.
They operated a (star) gateway linking Heaven and Earth. They genetically altered the human body
as a “resurrection machine” or “soul flower” to operate in conjunction with this gateway.
They possessed the secret science of alchemy, the transmutation of the elements, through
which they could create weapons of mass destruction and advanced human beings.
The Sumerian Sun god enters Earth through a gateway.
The Shining Ones of Planet X were last here in 3760 B.C., the time of the last visit of Anu,
the ruler of Planet X. In a morning-time ceremony in which Anu departs Earth,
E.A. and Enlil, two sons of Anu, await Anu at what is called the “golden supporter.”
They hold several objects: “that which opens up the secrets” (most certainly the Egyptian
Key of Life), “the Sun disks,” and the “splendid shining posts.” The “golden supporter”
device is sheathed in a golden skin.
The Egyptian Pillar or Tree of Life
Anu and his wife, Antu, stand before the golden supporter,
which can only be the golden Pillar or Tree of Life, a 45-foot tall
device lined with a gold alloy the Egyptians said could ‘drill’
holes in space. This device (left) was mounted on a platform
that resembles in form and likely function the Ark of
the Covenant.
The device comes alive, the gate swings open and
Anu and Antu enter the Abyss
(sometimes called the Fish of Isis).
Anu entering the gateway to return to Planet X? Z. Sitchin, When Time Began.
Incredibly, Sumerian scholar Zecharia Sitchin has recovered
what may be depictions of this scene. In this scene (right)
we see two people flanking an entrance to a gateway
in which a third person makes an entrance (or exit).
The sun and moon symbols can be seen above
this gateway.
The two guards hold devices, long poles with circular tops,
which Sitchin concludes served an astronomical purpose.
He also equates them with golden pillars at
the entrance to Solomon’s Temple in Jerusalem. 1
This being so, can we see these devices as “golden needles”?
Is the story of this golden needle woven deep into
the mythology of the ancients? Is the ‘silver thread’ woven
by this needle actually a wormhole linking Earth with the Planet of the Lord?
While it is uncertain if Planet X is headed this way in the immediate future,
one thing is certain. The return of this planet centers on the recovery of
a technology once housed at Solomon’s Temple that is used to open
a gateway linking Earth with far off regions of space. Recent military
and political activity suggests that the world powers are jockeying
for position as if the return of Planet X is imminent. The stakes are high.
This planet is at the center of a biblical prophecy known as
the “Day of the Lord.” The man in the catbird seat of this milieu
is Saddam Hussein, the mass-murdering Iraqi dictator with
the Cheshire cat smile.
Sitchin, one of the few experts on Planet X, indicates that, according to prophecy,
the last time Planet X was visible was in the 6th century B.C. The “Day of the Lord”
(the return of Planet X) occurred c. 550 B.C. when prophecies said that Planet X
was visible. 550 B.C. is an extraordinary date. In 576 B.C. the Babylonian king
Nebuchadnezzar looted and leveled the Temple of Solomon in Jerusalem,
took three temple wise men hostage, and as I will detail momentarily,
appears to have negotiated a deal with these temple priests to open a gateway to Heaven.
As the book of Daniel describes, 2 he put the three wise men from the Temple of Solomon into
a “fiery furnace.” When they reappeared not only were they in pristine condition,
they were not alone. They had the Son of God in tow.
I have interpreted this ‘fiery furnace’ as a zone of frequency or vibration that is the mouth of
a stargate or a wormhole. When the three wise men entered this gateway they traveled to
a distant place, possibly to the center of our Milky Way galaxy, or possibly to Planet X,
and returned with the Son of God. This remarkable story is of far greater significance in
the Age of Terror than most realize.
It is well known that Iraqi President Saddam Hussein has connected himself with Nebuchadnezzar,
spending over $500 million during the 1980s on the reconstruction and the re-establishment of
ancient Babylon, the capital of Nebuchadnezzar. Over sixty million bricks have been made and placed
in the walls of Babylon, each engraved with the inscription,
“To King Nebuchadnezzar in the reign of Saddam Hussein”.
In essence, as has been widely reported by prophecy watchers and international news
organizations alike, Saddam is saying he is the reincarnated Nebuchadnezzar.
He is attempting to recreate and outdo the feats of the biblical king.
As we shall see through this investigation, Saddam controls an asset infinitely more
important and powerful than oil, or even, nuclear weapons. He controls access to
the temples that housed the history humanity’s origins, and potentially, the secrets of stargates.
Buried deep beneath the sands of Iraq are the secrets of the Shining Ones of Planet X.
Saddam’s actions reveal that he knows the political value of these secrets.
Before exploring the eye-opening story of Nebuchadnezzar’s opening of a stargate
it is important to lay a foundation for this event.
Saddam is currently engaged in a massive program to convert Iraq into
a “Disneyland of the ancient Shining Ones” (my term).
Included in this program
is the recreation of ancient Sumerian temples dedicated to the Shining Ones,
the copying of ancient cuneiform tablets concerning the Shining Ones,
and the retrieval of ancient Babylon’s famed Ishtar Gate from Berlin.
One of the most startling antiquities moves Iraq is set to launch is a campaign to “revive”
the Ashurbanipal Library, the earliest systematically collected and catalogued library in
the ancient world. 3 This was the library of the king who said he could read the texts
from before the Flood!
Excavated by British archaeologists in the mid 19th century at Nineveh, the 25,000 cuneiform
tablets assembled by King Ashurbanipal are almost all now in the British Museum.
These include the famous 7th-century BC Flood Tablet, which relates part of the Epic of Gilgamesh
and contains the “backstory” to the account of the flood given in the Book of Genesis.
Gilgamesh tomb believed found
Last Updated: Tuesday, 29 April, 2003
07:57 GMT 08:57 UK
Archaeologists in Iraq believe they may have found the lost tomb of King Gilgamesh, the subject
of the oldest "book" in history. The Epic of Gilgamesh commemorated the life of the ruler of
the city of Uruk, from which Iraq gets its name. Now, a German-led expedition has discovered
what is thought to be the entire city, including the last resting place of its famous King.
"I don't want to say definitely it was the tomb of King Gilgamesh, but it looks very similar
to that described in the epic," Jorg Fassbinder,
of the Bavarian department of Historical Monuments in Munich,
told the BBC World Service's Science in Action programme.
Gilgamesh was believed to be two-thirds god, one-third human.
In the book - actually a set of inscribed clay tablets - Gilgamesh was described as having been
buried under the Euphrates, in a tomb apparently constructed when the waters of the ancient
river parted following his death.
He said the amazing discovery of the ancient city under the Iraqi desert
had been made possible by modern technology.
"By differences in magnetization in the soil, you can look into
the ground," Mr Fassbinder added. "The difference between mudbricks and sediments in the Euphrates river gives a very detailed structure."
Who can compare with him in kingliness? Who can say, like
Gilgamesh, I am king?
The Epic Of Gilgamesh
This creates a magnetogram, which is then digitally mapped, effectively giving
a town plan of Uruk.
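Fassbinder’s method is, at bottom, simple to picture: walk a grid with a magnetometer, record the local field anomaly at each point, and raster the readings into an image in which buried mudbrick stands out from river sediment. A minimal sketch of that gridding step follows, with invented survey values; it is an illustration of the idea, not the expedition’s actual processing.

```python
# Minimal sketch of assembling a magnetogram: grid scattered magnetometer
# readings (x, y, field anomaly in nanotesla) into a 2-D raster. The survey
# values below are invented for illustration.
import numpy as np

readings = [                      # (x metres, y metres, anomaly in nT)
    (0.5, 0.5, 2.1), (1.5, 0.5, 8.7), (2.5, 0.5, 9.1),
    (0.5, 1.5, 1.9), (1.5, 1.5, 8.9), (2.5, 1.5, 2.2),
]

CELL = 1.0                        # grid resolution in metres
xs = [r[0] for r in readings]
ys = [r[1] for r in readings]
nx = int(max(xs) // CELL) + 1
ny = int(max(ys) // CELL) + 1

grid = np.full((ny, nx), np.nan)  # NaN marks unsurveyed cells
for x, y, value in readings:
    grid[int(y // CELL), int(x // CELL)] = value

print(grid)   # high-anomaly cells (~9 nT) would trace buried mudbrick walls
```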
'Venice in the desert'
Iraq has long been the site of some of the most important historical finds in the world.
"The most surprising thing was that we found structures already
described by Gilgamesh," Mr Fassbinder stated.
"We covered more than 100 hectares. We have found garden structures
and field structures as described in the epic, and we found
Babylonian houses." But he said the most astonishing find was
an incredibly sophisticated system of canals.
"Very clearly, we can see in the canals some structures showing that flooding destroyed some houses, which means it was a highly developed system. "[It was] like Venice in the desert."
In April 2002 Iraqi archaeologists asked the British Museum if it would allow casts to be
made of the cuneiform tablets. Although individual tablets have in the past been copied
as casts, this would be the first time that any substantial number had been made.
British Museum keeper John Curtis, who received the request during his visit to Baghdad
last month, told The Art Newspaper that “the museum would do its best to cooperate.” 4
The proposed reconstructed library at Nineveh would hold copies of all of the British Museum’s
tablets, and it is planned as both a scholarly center and tourist attraction. Alongside the library,
the Saddam [Hussein] Institute for Cuneiform Studies will be set up as part of the University
of Mosul. Plans are also being made to excavate one of the wings of King Ashurbanipal’s palace,
in Kuyunjik Mound, where it is hoped that thousands of other tablets lie buried.
There are 10,000 archaeological sites scattered through the country, most of them not fully
excavated. In Iraq historical monuments are a matter of national security.
This is because they mark the locations of the secrets of the Shining Ones.
Hussein's rebuilt
Ishtar Gate, Babylon
It’s absolutely impossible to get close to the legendary ziggurat of Ur
without a letter of authorization. Ur, the Biblical city of the Chaldeans,
is the land of the prophet Abraham, father of the three great monotheist religions:
Judaism, Christianity and Islam.
What is presented as the ruins of his house from around 4000 BC
can also be seen near the ziggurat. The ziggurat was re-engineered by
Nebuchadnezzar. A monumental staircase - rebuilt by order of Saddam Hussein
- allows the visitor to ascend to the second stage. The facade of the ziggurat
still bears traces of American bombing during the Gulf War - or
“Mother of All Battles” as it’s known in Iraq.
According to archaeologists, gang leaders sometimes drive through
provincial towns with trucks and shovels, recruiting people to dig at poorly
guarded sites. 5 What are they after? As in the example set by
a recent robbery the monument robbers want knowledge.
Thieves smashed the doors of an Iraqi museum and a glass display case,
absconding with cuneiform tablets and cylinders from the 6th century B.C.
They left behind gold jewelry that might have tempted amateurs.
Cylinders from the 6th century B.C. are more valuable than gold for the information they record.
Sample of early cylinder seal.
The looting began in the tumultuous immediate aftermath of the Gulf War.
In the Kurdish areas of northern Iraq, and the southern area populated by
Iraq’s restless Shiite Muslims, most of the provincial museums were ransacked.
Iraqi authorities charge - and their accusations are backed up by some archaeologists
abroad - that Sumerian antiquities are
smuggled out of the country
by diplomats and U.N. relief workers.
Last summer, a landlord was cleaning out a Baghdad villa that had been recently
vacated by a diplomat. Inside, he found two cartons of archaeological fragments.
The Iraqi government hasn’t named the diplomat or his country. Saddam believes
he is destined to unite the Arab world under Islam.
By recreating ancient Babylon, and by uncovering and disseminating the texts
from ancient Iraq he can simultaneously duplicate the feats of Nebuchadnezzar
and potentially pull the intellectual rug out from underneath Judaism and Christianity.
The pre-history of both of these religions is found in Iraq. Saddam controls
the temples that housed the “back story” of Judaism and Christianity.
He seeks to retrieve the rest of the story and destroy these religions.
If Saddam launches a storm of ancient knowledge it could turn current biblical scholarship
into gumbo, and release a potentially debilitating thought virus into the Global Mind.
This may explain why the pace of archaeological work being done in Iraq is increasing
after a decade of little activity caused by the ongoing international crisis
over Baghdad’s weapons programs and the economic effects of sanctions.
Saddam assuredly knows that the release of documentation proving
Judaism and Christianity as derivatives or copies of an ancient Sumerian religion
could have a devastating effect on global affairs. Millions of people who partake in
the Christian ceremony of communion may be surprised to learn that this procedure
is derived from an alchemical teaching of E.A., the god of wisdom of ancient Sumeria,
modern-day Iraq.
An expedition sponsored this past winter by the Deutsches Archaeologisches Institut
(German Archaeology Institute) in Berlin sent a team of researchers to make a partial map
of a buried Mesopotamian city using a magnetometer. The sensitive instrument is able to
detect the presence of man-made objects beneath the soil and reveal the remnants of walls,
canals, and residential districts. (see below report).
The team zeroed in on the legendary city of Uruk, immortalized in a famous Sumerian epic poem --
“The Song of Gilgamesh.” The poem, which today is the earliest surviving work of literature,
tells the story of a Sumerian hero, Gilgamesh, whom many researchers believe may have been
one of Uruk’s early kings. In the story, Gilgamesh goes in search of the Stairway to Heaven
and the Abode of the Gods.
Akkadian seal impression (2340-2180 BC) depicting Gilgamesh’s journey beyond
the gates at the end of the Earth, guarded by doorkeepers tending ringed (or ringing) posts.
He met Utnapishtim (the Sumerian Noah) at the source of the waters. British Museum.
According to prophecy, additional primary targets of Saddam’s duplication
of Nebuchadnezzar’s feats involve London, New York and Egypt.
Each of these locations is home to obelisks known as “the images of Bethshemesh”.
These “images” are referred to in Jeremiah 43:9-13. Bethshemesh (literally
“House of the Sun God”) is in the land of Egypt. “The houses of the gods of
the Egyptians shall he burn with fire,” says Jeremiah.
“Bethshemesh, that is in the land of Egypt” is the city of Heliopolis,
which is 6 miles NE of Cairo, Egypt. It was the center of an ancient cult of
an Egyptian Sun of God, who was symbolized by the phoenix or heron.
Heliopolis was the location of the Temple of the Phoenix (or heron), the
Egyptian sun god and savior. The symbolism of the phoenix or heron was
later attached to Jesus, including the hieroglyph for the heron,
which was duplicated in the fish symbol of Jesus.
Something of profound significance appears to reside at Heliopolis.
Located just across the Nile from Giza and the pyramids, Heliopolis was
the center of Egyptian religion. It is a place of enormous mystery.
In the Bible the name given to Heliopolis was “On” or “An.” Sumerian texts
record this is also one of the names of Planet X, and was derived from
“Anu,” the name of the ruler of Planet X. 7
The Greek Heliopolis means “City of Helios,” literally the “city of the sun god Helios”
(“light of life”) 8 being the sun/son of An or Anu. Heliopolis, An, or Tula,
as it was also known, became the center for the priesthood of the sun god,
Ra, sometime around 3350 BC. 9
Tuthmosis III originally erected the obelisks of Heliopolis about 1500 BC.
Tuthmosis III is known as the ‘Napoleon of ancient Egypt’. Historians note
that his martial accomplishments matched precisely the impressive resume of
the biblical King David, the ancestor of Jesus, and father of King Solomon.
His rulership would witness the founding of one of the most mysterious dynasties
in all Egyptian history, a dynasty that included such illustrious names as Akhenaton
and Tutankhamun. According to Laurence Gardner, it was also Tuthmosis III
who established a mystery school of the original Rosicrucians, the Essene Therapeutate
– meaning ‘physicians of the soul’. 10 The Essenes later adopted this name.
If Tuthmosis III was the original biblical ‘King David’, as some scholars now suspect, 11
this would mean that the descendants of David, including Solomon and Jesus,
would have carried the ’sang azure’, the royal blue blood of the Pharaohs.
The obelisks in New York and London are the property of this family.
In this profound scenario, Jesus, who came from the Royal House of David,
potentially emerges as one of the last, if not the last, of the Egyptian Pharaohs. 12
If the Davids were a group of people (possibly pharaohs) it would not be easy for them
to simply disappear. Is there any evidence of their continued influence in worldly affairs,
even as absurd as it sounds, in the affairs of America?
As it turns out, there is evidence of the pharaohs’ continued existence as the Celtic Druids.
Scholars debate the origins of the word Druid. In Gaelic druidh means ‘wise man’ or ‘instructor’.
This is another appellation of the Shining Ones. Larousse’s World Mythology says Druid
came from daru-vid, meaning ‘skilled’. One art in which the Druids were highly skilled was
the transmutation of the elements. One classical scholar from the third century,
Diogenes Laertius, said the Druids were the cult of the Magi, the sect of the Three
Wise Men who sought out the Christ child Jesus. Does the Druid connection to
the line of David explain why they sought the Christ child?
It was at Heliopolis that the Pyramid Texts were discovered. The Pyramid Texts are
hieroglyphic writings written on the walls of the pyramids that contain the instructions
for the rebirth and resurrection of the pharaohs. It was also a center of an alchemical
priesthood that guarded the secrets of transmuting the elements. Nebuchadnezzar’s
expedition into Egypt gave the ancient fulfillment of this prophecy. However,
modern fulfillment of this prophecy will be seen in New York City and London.
The “images of Bethshemesh” (43:13) are, literally, “the obelisks of Heliopolis”.
These obelisks are also known as Cleopatra’s Needles.
Cleopatra’s Needles are two ancient obelisks presented by the khedive of Egypt
to Great Britain (1878) and the United States (1880). Each weighs about 200 tons
and stands about 70 feet tall. The British installed their obelisk on the Thames (River)
Embankment in London (1878).
The Americans installed their obelisk in Central Park in New York City (1881).
Jeremiah prophesied the destruction of these obelisks by the “king of Babylon”. 13
The concern the United States must contend with is that this modern Nebuchadnezzar
can obtain nuclear weapons - almost at will - through the Russian or Chinese black market.
According to the blueprint provided by biblical prophecy, he may choose to use these weapons
against these three targets. The primary target is Jerusalem. Nebuchadnezzar is
the only foreign invader to destroy Jerusalem. Saddam believes he must match him.
The United States and Israel are prepared to use nuclear force against Iraq if necessary.
Revelation 18:21-23, in fact, tells of the future and utter annihilation of the City of Babylon:
“…and the voice of the bridegroom and of the bride shall be heard no more at all in thee.”
Bible prophecies concerning Babylon’s destruction have not yet been fulfilled. Isaiah 13:19 says,
“And Babylon, the glory of kingdoms, the beauty of the Chaldees’ excellency,
shall be as when God overthrew Sodom and Gomorrah.”
Isaiah’s “burden” for Babylon in chapter thirteen also included a terse warning: “Howl ye;
for the day of the Lord is at hand; it shall come as a destruction from the Almighty” (verse six).
Sodom and Gomorrah were erased from the map by a premeditated and preventable thunderbolt
from the sky of atomic proportions. In a scene reminiscent of the obliteration of Hiroshima
and Nagasaki, at dawn one morning, as Abraham looked upon the valley below, fire came down
from “the Lord out of heaven.” 22
“The smoke of the land went up like the smoke of a furnace.” 23
Sodom and Gomorrah were gone. God promised Israel that they would someday take up this taunt
against the King of Babylon (Saddam Hussein?), “How hath the oppressor ceased!
the golden city ceased!” (Isaiah 14:4).
Here, the utter destruction of the city of Babylon is linked to,
1.) God’s overthrow of Sodom and Gomorrah (a blast of light from heaven) and
2.) the Day of the Lord (the return of Planet X).
If and when such an event took place, everything Saddam has rebuilt could suddenly be
reduced to vitrified green glass that no one could even go near for thousands of years.
As we can see, Saddam is a far more complex figure than the comic book head of
the ‘axis of evil’ presented on the evening news. He is a man in search of himself and
the alchemical secrets of the ancient past. He may also be in a race against time.
If Planet X is due to make a rendezvous with Earth in the near future
he does not have much time.
Let us now turn to the ultimate quest of this man: the duplication of Nebuchadnezzar’s stargate
encounter. Nebuchadnezzar’s stargate encounter began in 576 BC when he conquered Jerusalem, 14
flattened its walls, stripped Solomon’s Temple of all its treasure, 15 set the city ablaze, and
returned home to Babylon with the treasure of the Temple 16 and a group of royal prisoners of war. 17
The Temple priests supposedly were forewarned before the attack. To save the Ark of the Covenant
the priests took it to ‘Solomon’s Vault’ beneath the Temple, sealed themselves inside, and
committed ritual suicide so no one would know where they hid it. Nebuchadnezzar also took
captive thousands and thousands of Jerusalem’s citizens, including the holy men at the Temple,
and forcibly moved them to Babylon, the ruins of which are buried beneath the sands of Iraq
about twenty miles from modern-day Baghdad. During this Babylonian Captivity many
strange things happened.
Included among the captives were three wise men from the Temple, a young man and
‘master magician’ named Daniel, and another prominent prophet, Ezekiel (who had visions of
‘the kingdom of Heaven on Earth’ while imprisoned in Babylon and later left the planet in
what many consider to be a starship). Surprisingly, the Jews discovered that the Babylonians
possessed long sought answers concerning their past. This is because the Jewish and
Babylonian histories emerged from the same original source in Sumeria.
From the Sumerian stories the Hebrews found missing pieces to their own Flood story
and story of Creation. With a few name changes here and there both traditions match.
Most scholars now believe it was here in Babylon during the captivity of Nebuchadnezzar
that the first five books of the Old Testament, including Daniel and Ezekiel, were constructed
(with a lot of help from the original Sumerian stories). Most Christians are shocked to learn
the stories that form the foundation of their religion are copies of original stories that
belonged to another time, place and people. Only the names have been changed.
As important as it is to realize the context in which these books were assembled --
the captivity of their authors -- it is more important to realize that they are
a compilation of actual history, mythology, literary devices and fond memories
of a past that never was Hebrew, but Sumerian.
Separating Hebrew from Sumerian is crucial.
The original stories provide valuable and accurate knowledge.
The marriage between the Sumerian and Hebrew mythologies was a match made in heaven.
It was as if each carried the missing half to the other’s message. What both sides apparently
wanted was access to the stargate of the Shining Ones. This was the gift of
the gods of Planet X.
As we shall see, Nebuchadnezzar’s story bears this out.
On entering ancient Babylon the visitor passed the E-mah, the temple of the mother goddess
Ninmah or Ninharsag, which has recently been restored by Saddam. 18 E-mah is a highly
significant word. It is the Hebrew word for ‘terror’. Beyond the E-mah was Babylon’s most
important temple, the Esagila, the dwelling-place of the sun god Marduk, the Babylonian name
for Planet X. Nebuchadnezzar says that he covered its wall with sparkling gold in order to make it
shine like the sun. In this temple was found a chapel or sanctuary for Marduk’s father, E.A,
whom Zecharia Sitchin upholds as the genetic engineer responsible for the creation of humanity.
Model of Babylon
Second only to Nebuchadnezzar’s famous Hanging Gardens, Babylon’s most famous monument
was the staged tower or ziggurat, Etemenaki, ‘the house that is the foundation of heaven and earth’,
situated north of Marduk’s temple. The Marduk temple housed the golden image of Bel (‘the Lord’)
and a strange golden table, which combined weighed nearly fifty thousand pounds of solid gold!
Nebuchadnezzar’s Hanging Gardens of Babylon were one of the seven wonders of the ancient world.
Growing on a huge seventy-five-foot-high artificial seven-stage mountain, the fantastic
ziggurat of Marduk, the well-known Tower of Babel, which Nebuchadnezzar restored,
the Hanging Gardens could be seen for fifty miles across the flat desert. The seven terraces held trees,
vines and flowers and were watered by a system of wells and fountains. King Nebuchadnezzar had
this wonder built for his queen who longed to return to her mountain homeland.
Babylon must have been a spectacular, perhaps unbelievable, sight to Daniel and the rest of
the Jewish captives, sort of like placing a war-torn refugee child in Disneyland today.
In its glory the city of Babylon was the greatest city in Mesopotamia --
the center of the new world order.
It was a veritable playground for the gods.
Babel originates from the word Bab-li, which in the Babylonian language meant ‘Gate of God’.
This is our first tip-off that Nebuchadnezzar attempted to construct a means -- perhaps even
a stargate -- to transcend earth life and travel the cosmos. Our primary interest is in the image
of gold Nebuchadnezzar set up in Babylon. 19
This isn’t some ancient status symbol the king kept on his desk. The image is a massive
three score (60) cubits high and six cubits wide. A cubit is 18 inches, making the image
1,080 inches (three score, or sixty, times 18 inches) high. 1,080 inches is 90 feet, about
the size of a nine-story building! Undoubtedly, this massive structure could be seen from miles around.
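The unit conversion is worth checking explicitly; a one-line sketch follows, assuming the 18-inch cubit the text uses (ancient cubits varied).

```python
# Check of the image's height, assuming the 18-inch cubit used in the text
# (ancient cubits varied between roughly 17 and 21 inches).
CUBIT_IN = 18
height_cubits = 60                          # "three score" cubits (Daniel 3:1)
height_inches = height_cubits * CUBIT_IN    # 1,080 inches
print(height_inches, "inches =", height_inches / 12, "feet")   # -> 90.0 feet
```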
Nebuchadnezzar could not make this gleaming image (the Pillar of Osiris) work. This was a major failure.
Like the tribal leader David, who ruled Jerusalem five hundred years before him, the king had planned
to unify his kingdom, and the golden image was the unifying force. He tried using music to get it to work.
He demanded that when the people heard the music play they were to fall down and worship
the golden image (as if this act would impress the lifeless heap). 20 If they didn’t they would be tossed
into a burning fiery furnace. 21
Nebuchadnezzar acknowledged that Daniel had immense prophetic gifts, including the ability to
interpret dreams. In chapter four of Daniel, he is asked to interpret a dream in which Nebuchadnezzar saw:
‘a tree in the midst of the Earth, and the height thereof was great. The tree grew,
and was strong, and the height thereof reached into heaven, and the sight thereof
to the end of the Earth’. 22
There was great fruit in this tree and the birds of Heaven lived in its branches. From this tree
the king saw a “watcher” and a “holy one” from Heaven emerge. They told him to destroy the tree,
and leave its ‘stump’ in the Earth. This was a confusing dream to the king, but not to us.
The “watchers” is another name for the Shining Ones. It is also the Egyptian name for
“divine being” or “god”, NTR, or neter, which means “one who watches”. Neter-neter land is
the name of the place in the stars where these beings dwell. Sumeria, another earthly land
of the Shining Ones, was known as the land of ‘ones who watch’.
Why didn’t the Watchers want Nebuchadnezzar to join them in Neter-neter land
(Peter Pan’s Never Never Land)? Could it be that it was because Nebuchadnezzar
was not one of them
(but Daniel was, which explains why he could interpret their symbols)?
What did they mean by leaving the ‘stump’ of the ‘tree’ in the ground?
Nebuchadnezzar wanted to know. Did this dream foretell disaster for a project represented
by the tree? If so, what is the specific project that is in danger?
The answer to this question is found in the fact that Old Testament scholars universally agree
that Daniel was compiled over a long period of time and does not represent the visions of
one particular person. Daniel (‘God is my judge’) was not a personal name. The question of
who or what, then, the Daniel was takes on paramount importance.
In her Woman’s Encyclopedia of Myths and Secrets, 23 Barbara Walker answers this question
by saying ‘Daniel’ was a title used to distinguish a group of people, “a person of the Goddess
Dana or Diana”. Dana was Jacob’s daughter, his 13th child. Her name means ‘light of An’.
There’s your trouble. That is exactly the same meaning as the Celtic Tuatha De’ Danann
(‘Children of the Goddess Diana’). In Irish history, the mystical Tuatha De’ Danann are described
as heaven-sent ‘gods, and not-gods’. They are compared with the Sanskrit deva
(shining one, god) and adeva (Devil), which became daeva (devil) in Persian.
As we have seen, devas also link with terror.
These connections are important not only for their value in decoding the story of Daniel,
but also for another important reason. According to Sir Laurence Gardner, Mary Magdalene,
as the Miriam, was the Head Sister of the Order of Dan. 24
Her order appears to be the continuation
of the mysterious Tuatha De’ Danann. Mary or Mari’s title ‘Magdalene’ means ‘she of
the temple-tower’, a reference to Jerusalem’s temple and its three towers. 25 Ultimately,
as Nebuchadnezzar’s story continues, along came three wise Jews from Jerusalem. 26
Unfortunately for Nebuchadnezzar, they refuse to worship the hulking image or the god of
the Babylonian king. What is more, the three insult Nebuchadnezzar by betting the king that
their god will save them from the fiery furnace. 27
Clearly, the three wise men from the Temple of Solomon possess crucial knowledge
that Nebuchadnezzar needs to make this golden gadget work. He was successful in firing
up the fiery furnace component of the ‘image’. But beyond that he was stuck.
He needed the ‘open sesame’. What is this gadget, this golden image of which I speak?
This holy object is likely the Axis Mundi, the Pillar of God.
If it is correct to associate Pillar with the forty five-foot ‘tree’ bearing the ‘great fruit’ of
Nebuchadnezzar’s dream, it now makes perfect sense why Nebuchadnezzar would wish to
involve Daniel in this project. It was the sons of the Shining Ones of D’Anu, the people of Daniel,
who had originally brought this device to Earth. The angel who appeared to the king was related to
the Daniel. There is no way in hell they would want Nebuchadnezzar to enter their realm uninvited.
The Trial of the Three Wise Men from the Temple of Solomon. The Three Wise Men are depicted rejecting the Image of Baal -- a head atop a pillar. From the Catacomb of Sts.
Mark and Marcellian, Rome, 4th century.
In the story from Daniel the three wise men refuse to spill the beans to
Nebuchadnezzar: what is undoubtedly the ‘open sesame’ to open the (star) gateway.
Furious, the king orders that the three be cast into the ‘fiery furnace’.
“The three men put on their coats, their hats and their other garments, and were cast in the midst of the burning fiery furnace” says Daniel 3:21. “Their coats, their hats, and their other garments,” you say? This is an immensely meaningful statement.
Why put on any clothes at all if your body is about to be
translated into a toasted marshmallow by the fiery furnace?
These garments turn out to be more than just standard-issue loungewear at the Temple of Solomon or the garb of hostages
in Babylon.
That is, if they turn out to be anything like the coat, the hat and
the other garments the goddess Mari is wearing in The Goddess
with a Vase, discovered at her temple at Mari in 1934.
Mari is shown wearing her Shugurra helmet (‘a hat’). Literally
translated, Shugurra means ‘that which makes go far into
the universe’. 29 It may be more than coincidence, or sheer poetry,
that Shu-gurr-a resolves to Sgr A, the name of the radio source
believed to lie at the exact core of our galaxy. Also resident
at the Galactic Core is a black hole. It is possible this is also
the “helmet of salvation” described in Ephesians 6:17.
Mari’s Shugurra helmet.
Mari also wears a heavy full-length coat and other garments. This coat is called
the PALA garment. This entire get-up is fantastically similar in description to that
described in chapter 6 of Ephesians. There, in addition to the “helmet of salvation,”
spiritual questers are encouraged to “put on the whole armor of God, that ye may be
able to stand against the wiles of the Devil… [and to wrestle] against spiritual
wickedness in high places.” 30
The principalities and powers are the angelic spiritual forces that work as heavenly governors
and messengers in the heavenly realms (i.e. galactic beings). This is exactly the angelic level of
the Shining Ones. Apparently, some of these are harmful creatures that seek to attach themselves
to human souls. At Armageddon, Jesus promises to send his angels to sever the wicked from
among the just. And then shall cast them (both?) into the fire. 31
Does the “armor of God” uniform here described -- including the Shugurra Helmet of Salvation
and the PALA coat -- simultaneously help to protect us from harmful spirits, and make the cosmic
connection with a stargate? It appears so, for Ephesians next describes a person standing
in front of the Ark of Covenant, the soul-transportation device that opens this fiery furnace!
We know this because the person is wearing the Breastplate of Righteousness. Their feet are
“shod with preparation for the Gospel of Peace.” Above all they take the shield of faith,
the Helmet of Salvation and the (S)word of the Spirit, which is the word of God. All of these
appear to be necessary for soul travel through the stargate to Tula.
What happens to those who don the “armor of God” get-up and walk through the fiery furnace?
Where do they go? Through the black hole? This detail is omitted. However, after the three wise men
from Solomon’s Temple entered the fiery furnace, Nebuchadnezzar and all the king’s men cautiously
approached the lethal furnace. He asks that the three men appear to him. When they do,
the king (and I’m certain all the assembled) stands utterly astonished.
He’s expecting nasty flame-broiled corpses. Instead, he sees the three wise men are in perfect condition!
“Did we not cast three men bound into the midst of the fire?” asks the baffled king. 32 He certainly did.
To add to the high strangeness of this event, a fourth person now accompanies them!
However, this is not just any man. Nebuchadnezzar believes this fourth man is an angel.
Not just any angel either. The fourth man is like the Son of God! 33 Is this Jesus, the Son of God?
Is Nebuchadnezzar telling us the three wise men returned from their stargate travels with Jesus
in tow five hundred years before his appearance in the New Testament?
It is quite conceivable because, understandably, at this point Nebuchadnezzar was convinced:
the god of the three wise Jews is the God. He proclaims that if anyone speaks against this God,
he will cut them to pieces, and their houses will be made into dunghills. 34 Next,
he promoted the three wise men.
The Bible does not say what happened after this Son of God arrived. I believe, however,
that tremendous knowledge must have been gained from his appearance. This knowledge is
capable of altering the balance of power in the world. If Saddam Hussein truly believes himself
to be Nebuchadnezzar, he most certainly would be interested in acquiring this knowledge,
which is among the highest secrets of the
Shining Ones.
In Ark of the Christos I take a closer look at this exotic occurrence, and the possible stargate
knowledge gleaned from this episode. Understanding the science of stargates makes one
a master of the laws of nature. It also provides one the capability of manufacturing weapons systems
that make nuclear weapons look like firecrackers in comparison. This is just one more reason
Saddam is in the crosshairs of the world.
Posted by hacksecret on Sun Mar 21, 2010 5:06 pm
"Imagine this scenario. The U.S. government obtains intelligence that hidden somewhere
In this scenario, when Nibiru (the alleged "twelfth planet"--J.T.) is closest to Earth,
the Anunnaki" will "take the opportunity to travel to Earth through that same stargate
and will set up their encampment in Iraq."
"With time running out, President Bush invades Iraq. American scientists raid
the (Iraqi national) museum and close the stargate, thus frustrating the grandiose plans
of the Anunnaki and making the world safe for the New World Order."
According to Dr. Salla, "it is probably exactly what happened!"
"Dr. Salla, an Australian national, obtained his M.A. in philosophy from the University
of Melbourne and then his doctorate (Ph.D.) in Government from the University of
Queensland in 1993. After spending two years as an Associate at the Centre for Middle Eastern
and Central Asian Studies in Australia, he joined the faculty of the Department of Political Science
at Australian National University in Canberra as a lecturer. In 1996, he came to the U.S. and
gained an academic appointment at the School of International Service at American University
in Washington, D.C. where he remained until 2001."
"Dr. Salla's Iraq stargate theory is expounded in a paper written for publication."
"He believes that the U.S., Russia, Germany and France have been aware that the Anunnaki
left behind some very high-tech apparatus, and possibly weaponry, when they abandoned
the Earth around 1,700 B.C., and that Saddam Hussein had been getting assistance from
Russian, German and French archaeological teams for years in an attempt to unravel,
and perhaps reverse-engineer, this apparatus, which Salla claims is probably far in
advance of any technology we might have obtained from the Grays from Zeta Reticuli"
and which is supposedly warehoused at Area 51 in Nevada.
Some in the Middle East's UFO community theorize that Task Force 20, which has been
conducting commando raids north of Baghdad in recent weeks, is looking for this stargate
in addition to elusive former dictator Saddam Hussein.
According to Mahmoud al-Diwaniyahi, the alien stargate may be hidden away in one of
several locations. Some possible locations include:
• An ancient crypt beneath the Sumerian ziqqurat (pyramid) of Dur Kurigalzu, near Baghdad.
• The so-called "Dark Ziqqurat" of Enzu, located in the as-Zab as-Saghir (Little Zab) river valley, which once was the lair of Gimil-ishbi, a Sumerian sorcerer of 3,000 B.C.
• Beneath the ancient fortress of Qalaat-e-Julundi, near Zarzi in the Little Zab River valley, north of Mosul.
• Saddam's reputed underground base in Al-Ouja, 3 kilometers (2 miles) north of Tikrit, which was built for him by "the Zarzi aliens," extraterrestrials whose UFO crashed in Iraq in December 1998 and who were
given sanctuary in Zarzi by Saddam.
(Editor's Comment: Dr. Salla's paper puts a whole new spin on that UFO "crash."
Did the Zarzi aliens land in Iraq deliberately? Were they planning all along to reactivate
the ancient Anunnaki stargate?)
"Task Force 20 has been relentless in their search," Mahmoud reported, "But Saddam
continues to elude them. He is said to never spend more than three or four hours in
any one location. Saturday (August 9, 2003) he released another tape urging the Iraqi people
to resist the occupation."
(Editor's Comment: Another audiotape!? Saddam has more recordings out than Mariah Carey.)
As always, Baghdad continues to seethe with rumors and urban legends.
"As a U.S. soldier peered out of a passing tank, a young engineering student
and a retired accountant contemplated one of the more common questions on
"'With those sunglasses, he can definitely see through women's clothes,'
said the engineering student, Samer Hamid. 'It makes me angry. We are afraid
to take our families out on the street.'"
"The retired accountant, Hekmet Tinber Hassan, smiled and said it was a
baseless rumor, just like the widespread story that Saddam Hussein had been
secretly working for America and was now at a CIA safe house.
"'I do not believe Saddam is in America,' Hassan said, 'I heard he went to Tel Aviv.'"
"In the urban legends flourishing in Baghdad, the soldiers triumphed thanks to
Saddam's treachery and to U.S. technology. The legend about the X-ray sunglasses
might have evolved from reports about the soldiers' night-vision goggles,
or maybe just from the imposing Terminator image of the soldiers."
"Compared with the (Iraqi) residents, who cope with the 120-degree heat by
staying in the shade and dressing in light clothes and sandals, the soldiers
have the look of robotic aliens as they patrol in the midday sun wearing
combat boots, helmets and armored vests."
"Some Iraqis say the soldiers take special pills that keep them cool, but the most
common theory is that they have portable air-conditioners--usually said to be
placed inside the vests (flak jackets - J.T.), but sometimes placed in the helmet
or even the underwear."
"'There is fluid circulating throughout the underwear,' said Hamid the engineering
student. 'I am not sure of the exact mechanism, but we all know the Americans have
very sophisticated technology.'"
American "GIs are said to be so demoralized that 30 percent of them have already
abandoned their posts and paid $600 apiece to escape by an underground railroad
to Turkey or Syria.""Others have supposedly converted to Islam and fled to
marry women in Saudi Arabia."
Most disturbing of the urban legends is the one dealing with "concealed casualties."
"They say Saddam's alien friends used their bio- engineering to create
giant scorpions. There were rumors of these creatures in the as-Zab
as-Saghir before the war,
" Mahmoud reported.
"They say some Americans have been killed by these creatures.
The rumor in Fallujah is that the Americans are hiding the casualties by
dumping large numbers of soldiers' bodies each night into the Tigris River."
(Editor's Comment: There are also rumors that Saddam and Elvis ride around in
a shibriyeh on the back of a giant scorpion. I'll believe in these cow-sized scorpions
when I see a photo of one beside a Humvee. On the other hand, the Pentagon has been
known to deep-six unwelcome casualties before. In May 1944, the U.S. Army buried
3,000 men in unmarked graves in southern England. The men were casualties of
the ill-fated Exercise Tiger, a rehearsal for D-Day.)
Iraqi teenagers have taken a liking to the GI sunglasses.
"Like Zahra Thaer, 13. She was walking down a sidewalk in Baghdad wearing
a new pair of wraparound sunglasses."
"'These are the latest style,' she said, explaining that she had been lucky
to get one of the last pairs left in the store."
"Did she believe the soldiers' glasses gave them X- ray vision?"
"'I am not so sure about their sunglasses,' she said, 'But I know about the
helmet. Inside each helmet is a map showing the soldier the location of
every house in Iraq. My friends at school told me about it.'"
(See Atlantis Rising No. 41 for September/October 2003, "The Exopolitical Factor".
Also the Duluth, Minn. News-Tribune for August 8, 2003, "Mistrust of U.S. soldiers runs
deep - right down to the underwear," pages 1A and 12A. Also thanks to
Mahmoud al-Diwaniyahi
for the additional information.)
Edited by FAS
STAR GATE was one of a number of "remote viewing programs" conducted under
a variety of code names, including SUN STREAK, GRILL FRAME, and CENTER LANE
by DIA and INSCOM, and SCANATE by CIA.
The research program was conducted separately from the operational unit.
This effort was initiated in response to CIA concerns about reported Soviet investigations
of psychic phenomena. Between 1969 and 1971, US intelligence sources concluded that
the Soviet Union was engaged in "psychotronic" research. By 1970, it was suggested that
the Soviets had achieved breakthroughs, even though the matter was considered speculative,
controversial and "fringy."
Remote viewing research began in 1972 at SRI, conducted by Russell Targ and Harold Puthoff,
the latter formerly with the NSA and at the time a Scientologist.
The effort initially focused on a few "gifted individuals" such as New York artist Ingo Swann,
an OT Level VII Scientologist. Many of the SRI "empaths" were from the Church of Scientology.
Individuals who appeared to show potential were trained and taught to use their talents for
"psychic warfare." The minimum accuracy needed by the clients was said to be 65%, and
in the later stages of the training effort, this accuracy level was "often consistently exceeded."
GONDOLA WISH was a 1977 Army Assistant Chief of Staff for Intelligence (ACSI) Systems
Exploitation Detachment (SED) effort to evaluate potential adversary applications of remote viewing.
Building on GONDOLA WISH, an operational collection project was formalized under
Army intelligence as GRILL FLAME in mid-1978. Located in buildings 2560 and 2561
at Fort Meade, MD, GRILL FLAME (INSCOM "Detachment G") consisted of soldiers and
a few civilians who were believed to possess varying degrees of natural psychic ability.
The SRI research program was integrated into GRILL FLAME in early 1979, and hundreds
of remote viewing experiments were carried out at SRI through 1986.
In 1983 the program was re-designated the INSCOM CENTER LANE Project (ICLP).
Ingo Swann and Harold Puthoff at SRI developed a set of instructions which theoretically
allowed anyone to be trained to produce accurate, detailed target data. The unit used this new
collection methodology against a wide range of operational and training targets. The existence of this
highly classified program was reported by columnist Jack Anderson in April 1984.
In 1984 the National Academy of Sciences' National Research Council evaluated
the remote viewing program for the Army Research Institute. The results were unfavorable.
When Army funding ended in late 1985, the unit was re-designated SUN STREAK and transferred
to DIA's Scientific and Technical Intelligence Directorate, with the office code DT-S.
Under the auspices of the DIA, the program transitioned to Science Applications International
Corporation [SAIC] in 1991 and was renamed STAR GATE. The project, changed from a SAP
(Special Access Program) to a LIMDIS (limited dissemination) program, continued with
the participation of Edwin May, who presided over 70% of the total contractor budget
and 85% of the program's data collection.
Over a period of more than two decades some $20 million were spent on STAR GATE and
related activities, with $11 million budgeted from the mid-1980's to the early 1990s.
Over forty personnel served in the program at various times, including about 23 remote viewers.
At its peak during the mid-1980s the program included as many as seven full-time viewers and
as many analytical and support personnel. Three psychics reportedly worked at Ft. Meade
for the CIA from 1990 through July 1995. The psychics were made available to other government
agencies which requested their services.
Participants who apparently demonstrated psychic abilities used at least three different
techniques at various times: Coordinate Remote Viewing (CRV), the original SRI-developed
technique, in which viewers were asked what they "saw" at specified geographic coordinates;
Extended Remote Viewing (ERV), a hybrid relaxation/meditative-based method; and Written
Remote Viewing (WRV), a hybrid of channeling and automatic writing, which was introduced
in 1988, though it proved controversial and was regarded by some as much less reliable.
By 1995 the program had conducted several hundred intelligence collection projects involving
thousands of remote viewing sessions. Notable successes were said to be "eight martini" results,
so-called because the remote viewing data were so mind-boggling that everyone had to go out
and drink eight martinis to recover.
Reported intelligence gathering successes included:
• Joe McMoneagle, a retired Special Project Intelligence Officer for SSPD,
SSD, and 902d MI Group, claims to have left Stargate in 1984 with a Legion
of Merit Award for providing information on 150 targets that were
unavailable from other sources.
• In 1974 one remote viewer appeared to have correctly described an airfield
with a large gantry and crane at one end of the field. The airfield at the
given map coordinates was the Soviet nuclear testing area at Semipalatinsk
-- a possible underground nuclear testing site [PNUTS]. In general, however,
most of the receiver's data were incorrect or could not be evaluated.
• A "remote viewer" was tasked to locate a Soviet Tu-95 bomber which had
crashed somewhere in Africa, which he allegedly did within several miles of
the actual wreckage.
• In September 1979 the National Security Council (NSC) staff asked about a Soviet
submarine under construction. The remote viewer reported that a very large, new submarine
with 18 to 20 missile launch tubes and a large flat area at the aft end would be launched
in 100 days. Two subs, one with 24 launch tubes and the other with 16, were
reportedly sighted in 120 days.
• One assignment included locating BG James L. Dozier, who had been
kidnapped by the Red Brigades in Italy in 1981. He was freed by Italian
police after 42 days, apparently without help from the psychics. [according
to news reports, Italian police were assisted by "US State and Defense
Department specialists" using electronic surveillance equipment, an apparent
reference to the Special Collection Service]
• Another assignment included trying to hunt down Gadhafi before the 1986
bombing of Libya, but Gadhafi was not injured in the bombing.
• In February 1988 DIA asked where Marine Corps COL William Higgins was being
held in Lebanon. A remote viewer stated that Higgins was in a specific
building in a specific South Lebanon village, and a released hostage later
was said to have claimed that Higgins had probably been in that building at that time.
• In January 1989 DOD was said to have asked about Libyan chemical weapons
work. A remote viewer reported that a ship named either Patua or Potua would
sail from Tripoli to transport chemicals to an eastern Libyan port.
Reportedly, a ship named Batato loaded an undetermined cargo in Tripoli and
brought it to an eastern Libyan port.
• Reportedly a remote-viewer "saw" that a KGB colonel caught spying in South
Africa had been smuggling information using a pocket calculator containing a
communications device. It is said that questioning along these lines by
South African intelligence led the spy to cooperate.
• During the Gulf War remote-viewers were reported to have suggested the
whereabouts of Iraq's Saddam Hussein, though there was never an independent
verification of this finding.
• The unit was tasked to find plutonium in North Korea in 1994, apparently
without notable success.
• Remote viewers were also said to have helped find
SCUD missiles and secret
biological and chemical warfare projects, and to have located and identified
the purposes of tunnels and extensive underground facilities.
The US program was sustained through the support of Sen. Claiborne Pell, D-R.I., and
Rep. Charles Rose, D-N.C., who were convinced of the program's effectiveness. However,
by the early 1990s the program was plagued by uneven management, poor unit morale,
divisiveness within the organization, poor performance, and few accurate results.
The FY 1995 Defense Appropriations bill directed that the program be transferred to CIA,
with CIA instructed to conduct a retrospective review of the program. In 1995 the American Institutes
for Research (AIR) was contracted by CIA to evaluate the program. Their 29 September 1995 final report
was released to the public 28 November 1995.
A positive assessment by statistician Jessica Utts, that a statistically significant effect had been
demonstrated in the laboratory [the government psychics were said to be accurate about 15 percent
of the time], was offset by a negative one by psychologist Ray Hyman [a prominent CSICOP psychic
debunker]. The final recommendation by AIR was to terminate the STAR GATE effort.
CIA concluded that there was no case in which ESP had provided data used to
guide intelligence operations.
Enhancing Human Performance, National Academy Press, 1988
Remote Viewers: The Secret History of America's Psychic Spies, by Jim Schnabel
U.S. Secretly Used Psychics in Spy Cases, AP, byline: Richard Cole, 11/29/95
Psi Explorer - Project StarGate
Farsight Institute
Posted by hacksecret on Sun Mar 21, 2010 5:33 pm
The CIA always seems to have strange "secret weapons" up its sleeve, and "psychic spies" were one of the secret weapons the CIA kept hidden for the times when it was cornered and conventional methods would not work.

The US government is thought to have begun studying psychic communication, commonly abbreviated ESP (Extra-sensory perception), in earnest around World War II, when intelligence indicated that the dictator Adolf Hitler was planning his campaigns on the advice of occultists and clairvoyants (fortune-tellers). Fearing it would fall behind Germany in the war, the US set up a task force to study and research the use of psychic powers.

In 1952 the US Department of Defense received a report that it might be possible to develop "psychic power" into a weapon of war, and research continued until, in 1962, one report caught the eye of the chief of the CIA's technical services unit. Its author was Stephen I. Abrams, director of a parapsychology research laboratory at Oxford University in England. Stephen went on to run psychic experiments under the patronage of Project ULTRA, a secret CIA research program. The CIA had also experimented with narcotics as a means of controlling human behavior; those experiments were part of the ULTRA effort, under the name Project MKULTRA. Project ULTRA was subdivided into many smaller projects.

Over the following 10 years Stephen experimented and kept records, but he could neither control nor explain psychic powers, and the research into paranormal psychic phenomena appeared to have failed, until two talented physicists revived the project in the early 1970s.
Dr. Russell Targ and Dr. Harold E. Puthoff were deeply interested in psychic phenomena. In April 1972 Dr. Harold introduced a man to CIA officials, saying that the man held experimental evidence of Russian psychokinesis (moving objects by mental power). After seeing the evidence on film, the CIA became interested and agreed to let Dr. Harold and Dr. Russell carry out research.

Scanate, short for Scan by Coordinate, was a project researching paranormal psychic phenomena. Its first experiment took place in August 1972: a man who claimed psychic powers was tested by being asked to describe objects hidden by CIA officers, and he described them in accurate detail.

In the summer of 1973 the CIA began experimenting with remote viewing, seeing distant places by psychic power.
Remote Viewing Timeline
They trained a psychic named Pat Price. They gave Pat only the location of a certain structure, with no map, told him to project his mind toward it, and asked him to say immediately what it was; the target was a vacation house on the US East Coast. But what Pat described was something else entirely. To check the result, CIA officers drove out to the vacation house and found a tightly guarded government installation nearby. They hurried back and asked Pat to describe the building's details once more. Pat described the details and shape of the building accurately, which pleased the CIA greatly, and the experiments were developed further in various ways, both to strengthen the psychics' powers and to build confidence in the accuracy of the information before it was put to operational use.
Psychic spy Pat Price (center) and Dr. Harold E. Puthoff (far right)
The remote viewing experiment with the most striking result had Pat describe the city of Semipalatinsk in Kazakhstan, a site about which nothing was known and which still awaited reconnaissance; the CIA code-named it URDF-3 (Unidentified Research and Development Facility-3). Pat reported seeing an enormous crane, so large that a man standing beside it would reach only the height of its wheel axle, and such a crane really was there. Pat's information was not complete (there were derricks he never mentioned), but it was enough to impress Dr. Kenneth A. Kress, the CIA's specialist on paranormal phenomena.

When Dr. Russell and Dr. Harold brought Pat to meet Dr. Kenneth, Dr. Kenneth wanted to test Pat's ability and asked whether Pat knew who he was. Pat answered without hesitation, "Yes." Dr. Kenneth asked whether he knew his name. Pat answered, "Ken Kress." And his occupation? Pat answered confidently, "He works for the CIA." Answering those questions correctly was no ordinary feat, because Dr. Kenneth was an undercover CIA officer whose name appeared on no roster of CIA personnel. Dr. Kenneth decided to accept Pat Price as a CIA "psychic spy." Then Dr. Kenneth laid a photograph on the table and asked Pat whether he had ever seen this place. Pat answered that of course he had. Dr. Kenneth pressed on: in that case, why had he not mentioned these four derricks when he was tested? Pat replied, "Wait, let me check again."

Pat closed his eyes, then put on a pair of glasses that he said helped him "see" more clearly. Within a few seconds Pat answered that he had not seen the derricks because they were no longer there. Two or three weeks later a check back at URDF-3 found that two of the derricks had indeed been dismantled, though traces of them remained. The conclusion was that the information from Pat's remote viewing was partly reliable and partly not.
In late 1974 Pat was ordered to use his psychic powers to probe the hidden bases of an underground army in Libya. Pat pointed out a spot believed to conceal an SA-5 missile test site, which matched intelligence the Libyan army had been given, and he also identified places where the underground forces were based. The Libyan army asked Pat to describe what was going on at those hiding places; the questions Libya wanted answered were written down and sent to Pat, but sadly Pat died of a heart attack first. The operation was cancelled, and no further information was passed from the CIA to Libya.

The US military also brought remote viewing into the Vietnam War, using psychics as "point men" to lead troops through enemy territory and avoid booby traps and ambushes, which did a great deal for the morale of soldiers operating under such conditions. Sometimes, though, the "psychic spies" needed guides of their own. Such a guide was code-named a "beacon." His job was to travel through an area and serve as the "eyes" of a psychic spy hundreds or thousands of miles away. The psychic spy would look through the beacon's eyes and describe the place, sometimes narrating aloud while someone took notes, but more often sketching the place himself.
For example, in a 1987 psychic test run by Dr. Edwin C. May, the "psychic spy" Joseph McMoneagle was ordered to track a "beacon." He was told only that the beacon was within a radius of about 100 miles of the test center, and the only personal detail he was given was the beacon's social security number. Dr. Edwin ordered Joseph to describe what the beacon was seeing at 16:00. Joseph drew the scene, including details that might not be visible to the naked eye, and when the photograph the beacon took at 16:00 was compared with Joseph's sketch, the two were remarkably similar.

The "Stargate" project (Stargate Project) had three main lines of work: "Operations," using remote viewing to collect intelligence on foreign countries; "Research and Development," laboratory studies seeking new ways to improve remote viewing for espionage; and "Foreign Assessment," analysis of foreign countries' efforts to improve and develop psychic powers in any form that might affect (US) national security.
David Morehouse

David was a CIA psychic spy who exposed the secrets of the Stargate project in a Discovery Channel documentary some years ago. David was assigned to dig up information on past espionage cases in which the US government was completely in the dark and had no source of information left except sending a "psychic spy" back into the event: for example, the case of the Russian fighter attack on Korean Air Lines Flight 007, to determine whether the airliner's intrusion had been deliberate, or the case of Iraqi troops setting fire to Kuwait's oil wells before retreating in defeat.

Before David became a psychic spy he was simply a ranger with an outstanding record. But during a joint exercise with Jordanian rangers in mid-1980 he was accidentally shot in the head. He was not seriously hurt, because the bullet could not penetrate the steel helmet he was wearing, but the impact knocked him out for a long while. While unconscious he saw something he did not know whether to call an angel or a demon; what was certain was that it was not human, and that was the first time he felt something abnormal had happened to him. From then on he kept seeing strange visions, which he called nightmares, until he could stand it no longer and went to see an army doctor. When he told the doctor what was happening to him, the account of these strange events was passed up to senior officials, and eventually the staff of the Stargate project took an interest. That is how he came to join the Stargate project as a "psychic spy."

David's story may sound like fiction, and so does the background of Joseph McMoneagle. Back in the 1970s, Joseph met the supernatural for the first time when he was admitted to a US Army hospital in Europe. He went through a "near-death" experience, and after he recovered the phenomenon stayed with him, leaving him with "psychic powers" beyond those of ordinary people.
David served as a "psychic spy" for more than 10 years before deciding to quit. In 1993 David broke the military's secrecy agreement by revealing his experiences in the Stargate project. He gave interviews to every branch of the media: newspapers, radio, and television. Not content with that, he also wrote the book Psychic Warrior.

The story of the "psychic spies" and the Stargate project is not mere fantasy: on April 17, 1995, President Bill Clinton signed Executive Order Nr. 1995-4-17, authorizing the release of Stargate project documents to the public.

And that is one more story the CIA long denied as "untrue," only for the truth to poke its tail out in the end. You can be sure, though, that what the CIA has agreed to reveal is only the tip of the iceberg; most of the secrets remain tucked away in the five-sided building called "the Pentagon."
Posted by hacksecret on Sun Mar 21, 2010 5:36 pm
And "we" are not the only ones interested in its mysteries. At the bottom of the ocean in this area hides an enormous "base." Would you like to know who they are, and what they are doing?

The Area 51 of the Caribbean

We also know that Area 51 (AREA 51) is a US military base built by the CIA to test spy aircraft (stealth planes), and, beyond that, a place for studying and researching alien spacecraft. If Area 51 is the most tightly guarded and most secretive patch of ground on Earth, then it would not be wrong to call the area beneath the Atlantic Ocean around the Bermuda Triangle "the Area 51 of the Caribbean."

The Atlantic Undersea Test and Evaluation Center, nicknamed AUTEC, was built about 30 years ago on Andros Island, part of the Bahamas in the Caribbean Sea. What AUTEC tests and develops is weapons of war.

AUTEC was sited there because Andros Island lies next to the "Tongue of the Ocean" (TOTO), a very deep basin about 110 nautical miles (204 km) long, 20 nautical miles (37 km) wide, and roughly 700-1,100 fathoms (1,280-2,012 meters) deep.
True, the AUTEC command center visible on Andros Island covers only about one square mile, but remember that it is surrounded by the unseen "Tongue of the Ocean," an area of fully 1,670 square miles, and rumor has it that AUTEC's security is every bit as tight as that of the "real" Area 51 in Nevada. For a base in the middle of the ocean to be guarded that closely is, I would say, no ordinary thing.

In 1997 a party of duck hunters strayed into the "restricted zone." They came upon an unusually dense wall of trees; suddenly they were struck in the stomach and forced to lie flat on the ground! The unlucky hunters realized almost at once that the wall of trees was fake, camouflage for a military camp. Another group of soldiers "parted" the tree wall, "hauled" them into the camp, and locked them in cells. Hours later a soldier came to unlock the cells, said that they believed the hunters had not intended to trespass into the restricted zone, and released them.

Many people claim to have often seen mysterious flying objects around Andros Island, each sighting featuring flying of a kind you will have seen nowhere else: the objects could make instantaneous, sharp-angled turns at will. Once, while a Vietnamese businessman was sailing his yacht off the coast of Andros Island, he glimpsed something about 2 miles away that he took for a whale. But when he had sailed to within about half a mile, he found it was no whale but some man-made "craft" of an extremely futuristic design. Suddenly it shot off across the surface at blistering speed, plowing through the big waves and vanishing before his eyes.
Dr. Michael Preisinger, a German historian and scuba diver, was sent by his company to work as a diving instructor for its clients in Nassau, the capital of the Bahamas. Dr. Michael and his family arrived in the Bahamas in 1995. His clients were staff of various German airlines who came to learn scuba diving for use in developing the airlines' tourism programs.

While Dr. Michael was ferrying his clients by boat to the training sites, several of them reported that their compasses were misbehaving, spinning so erratically that they could not tell one direction from another. Being a historian, Dr. Michael noted down the boat's position each time a compass acted up, hoping that one day he could investigate. When he did return to one of those spots, he found his own compass spinning just as his clients had described. Dr. Michael could not understand what was happening to the compasses or what was causing it. He kept his puzzlement to himself until, months later, he put the problem to physicists around the world, and one of them observed that the only thing that could disturb a compass in that way is a "wormhole."

A "wormhole" is like a "black hole": it forms, collapses, forms again, and collapses again, in an endless cycle. Let me explain how a "wormhole" works in simple terms. Take a sheet of paper and let it stand for the surface of the Earth. Draw one dot near the top edge of the paper and one near the bottom, and call them A and B. Now bend the paper into a U shape. To travel from A to B we normally have to travel along the Earth's surface (the sheet of paper), but if a "wormhole" opens between A and B, we can pass through it from A to B in far less time.
Rob Palmer, a world-class British diver and director of the Blue Holes Foundation, studied the strange caves he had found beneath the ocean. Rob believed these caves were interdimensional passages that aliens used for travel. Rob explained that the Bahamas are built of massive limestone, and it is this limestone that gives rise to the caves. With no rivers flowing on the islands, the limestone caves keep growing larger and longer; the scarcity of fresh water on the Bahamas lets them grow bigger still, especially during periods when the ocean stood very low, such as the Ice Age. Sometimes cave collapses propagated all the way up to caves at the island surface, and when the ocean later rose these caves were drowned. Most cave mouths now lie beneath lakes on the islands or in the barrier reef surrounding them.

"Blue holes" exist in great numbers around the Bahamas, especially along the Tongue of the Ocean. They were first discovered in the late 1950s by George Benjamin, a Canadian diver who explored the underwater caves around Andros Island and introduced the world to these strange formations, which came to be called "Benjamin's Blue Hole." Inside, small passages lead on to further caves hidden within the large one. George dived these caves to a depth of 300 feet, and the films he recorded made the underwater caves of the Bahamas known. His explorations ended in the mid-1970s, when a fellow explorer died inside an underwater cave in the "Tongue of the Ocean."

Around the same time an American diver named Sheck Exley found an underwater cave 8 kilometers long, the longest in the world; 10 years later the area became "Lucayan National Park."

Rob Palmer, for his part, began exploring the underwater caves in 1981. He found many more of them, along with several species of creatures that scientists had thought extinct for 150 million years.
On May 5, 1997, Rob and three teammates traveled to explore an underwater cave in the Red Sea, in Egypt. When the boat reached the dive site, Rob dropped over the side and plunged straight down toward the cave. His teammates were startled, because a diver normally swims from the boat to the face of the cave, called the "wall," and only then descends. Still puzzled, the three hurried down after him. One turned back at a depth of 64 meters; the other two followed Rob down to 99 meters without catching up to him, then decided to return to the surface. His friends waited on the boat for many hours, but Rob never came back up. What caused Rob's death remains a mystery to this day; no one can say why he dove straight down from the boat instead of first swimming to the target point.

Dr. Michael believes Rob Palmer was "eliminated" by agents of the US government because he endangered the secret the government is trying to keep beneath the ocean in the Bahamas. He may have been hypnotized into doing something that would cost him his life, or his diving equipment may have been tampered with; Dr. Michael believes the "blue holes" Rob was exploring are side effects of the forming and collapsing of "wormholes."

It will never be easy to dig up information on "AUTEC," the "Area 51 of the Caribbean," since it sits in the middle of the ocean and extends beneath the seafloor as well. But it is worth asking why US military bases so often sit where no one can see them.
13361157ec09ce42 | Physics LibreTexts
Identical Particles Revisited
For two identical particles confined to a one-dimensional box, we established earlier that the normalized two-particle wavefunction \(\psi (x_1, x_2)\), which gives the probability of finding simultaneously one particle in an infinitesimal length dx1 at x1 and another in dx2 at x2 as \(|\psi (x_1, x_2)|^2 dx_1 dx_2\), only makes sense if \(|\psi (x_1, x_2)|^2 = |\psi (x_2, x_1)|^2\), since we don’t know which of the two indistinguishable particles we are finding where. It follows from this that there are two possible wave function symmetries: \(\psi (x_1, x_2) = \psi (x_2, x_1)\) or \(\psi (x_1, x_2) = -\psi (x_2, x_1)\). It turns out that if two identical particles have a symmetric wave function in some state, particles of that type always have symmetric wave functions, and are called bosons. (If in some other state they had an antisymmetric wave function, then a linear superposition of those states would be neither symmetric nor antisymmetric, and so could not satisfy \(|\psi (x_1, x_2)|^2 = |\psi (x_2, x_1)|^2\).) Similarly, particles having antisymmetric wave functions are called fermions. (Actually, we could in principle have \(\psi (x_1, x_2) = e^{i\alpha} \psi (x_2, x_1)\), with α a constant phase, but then we wouldn’t get back to the original wave function on exchanging the particles twice. Some two-dimensional theories used to describe the quantum Hall effect do in fact have excitations of this kind, called anyons, but all ordinary particles are bosons or fermions.)
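A quick numerical check can make the exchange condition concrete. The sketch below is an illustration added here (not part of the original text); the particle-in-a-box eigenfunctions and the sample points are arbitrary choices. It verifies that both the symmetric and the antisymmetric combinations satisfy \(|\psi (x_1, x_2)|^2 = |\psi (x_2, x_1)|^2\), while an unsymmetrized product does not:

```python
import numpy as np

# Single-particle eigenfunctions of a 1D box of unit length (standard result).
def phi(n, x):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

x1, x2 = 0.2, 0.7  # two arbitrary points inside the box

def psi_sym(a, b):   # symmetric (bosonic) combination of states n = 1, 2
    return (phi(1, a) * phi(2, b) + phi(1, b) * phi(2, a)) / np.sqrt(2)

def psi_anti(a, b):  # antisymmetric (fermionic) combination
    return (phi(1, a) * phi(2, b) - phi(1, b) * phi(2, a)) / np.sqrt(2)

print(np.isclose(psi_sym(x1, x2)**2,  psi_sym(x2, x1)**2))   # True
print(np.isclose(psi_anti(x1, x2)**2, psi_anti(x2, x1)**2))  # True
# A plain product fails the identical-particle condition:
print(np.isclose((phi(1, x1) * phi(2, x2))**2,
                 (phi(1, x2) * phi(2, x1))**2))              # False
```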
To construct wave functions for three or more fermions, we assume first that the fermions do not interact with each other, and are confined by a spin-independent potential, such as the Coulomb field of a nucleus. The Hamiltonian will then be symmetric in the fermion variables,
\[ H = \frac {\vec {p}_1^2}{2m} + \frac {\vec {p}_2^2}{2m} + \frac {\vec {p}_3^2}{2m} + \cdots + V(\vec {r_1}) + V(\vec {r_2}) + V(\vec {r_3}) + \cdots \]
and the solutions of the Schrödinger equation are products of eigenfunctions of the single-particle Hamiltonian \(H = \frac {\vec {p}^2}{2m} + V(\vec {r}) \). However, these products, for example \(\psi _a (1) \psi _b (2) \psi _c (3) \) do not have the required antisymmetry property. Here a, b, c, … label the single-particle eigenstates, and 1, 2, 3, … denote both space and spin coordinates of single particles, so 1 stands for \((\vec {r}_1, s_1)\). The necessary antisymmetrization for the particles 1, 2 is achieved by subtracting the same product wave function with the particles 1 and 2 interchanged, so \(\psi _a (1) \psi _b (2) \psi _c (3) \) is replaced by \(\psi _a (1) \psi _b (2) \psi _c (3) - \psi _a (2) \psi _b (1) \psi _c (3) \), ignoring overall normalization for now.
But of course the wave function needs to be antisymmetrized with respect to all possible particle exchanges, so for 3 particles we must add together all 3! permutations of 1, 2, 3 in the state a, b, c, with a factor -1 for each particle exchange necessary to get to a particular ordering from the original ordering of 1 in a, 2 in b, and 3 in c. In fact, such a sum over permutations is precisely the definition of the determinant, so, with the appropriate normalization factor:
\[ \psi _{abc} (1,2,3) = \frac {1}{\sqrt {3!}} \begin {vmatrix} \psi _a (1) & \psi _b (1) & \psi _c (1) \\ \psi _a (2) & \psi _b (2) & \psi _c (2) \\ \psi _a (3) & \psi _b (3) & \psi _c (3) \end {vmatrix} \]
where a, b, c label three (different) quantum states and 1, 2, 3 label the three fermions. The determinantal form makes clear the antisymmetry of the wave function with respect to exchanging any two of the particles, since exchanging two rows of a determinant multiplies it by -1.
We also see from the determinantal form that the three states a, b, c must all be different, for otherwise two columns would be identical, and the determinant would be zero. This is just Pauli’s Exclusion Principle: no two fermions can be in the same state. Although these determinantal wave functions (sometimes called Slater determinants) are only strictly correct for noninteracting fermions, they are a useful beginning in describing electrons in atoms (or in a metal), with the electron-electron repulsion approximated by a single-particle potential. For example, the Coulomb field in an atom, as seen by the outer electrons, is partially shielded by the inner electrons, and a suitable V(r) can be constructed self-consistently, by computing the single-particle eigenstates and finding their associated charge densities.
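Both properties of the Slater determinant (sign flip under particle exchange, and vanishing when a state is doubly occupied) are easy to verify numerically. The following sketch is added here for illustration; the "orbitals" are just 1D-box eigenfunctions standing in for the space-spin states a, b, c:

```python
import math
import numpy as np

# Toy orthonormal "orbitals" (1D box eigenfunctions) standing in for
# the single-particle space-spin states a, b, c.
def orb(n, q):
    return math.sqrt(2.0) * math.sin(n * math.pi * q)

occupied = (1, 2, 3)         # the occupied states a, b, c
coords = (0.15, 0.40, 0.80)  # coordinates of particles 1, 2, 3

def slater(states, qs):
    # psi(1,2,3) = (1/sqrt(3!)) * det[ orb_state(q) ]
    M = np.array([[orb(n, q) for n in states] for q in qs])
    return np.linalg.det(M) / math.sqrt(math.factorial(len(qs)))

psi = slater(occupied, coords)
# Exchanging particles 1 and 2 flips the sign (antisymmetry):
swapped = slater(occupied, (coords[1], coords[0], coords[2]))
print(np.isclose(swapped, -psi))                    # True
# Occupying the same state twice gives zero (Pauli exclusion):
print(np.isclose(slater((1, 1, 3), coords), 0.0))   # True, identical columns
```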
Space and Spin Wave Functions
Suppose we have two electrons in some spin-independent potential V(r) (for example in an atom). We know the two-electron wave function is antisymmetric. Now, the Hamiltonian has no spin-dependence, so we must be able to construct a set of common eigenstates of the Hamiltonian, the total spin, and the z-component of the total spin.
For two electrons, there are four basis states in the spin space. The eigenstates of S and Sz are the singlet state
\[X_s (s_1, s_2) = |S_{tot} = 0, S_z = 0 > = \left ( \frac {1}{\sqrt {2}} \right ) \left ( |\uparrow \downarrow > - | \downarrow \uparrow > \right ) \]
and the triplet states
\[X^1_T (s_1, s_2) = | 1, 1 > = |\uparrow \uparrow >, | 1, 0 > = \left ( \frac {1}{\sqrt {2}} \right ) \left ( | \uparrow \downarrow > + | \downarrow \uparrow > \right ) , |1, -1> = | \downarrow \downarrow >\]
where the first arrow in the ket refers to the spin of particle 1, the second to particle 2.
It is evident by inspection that the singlet spin wave function is antisymmetric in the two particles, the triplet symmetric. The total wave function for the two electrons in a common eigenstate of S, Sz and the Hamiltonian H has the form:
\[\Psi (\vec {r_1}, \vec {r_2}, s_1, s_2 ) = \psi (\vec {r_1}, \vec {r_2}) X (s_1, s_2 ) \]
and \(\Psi\) must be antisymmetric. It follows that a pair of electrons in the singlet spin state must have a symmetric spatial wave function, \( \psi (\vec {r_1}, \vec {r_2}) = \psi ( \vec {r_2}, \vec {r_1})\), whereas electrons in the triplet state, that is, with their spins parallel, have an antisymmetric spatial wave function.
Dynamical Consequences of Symmetry
This overall antisymmetry requirement actually determines the magnetic properties of atoms. The electron’s magnetic moment is aligned with its spin, and even though the spin variables do not appear in the Hamiltonian, the energy of the eigenstates depends on the relative spin orientation. This arises from the electrostatic repulsion energy between the electrons. In the spatially antisymmetric state, the two electrons have zero probability of being at the same place, and are on average further apart than in the spatially symmetric state. Therefore, the electrostatic repulsion raises the energy of the spatially symmetric state above that of the spatially antisymmetric state. It follows that the lower energy state has the spins pointing in the same direction. This argument is still valid for more than two electrons, and leads to Hund’s rule for the magnetization of incompletely filled inner shells of electrons in transition metal atoms and rare earths: if the shell is half filled or less, all the spins point in the same direction. This is the first step in understanding ferromagnetism.
Another example of the importance of overall wave function antisymmetry for fermions is provided by the specific heat of hydrogen gas. This turns out to be heavily dependent on whether the two protons (spin one-half) in the H2 molecule have their spins parallel or antiparallel, even though that alignment involves only a very tiny interaction energy. If the proton spins are antiparallel, that is to say in the singlet state, the molecule is called parahydrogen. The triplet state is called orthohydrogen. These two distinct gases are remarkably stable—in the absence of magnetic impurities, para–ortho transitions take weeks.
The actual energy of interaction of the proton spins is of course completely negligible in the specific heat. The important contributions to the specific heat are the usual kinetic energy term, and the rotational energy of the molecule. This is where the overall (space×spin) antisymmetric wave function for the protons plays a role. Recall that the parity of a state with rotational angular momentum l is \((-1)^l\). Therefore, parahydrogen, with an antisymmetric proton spin wave function, must have a symmetric proton space wave function, and so can only have even values of the rotational angular momentum. Orthohydrogen can only have odd values. The energy of the rotational level with angular momentum l is \(E^{rot}_l = \frac {\hbar ^2 l (l +1)}{2I}\), so the two kinds of hydrogen gas have different sets of rotational energy levels, and consequently different specific heats.
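To see how the even/odd restriction shows up thermodynamically, one can sum the rotational partition function over only the allowed levels. The sketch below is illustrative only; the characteristic rotational temperature θ_rot ≈ 87.6 K for H2 is a literature value quoted from memory, and nuclear-spin degeneracy factors are omitted:

```python
import numpy as np

theta_rot = 87.6  # K, approximately hbar^2/(2*I*k_B) for H2

def z_rot(T, start):
    # start = 0 sums even l (parahydrogen); start = 1 sums odd l (orthohydrogen)
    l = np.arange(start, 80, 2)
    g = 2 * l + 1  # rotational degeneracy of level l
    return np.sum(g * np.exp(-theta_rot * l * (l + 1) / T))

for T in (50.0, 300.0):
    print(f"T={T:5.0f} K  Z_para={z_rot(T, 0):7.3f}  Z_ortho={z_rot(T, 1):7.3f}")
# At low T the two sums differ sharply, hence the different specific heats.
```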
Symmetry of Three-Electron Wave Functions
Things get trickier when we go to three electrons. There are now 2³ = 8 basis states in the spin space. Four of these are accounted for by the spin 3/2 state with all spins pointing in the same direction. This is evidently a symmetric state, so must be multiplied by an antisymmetric spatial wave function, a determinant. But the other four states are two pairs of total spin ½ states. They are orthogonal to the symmetric spin 3/2 state, so they can’t be symmetric, but they can’t be antisymmetric either, since in each such state two of the spins must be pointing in the same direction! An example of such a state (following Baym, page 407) is
\[X (s_1, s_2, s_3) = | \uparrow _1 > ( \frac {1}{\sqrt {2}}) (| \uparrow _2 \downarrow _3 > - | \downarrow _2 \uparrow _3 >)\]
Evidently, this must be multiplied by a spatial wave function symmetric in 2 and 3, but to get a total wave function with overall antisymmetry it is necessary to add more terms:
\[\Psi (1,2,3) = X (s_1,s_2,s_3) \psi ( \vec {r}_1, \vec {r}_2, \vec {r}_3) + X (s_2,s_3,s_1) \psi ( \vec {r}_2, \vec {r}_3, \vec {r}_1) + X(s_3,s_1,s_2) \psi (\vec {r}_3, \vec {r}_1, \vec {r}_2)\]
(from Baym). Requiring the spatial wave function \(\psi (\vec {r}_1, \vec {r}_2, \vec {r}_3 )\) to be symmetric in 2, 3 is sufficient to guarantee the overall antisymmetry of the total wave function Ψ. Particle enthusiasts might be interested to note that functions exactly like this arise in constructing the spin/flavor wave function for the proton in the quark model (Griffiths, Introduction to Elementary Particles, page 179).
For more than three electrons, similar considerations hold. The mixed symmetries of the spatial wave functions and the spin wave functions which together make a totally antisymmetric wave function are quite complex, and are described by Young diagrams (or tableaux). There is a simple introduction, including the generalization to SU(3), in Sakurai, section 6.5. See also §63 of Landau and Lifshitz.
Scattering of Identical Particles
As a preliminary exercise, consider the classical picture of scattering between two positively charged particles, for example α-particles, viewed in the center of mass frame. If an outgoing α is detected at an angle θ to the path of ingoing α #1, it could be #1 deflected through θ, or #2 deflected through π - θ. Classically, we could tell which one it was by watching the collision as it happened, and keeping track.
However, in a quantum mechanical scattering process, we cannot keep track of the particles unless we bombard them with photons having wavelength substantially less than the distance of closest approach. This is just like detecting an electron at a particular place when there are two electrons in a one-dimensional box: the probability amplitude for finding an α coming out at angle θ to the ingoing direction of one of them is the sum of the amplitudes (not the sum of the probabilities!) for scattering through θ and π - θ.
Writing the asymptotic scattering wave function in the standard form for scattering from a fixed target,
\[\psi (\vec {r} ) \approx e^{ikz} + f( \theta) \frac {e^{ikr}}{r} \]
the two-particle wave function in the center of mass frame, in terms of the relative coordinate, is given by symmetrizing:
\[ \psi (\vec {r}) \approx e^{ikz} + e^{-ikz} + ( f(\theta) + f (\pi - \theta)) \frac {e^{ikr}}{r}\]
How does the particle symmetry affect the actual scattering rate at an angle θ? If the particles were distinguishable, the differential cross section would be
\[ \left ( \frac {d \sigma}{d \Omega} \right ) _{distinguishable} = | f (\theta )|^2 + | f (\pi - \theta) |^2\]
but quantum mechanically
\[ \left ( \frac {d \sigma}{d \Omega} \right ) = | f (\theta) + f (\pi - \theta )|^2\]
This makes a big difference! For example, for scattering through 90°, where \(f (\theta) = f (\pi - \theta)\), the quantum mechanical scattering rate is twice the classical (distinguishable) prediction.
Furthermore, if we make the standard expansion of the scattering amplitude f(θ) in terms of partial waves,
\[f(\theta) = \sum _{l=0}^{\infty} (2l + 1) a_lP_l (\cos \theta )\]
\[f(\theta) + f(\pi - \theta) = \sum _{l=0}^{\infty} (2l + 1)a_l (P_l(\cos \theta) + P_l (\cos (\pi - \theta)))\]
\[ = \sum _{l=0}^{\infty} (2l + 1) a_l (P_l (\cos \theta ) + P_l (- \cos \theta ))\]
and since \(P_l (-x) = (-1)^l P_l (x)\) the scattering only takes place in even partial wave states. This is the same thing as saying that the overall wave function of two identical bosons is symmetric, so if they are in an eigenstate of total angular momentum, from \(P_l (-x) = (-1)^l P_l (x)\) it has to be a state of even l.
For fermions in an antisymmetric spin state, such as proton-proton scattering with the two proton spins forming a singlet, the spatial wave function is symmetric, and the argument is the same as for the boson case above. For parallel spin protons, however, the spatial wave function has to be antisymmetric, and the scattering amplitude will then be \(f (\theta) - f (\pi - \theta)\). In this case there is zero scattering at 90°!
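The 90° results quoted above are easy to reproduce numerically. In the sketch below (added for illustration) the partial-wave amplitudes a_l are made-up numbers; the physics enters only through the even/odd symmetry of the Legendre polynomials:

```python
import numpy as np
from numpy.polynomial.legendre import legval

a = np.array([0.9, 0.5, 0.3, 0.1])  # toy partial-wave amplitudes a_0..a_3

def f(theta):
    # f(theta) = sum_l (2l+1) a_l P_l(cos theta)
    l = np.arange(len(a))
    return legval(np.cos(theta), (2 * l + 1) * a)

th = np.pi / 2
print(abs(f(th))**2 + abs(f(np.pi - th))**2)  # distinguishable particles
print(abs(f(th) + f(np.pi - th))**2)          # identical bosons: twice as large
print(abs(f(th) - f(np.pi - th))**2)          # spin-parallel fermions: zero
```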
Note that for (nonrelativistic) equal mass particles, the scattering angle in the center of mass frame is twice the scattering angle in the fixed target (lab) frame. This is easily seen in the diagram below. The four equal-length black arrows, two in, two out, forming an X, are the center of mass momenta. The lab momenta are given by adding the (same length) blue dotted arrow to each, reducing one of the ingoing momenta to zero, and giving the (red arrow) lab momenta (slightly displaced for clarity). The outgoing lab momenta are the diagonals of rhombi (equal-side parallelograms), hence at right angles and bisecting the center of mass angles of scattering. |
cf55eaec7faa9fc6 | Introduction to Inorganic Chemistry/Molecular Orbital Theory
From Wikibooks, open books for an open world
Chapter 2: Molecular Orbital Theory
The lowest unoccupied molecular orbital of the carbon monoxide molecule is a π antibonding orbital that derives from the 2p orbitals of carbon (left) and oxygen (right)
Valence bond (VB) theory gave us a qualitative picture of chemical bonding, which was useful for predicting the shapes of molecules, bond strengths, etc. It fails to describe some bonding situations accurately because it ignores the wave nature of the electrons.
Molecular orbital (MO) theory has the potential to be more quantitative. With it we can also get a picture of where the electrons are in the molecule, as shown in the image at the right. This can help us understand patterns of bonding and reactivity that are otherwise difficult to explain.
Although MO theory in principle gives us a way to calculate the energies and wavefunctions of electrons in molecules very precisely, usually we settle for simplified models here too. These simple models do not give very accurate orbital and bond energies, but they do explain concepts such as resonance (e.g., in the ferrocene molecule) that are hard to represent otherwise. We can get more accurate energies from MO theory by computational "number crunching."
While MO theory is more correct than VB theory and can be very accurate in predicting the properties of molecules, it is also rather complicated even for fairly simple molecules. For example, you should have no trouble drawing the VB pictures for CO, NH3, and benzene, but we will find that these are increasingly challenging with MO theory.
Learning goals for Chapter 2:
• Be able to construct molecular orbital diagrams for homonuclear diatomic, heteronuclear diatomic, homonuclear triatomic, and heteronuclear triatomic molecules.
• Understand and be able to articulate how molecular orbitals form – conceptually, visually, graphically, and (semi)mathematically.
• Interrelate bond order, bond length, and bond strength for diatomic and triatomic molecules, including neutral and ionized forms.
• Use molecular orbital theory to predict molecular geometry for simple triatomic systems
• Rationalize molecular structure for several specific systems in terms of orbital overlap and bonding.
• Understand the origin of aromaticity and anti-aromaticity in molecules with π-bonding.
2.1 Constructing molecular orbitals from atomic orbitals
We use atomic orbitals (AO) as a basis for constructing MO's.
LCAO-MO = linear combination of atomic orbitals. In physics, this is called the tight-binding approximation.
We have actually seen linear combinations of atomic orbitals before when we constructed hybrid orbitals in Chapter 1. The basic rules we developed for hybridization also apply here: orbitals are added with scalar coefficients (c) in such a way that the resulting orbitals are orthogonal and normalized. The difference is that in the MO case, the orbitals come from different atoms.
The linear combination of atomic orbitals always gives back the same number of molecular orbitals. So if we start with two atomic orbitals (e.g., an s and a pz orbital as shown in Fig. 2.1.1), we end up with two molecular orbitals. When atomic orbitals add in phase, we get constructive interference and a lower energy orbital. When they add out of phase, we get a node and the resulting orbital has higher energy. The lower energy MOs are bonding and higher energy MOs are antibonding.
Fig. 2.1.1. Sigma bonding and antibonding combinations of an s and p orbital
Fig. 2.1.2. Different components of endothelial cells are stained by blue, green, and red fluorescent dyes. For each dye the color of emitted light corresponds to the energy given off when an electron drops from the LUMO to the HOMO of the molecule.
Molecular orbitals are also called wavefunctions (ψ), because they are solutions to the Schrödinger equation for the molecule. The atomic orbitals (also called basis functions) are labeled as φ's, for example, φ1s and φ3pz or simply as φ1 and φ2.
In principle, we need to solve the Schrödinger equation for all the orbitals in a molecule, and then fill them up with pairs of electrons as we do for the orbitals in atoms. In practice we are really interested only in the MOs that derive from the valence orbitals of the constituent atoms, because these are the orbitals that are involved in bonding. We are especially interested in the frontier orbitals, i.e., the highest occupied molecular orbital (the HOMO) and the lowest unoccupied molecular orbital (the LUMO). Filled orbitals that are much lower in energy (i.e., core orbitals) do not contribute to bonding, and empty orbitals at higher energy likewise do not contribute. Those orbitals are however important in photochemistry and spectroscopy, which involve electronic transitions from occupied to empty orbitals. The fluorescent dyes that stain the cells shown in Fig. 2.1.2 absorb light by promoting electrons in the HOMO to empty MOs and give off light when the electrons drop back down to their original energy levels.
As an example of the LCAO-MO approach we can construct two MO's (ψ1 and ψ2) of the HCl molecule from two AO's φ1 and φ2 (Fig. 2.1.1). To make these two linear combinations, we write:
ψ1 = c1φ1 + c2φ2
ψ2 = c1φ1 - c2φ2
The coefficients c1 and c2 will be equal (or nearly so) when the two AOs from which they are constructed are the same, e.g., when two hydrogen 1s orbitals combine to make bonding and antibonding MOs in H2. They will be unequal when there is an energy difference between the AOs, for example when a hydrogen 1s orbital and a chlorine 3p orbital combine to make a polar H-Cl bond.
The wavefunctions φ and ψ are amplitudes that are related to the probability of finding the electron at some point in space. They have lobes with (+) or (-) signs, which we indicate by shading or color. Wherever the wavefunction changes sign we have a node. As you can see in Fig. 2.1.1, nodes in MOs result from destructive interference of (+) and (-) wavefunctions. Generally, the more nodes, the higher the energy of the orbital.
In this example we have drawn a simplified picture of the Cl 3pz orbital and the resulting MOs, leaving out the radial node. Recall that 2p orbitals have no radial nodes, 3p orbitals have one, as illustrated in Fig. 2.1.3. 4p orbitals have two radial nodes, and so on. The MOs we make by combining the AOs have these nodes too.
Fig. 2.1.3. Nodal structure of 2p and 3p orbitals
Normalization: We square the wave functions to get probabilities, which are always positive or zero. So if an electron is in orbital φ1, the probability of finding it at point xyz is the square[1] of φ1(x,y,z). The total probability does not change when we combine AOs to make MOs, so for the simple case of combining φ1 and φ2 to make ψ1 and ψ2,
ψ1² + ψ2² = φ1² + φ2²
Fig. 2.1.4. The wavefunctions of atomic orbitals decrease exponentially with distance. Orbital overlap is non-zero when two atoms are close together, as illustrated for 1s orbitals in the upper figure. The lower figure shows orbitals that are too far away to interact. In this case both S and β are close to zero.
Overlap integral:
The spatial overlap between two atomic orbitals φ1 and φ2 is described by the overlap integral S,
S12 = ∫ φ1φ2 dτ
where the integration is over all space (dτ = dxdydz).
Energies of bonding and antibonding MOs:
The energies of bonding and antibonding orbitals depend strongly on the distance between atoms. This is illustrated in Fig. 2.1.5 for the hydrogen molecule, H2. At very long distances, there is essentially no difference in energy between the in-phase and out-of-phase combinations of H 1s orbitals. As they get closer, the in-phase (bonding) combination drops in energy because electrons are shared between the two positively charged nuclei. The energy reaches a minimum at the equilibrium bond distance (0.74 Å) and then rises again as the nuclei get closer together. The antibonding combination has a node between the nuclei so its energy rises continuously as the atoms are brought together.
Fig. 2.1.5. Energy as a function of distance for the bonding and antibonding orbitals of the H2 molecule
At the equilibrium bond distance, the energies of the bonding and antibonding molecular orbitals (ψ1, ψ2) are lower and higher, respectively, than the energies of the atomic basis orbitals φ1 and φ2. This is shown in Fig. 2.1.6 for the MO’s of the H2 molecule.
Fig. 2.1.6. Molecular orbital energy diagram for the H2 molecule
The energy of an electron in one of the atomic orbitals is α, the Coulomb integral.
α = ∫ φ1Hφ1 dτ = ∫ φ2Hφ2 dτ
where H is the Hamiltonian operator. Essentially, α represents the ionization energy of an electron in atomic orbital φ1 or φ2.
The energy difference between an electron in the AO’s and the MO’s is determined by the exchange integral β,
β = ∫ φ1Hφ2 dτ
β is an important quantity, because it tells us about the bonding energy of the molecule, and also the difference in energy between bonding and antibonding orbitals. Calculating β is not straightforward for multi-electron molecules because we cannot solve the Schrödinger equation analytically for the wavefunctions. We can however make some approximations to calculate the energies and wavefunctions numerically. In the Hückel approximation, which can be used to obtain approximate solutions for π molecular orbitals in organic molecules, we simplify the math by taking S=0 and setting the matrix element of H to zero for any pair of p-orbitals that are not adjacent to each other. The extended Hückel method,[2] developed by Roald Hoffmann, and other semi-empirical methods can be used to rapidly obtain relative orbital energies, approximate wavefunctions, and degeneracies of molecular orbitals for a wide variety of molecules and extended solids. More sophisticated ab initio methods are now readily available in software packages and can be used to compute accurate orbital energies for molecules and solids.
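As a concrete illustration of the Hückel recipe just described, the sketch below diagonalizes the π Hamiltonian of butadiene (four carbons in a chain). This example is added here for illustration; working in units where α = 0 and β = -1 is a common convention, and the ±1.618/±0.618 eigenvalues are the standard textbook Hückel result:

```python
import numpy as np

# Hückel pi system of butadiene: alpha on the diagonal, beta between
# adjacent carbons, overlap S taken as 0.
alpha, beta = 0.0, -1.0  # alpha as the energy zero, energies in units of |beta|
H = np.diag([alpha] * 4)
for i in range(3):
    H[i, i + 1] = H[i + 1, i] = beta

energies, coeffs = np.linalg.eigh(H)
print(energies)  # [-1.618, -0.618, +0.618, +1.618], i.e. alpha +/- 1.618|beta|
                 # and alpha +/- 0.618|beta|: the four pi electrons fill the
                 # two bonding MOs; the two antibonding MOs stay empty.
```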
We can get the coefficients c1 and c2 for the hydrogen molecule by applying the normalization criterion:
ψ1 = (φ1 + φ2)/√(2(1+S)) (bonding orbital)
ψ2 = (φ1 - φ2)/√(2(1-S)) (antibonding orbital)
In the case where S≈0, we can eliminate the 1±S terms and both coefficients become 1/√2
Note that the bonding orbital in the MO diagram of H2 is stabilized by an energy β/(1+S) and the antibonding orbital is destabilized by β/(1-S). That is, the antibonding orbital goes up in energy more than the bonding orbital goes down. This means that H2 (ψ1²ψ2⁰) is energetically more stable than two H atoms, but He2 with four electrons (ψ1²ψ2²) is unstable relative to two He atoms.
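These (1±S) denominators fall straight out of the 2×2 secular problem, which can be solved numerically as a generalized eigenvalue problem H c = E S c. In the sketch below the values of α, β, and S are illustrative only (not fitted to H2):

```python
import numpy as np
from scipy.linalg import eigh

alpha, beta, S = -13.6, -5.0, 0.25  # illustrative values, in eV
H = np.array([[alpha, beta], [beta, alpha]])
Smat = np.array([[1.0, S], [S, 1.0]])

E, C = eigh(H, Smat)  # solves H c = E S c
print(E)  # [(alpha+beta)/(1+S), (alpha-beta)/(1-S)] = [-14.88, -11.47]
# The bonding level lies 1.28 eV below alpha while the antibonding level
# lies 2.13 eV above it: antibonding is destabilized more than bonding is
# stabilized, which is why He2 (four electrons) is unbound.
```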
Bond order: In any MO diagram, the bond order can be calculated as ½ ( # of bonding electrons - # of antibonding electrons). For H2 the bond order is 1, and for He2 the bond order is zero.
Heteronuclear case (e.g., HCl) - Polar bonds
Here we introduce an electronegativity difference between the two atoms making the chemical bond. The energy of an electron in the H 1s orbital is higher (it is easier to ionize) than the electron in the chlorine 3pz orbital. This results in a larger energy difference between the resulting molecular orbitals ψ1 and ψ2, as shown in Fig. 2.1.7. The bigger the electronegativity difference between atomic orbitals (the larger Δα is) the more “φ2 character” the bonding orbital has, i.e., the more it resembles the Cl 3pz orbital in this case. This is consistent with the idea that H-Cl has a polar single bond: the two electrons reside in a bonding molecular orbital that is primarily localized on the Cl atom.
Fig. 2.1.7. Molecular orbital energy diagram for the HCl molecule
The antibonding orbital (empty) has more H-character. The bond order is again 1 because there are two electrons in the bonding orbital and none in the antibonding orbital.
Extreme case - Ionic bonding (NaF): very large Δα
In this case, there is not much mixing between the AO’s because their energies are far apart (Fig. 2.1.8). The two bonding electrons are localized on the F atom, so we can write the molecule as Na+F-. Note that if we were to excite an electron from ψ1 to ψ2 using light, the resulting electronic configuration would be (ψ1¹ψ2¹) and we would have Na0F0. This is called a charge transfer transition.
Fig. 2.1.8. Molecular orbital energy diagram illustrating ionic bonding in the NaF molecule
Summary of molecular orbital theory so far:
Add and subtract AO wavefunctions to make MOs. Two AOs → two MOs. More generally, the total number of MOs equals the number of AO basis orbitals.
• We showed the simplest case (only two basis orbitals). More accurate calculations use a much larger basis set (more AOs) and solve for the matrix of c’s that gives the lowest total energy, using mathematically friendly models of the potential energy function that is part of the Hamiltonian operator H.
More nodes → higher energy MO
Bond order = ½ ( # of bonding electrons - # of antibonding electrons)
Bond polarity emerges in the MO picture as orbital “character.”
• AOs that are far apart in energy do not interact much when they combine to make MOs.
2.2 Orbital symmetry
Fig. 2.2.1. Example of σ symmetry.
The MO picture for a molecule gets complicated when many valence AOs are involved. We can simplify the problem enormously by noting (without proof here) that orbitals of different symmetry with respect to the molecule do not interact.
AO’s must have the same nodal symmetry (as defined by the molecular symmetry operations), or their overlap is zero.
For example, in the HCl molecule, the molecular symmetry axis is the z axis, as shown in Fig. 2.2.2.
Fig. 2.2.2. Orbitals of σ and π symmetry do not interact.
Because these two orbitals have different symmetries, the Cl 3py orbital is nonbonding and doesn’t interact with the H 1s. The same is true of the Cl 3px orbital. The px and py orbitals have π symmetry (nodal plane containing the bonding axis) and are labeled πnb in the MO energy level diagram, Fig. 2.2.3. In contrast, the H 1s and Cl 3pz orbitals both have σ symmetry, which is also the symmetry of the clay pot shown in Fig. 2.2.1. Because these orbitals have the same symmetry (in the point group of the molecule), they can make the bonding and antibonding combinations shown in Fig. 2.1.1.
The MO diagram of HCl that includes all the valence orbitals of the Cl atom is shown in Fig. 2.2.3. Two of the Cl valence orbitals (3px and 3py) have the wrong symmetry to interact with the H 1s orbital. The Cl 3s orbital has the same (σ) symmetry as H 1s, but it is much lower in energy so there is little orbital interaction. The energy of the Cl 3s orbital is thus affected only slightly by forming the molecule. The pairs of electrons in the πnb and σnb orbitals are therefore non-bonding.
Fig. 2.2.3. Energy level diagram of the HCl molecule showing MOs derived from the valence AOs.
Note that the MO result in Fig. 2.2.3 (1 bond and three pairs of nonbonding electrons) is the same as we would get from valence bond theory for HCl. The nonbonding orbitals are localized on the Cl atom, just as we would surmise from the valence bond picture.
In order to differentiate it from the σ bonding orbital, the σ antibonding orbital, which is empty in this case, is designated with an asterisk.
2.3 σ, π, and δ orbitals
Fig. 2.3.1. The octachlorodirhenate(III) anion, [Re2Cl8]2−, which has a quadruple Re-Re bond.[3]
Inorganic compounds use s, p, and d orbitals (and more rarely f orbitals) to make bonding and antibonding combinations. These combinations result in σ, π, and δ bonds (and antibonds).
You are already familiar with σ and π bonding in organic compounds. In inorganic chemistry, π bonds can be made from p- and/or d-orbitals. δ bonds are more rare and occur by face-to-face overlap of d-orbitals, as in the ion Re2Cl82-. The fact that the Cl atoms are eclipsed in this anion is evidence of δ bonding.
Some possible σ (top row), π (bottom row), and δ bonding combinations (right) of s, p, and d orbitals are sketched below. In each case, we can make bonding or antibonding combinations, depending on the signs of the AO wavefunctions. Because pπ-pπ bonding involves sideways overlap of p-orbitals, it is most commonly observed with second-row elements (C, N, O). π-bonded compounds of heavier elements are rare because the larger cores of the atoms prevent good π-overlap. For this reason, compounds containing C=C double bonds are very common, but those with Si=Si bonds are rare. δ bonds are generally quite weak compared to σ and π bonds. Compounds with metal-metal δ bonds occur in the middle of the transition series.
[Figure: σ and π bonding combinations of s, p, and d orbitals; δ bonding combination of d orbitals]
Transition metal d-orbitals can also form σ bonds, typically with s-p hybrid orbitals of appropriate symmetry on ligands. For example, phosphines (R3P:) are good σ donors in complexes with transition metals, as shown at the right.
pπ-dπ bonding is also important in transition metal complexes. In metal carbonyl complexes such as Ni(CO)4 and Mo(CO)6, there is sideways overlap between filled metal d-orbitals and the empty π-antibonding orbitals of the CO molecule. This interaction strengthens the metal-carbon bond but weakens the carbon-oxygen bond. The C-O infrared stretching frequency is diagnostic of the strength of the bond and can be used to estimate the degree to which electrons are transferred from the metal d-orbital to the CO π-antibonding orbital.
The same kind of backbonding occurs with phosphine complexes, which have empty π orbitals, as shown at the right. Transition metal complexes containing halide ligands can also have significant pπ-dπ bonding, in which a filled pπ orbital on the ligand donates electron density to an unfilled metal dπ orbital. We will encounter these bonding situations in Chapter 5.
2.4 Diatomic molecules
Valence bond theory fails for a number of the second row diatomics, most famously for O2, where it predicts a diamagnetic, doubly bonded molecule with four lone pairs. O2 does have a double bond, but it has two unpaired electrons in the ground state, a property that can be explained by the MO picture. We can construct the MO energy level diagrams for these molecules as follows:
[Figure: MO energy diagrams for Li2, Be2, B2, C2, N2 (left) and for O2, F2, Ne2 (right), together with a plot of second-row atomic orbital energies]
We get the simpler picture on the right when the 2s and 2p AOs are well separated in energy, as they are for O, F, and Ne. The picture on the left results from mixing of the σ2s and σ2p MO’s, which are close in energy for Li2, Be2, B2, C2, and N2. The effect of this mixing is to push the σ2s down in energy and the σ2p up, to the point where the pπ orbitals are below the σ2p.
Why don't we get sp-orbital mixing for O2 and F2? The reason has to do with the energies of the orbitals, which are not drawn to scale in the simple picture above. As we move across the second row of the periodic table from Li to F, we are progressively adding protons to the nucleus. The 2s orbital, which has finite amplitude at the nucleus, "feels" the increased nuclear charge more than the 2p orbital. This means that as we progress across the periodic table (and also, as we will see later, when we move down the periodic table), the energy difference between the s and p orbitals increases. As the 2s and 2p energies become farther apart in energy, there is less interaction between the orbitals (i.e., less mixing).
A plot of orbital energies is shown at the right. Because of the very large energy difference between the 1s and 2s/2p orbitals, we plot them on different energy scales, with the 1s to the left and the 2s/2p to the right. For elements at the left side of the 2nd period (Li, Be, B) the 2s and 2p energies are only a few eV apart. The energy difference becomes very large - more than 20 electron volts - for O and F. Since single bond energies are typically about 3-4 eV, this energy difference would be very large on the scale of our MO diagrams. For all the elements in the 2nd row of the periodic table, the 1s (core) orbitals are very low in energy compared to the 2s/2p (valence) orbitals, so we don't need to consider them in drawing our MO diagrams.
2.5 Orbital filling
MO’s are filled from the bottom according to the Aufbau principle and Hund’s rule, as we learned for atomic orbitals.
Question: what is the quantum mechanical basis of Hund’s rule?
Consider the case of two degenerate orbitals, such as the π or π* orbitals in a second-row diatomic molecule. If these orbitals each contain one electron, their spins can be parallel (as preferred by Hund's rule) or antiparallel. The Pauli exclusion principle says that no two electrons in an orbital can have the same set of quantum numbers (n, l, ml, ms). That means that, in the parallel case, the Pauli principle prevents the electrons from ever visiting each other's orbitals. In the antiparallel case, they are free to come and go because they have different ms quantum numbers. However, having two electrons in the same orbital is energetically unfavorable because like charges repel. Thus, the parallel arrangement, thanks to the Pauli principle, has lower energy.
For O2 (12 valence electrons), we get the MO energy diagram below. The shapes of the molecular orbitals are shown at the right.
[Figure: O2 molecular orbital energy diagram and computed molecular orbital shapes]
Red giant stars are characterized by the presence of C2 molecules in their atmospheres. Since C2 has a net bond order of two, it reacts rapidly as it cools from the gas phase to make other forms of carbon such as fullerenes, graphite, and diamond, all of which have four bonds for every two carbon atoms.
This energy ordering of MOs correctly predicts two unpaired electrons in the π* orbital and a net bond order of two (8 bonding electrons and 4 antibonding electrons). This is consistent with the experimentally observed paramagnetism of the oxygen molecule.
Other interesting predictions of the MO theory for second-row diatomics are that the C2 molecule has a bond order of 2 and that the B2 molecule has two unpaired electrons (both verified experimentally).
We can also predict (using the O2, F2, Ne2 diagram above) that NO has a bond order of 2.5, and CO has a bond order of 3.
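The electron counting behind these predictions is mechanical enough to automate. A minimal sketch, added for illustration: the level names follow the O2/F2/Ne2 ordering above, and the resulting bond orders are unaffected by the σ2p/π2p swap on the left side of the row:

```python
# MO ladder for second-row diatomics in the O2/F2/Ne2 ordering:
# (level, capacity, +1 bonding / -1 antibonding)
levels = [("sigma2s", 2, +1), ("sigma2s*", 2, -1),
          ("sigma2p", 2, +1), ("pi2p", 4, +1),
          ("pi2p*", 4, -1), ("sigma2p*", 2, -1)]

def bond_order(n_valence):
    bonding = antibonding = 0
    for _name, capacity, sign in levels:
        n = min(n_valence, capacity)   # Aufbau filling from the bottom
        n_valence -= n
        if sign > 0:
            bonding += n
        else:
            antibonding += n
    return (bonding - antibonding) / 2

for molecule, n in [("O2", 12), ("F2", 14), ("NO", 11), ("CO", 10)]:
    print(molecule, bond_order(n))  # 2.0, 1.0, 2.5, 3.0
```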
The symbols "g" and "u" in the orbital labels, which we only include in the case of centrosymmetric molecules, refer to their symmetry with respect to inversion. Gerade (g) orbitals are symmetric, meaning that inversion through the center leaves the orbital unchanged. Ungerade (u) means that the sign of the orbital is reversed by the inversion operation. Because g and u orbitals have different symmetries, they have zero overlap with each other. As we will see below, factoring orbitals according to g and u symmetry simplifies the task of constructing molecular orbitals in more complicated molecules, such as butadiene and benzene.
The orbital shapes shown above were computed using a one-electron model of the molecule, as we did for hydrogen-like AOs to get the shapes of s, p, and d-orbitals. To get accurate MO energies and diagrams for multi-electron molecules (i.e. all real molecules), we must include the fact that electrons are “correlated,” i.e. that they avoid each other in molecules because of their negative charge. This problem cannot be solved analytically, and is solved approximately in numerical calculations by using density functional theory (DFT). We will learn about the consequences of electron correlation in solids (such as superconductors) in Chapter 10.
2.6 Periodic trends in π bonding
As we noted in Section 2.3, pπ-bonding almost always involves a second-row element.
We encounter π-bonding from the sideways overlap of p-orbitals in the MO diagrams of second-row diatomics (B2…O2). It is important to remember that π-bonds are weaker than σ bonds made from the same AOs, and are especially weak if they involve elements beyond the second row.
comparison of ethylene and disilene structures
Ethylene: Stable molecule, doesn't polymerize without a catalyst.
Disilene: Never isolated, spontaneously polymerizes. Calculations indicate that in the gas phase it is 117 kJ/mol more stable than singly-bonded (triplet) H2Si-SiH2.
White phosphorus (P4) is a soft, waxy solid that ignites spontaneously in air, burning with a bright flame and generating copious white P4O10 smoke. The sample shown here is photographed under water to prevent the oxidation reaction.
The large Ne core of Si atoms inhibits sideways overlap of 3p orbitals → weak π-bond.
Other examples: P4 vs. N2
comparison of elemental phosphorus and nitrogen structures
P cannot make π-bonds with itself, so it forms a tetrahedral molecule with substantial ring strain. This allotrope of P undergoes spontaneous combustion in air. Solid white phosphorus very slowly converts to red phosphorus, a more stable allotrope that contains sheets of pyramidal P atoms, each with bonds to three neighboring atoms and one lone pair.
N can make π-bonds, so N2 has a very strong triple bond and is a relatively inert diatomic gas.
Silicone polymers (R2SiO)n are used in non-stick cookware, like these muffin cups, in Silly Putty, and many other applications.
(CH3)2SiO vs. (CH3)2CO
comparison of polysiloxane and acetone structures
“RTV” silicone polymer (4 single bonds to Si) vs. acetone (C=O double bond). Silicones are soft, flexible polymers that can be heated to high temperatures (>300 °C) without decomposing. Acetone is a flammable molecular liquid that boils at 56 °C.
Also compare:
SiO2 (mp ~1600°C) vs. CO2 (sublimes at -78°C)
S8 (solid, ring structure) vs. O2 (gas, double bond)
P4O10 molecular structure
2nd row elements can form reasonably strong π-bonds with the smallest of the 3rd row elements, P, S, and Cl. Thus we find S=N bonds in sulfur-nitrogen compounds such as S2N2 and S3N3-, P=O bonds in phosphoric acid and P4O10 (shown at the left), and a delocalized π-molecular orbital in SO2 (as in ozone).
2.7 Three-center bonding
Many (but not all) of the problems we will solve with MO theory derive from the MO diagram of the H2 molecule (Fig. 2.1.5), which is a case of two-center bonding. The rest we will solve by analogy to the H3+ ion, which introduces the concept of three-center bonding.
We can draw the H3+ ion (and also H3 and H3-) in either a linear or triangular geometry.
Walsh correlation diagram for H3+:
walsh diagram for H3+
A few important points about this diagram:
• For the linear form of the ion, the highest and lowest MO’s are symmetric with respect to the inversion center in the molecule. Note that the central 1s orbital has g symmetry, so by symmetry it has zero overlap with the u combination of the two 1s orbitals on the ends. This makes the σu orbital a nonbonding orbital.
• In the triangular form of the molecule, the orbitals that derive from σu and σ*g become degenerate (i.e., they have exactly the same energy by symmetry). The term symbol “e” means doubly degenerate. We will see later that “t” means triply degenerate. Note that we drop the “g” and “u” labels for the triangular orbitals because a triangle does not have an inversion center.
• The triangular form is most stable because the two electrons in H3+ have lower energy in the lowest orbital. Bending the molecule creates a third bonding interaction between the 1s orbitals on the ends.
MO diagram for XH2 (X = Be, B, C…):
This is more complicated than H3 because the X atom has both s and p orbitals. However, we can symmetry factor the orbitals and solve the problem by analogy to the H2 molecule:
MO diagram for XH2
Some key points about this MO diagram:
• In the linear form of the molecule, which has inversion symmetry, the 2s and 2p orbitals of the X atom factor into three symmetry classes:
2s = σg
2pz = σu
2px, 2py = πu
• Similarly, we can see that the two H 1s orbitals make two linear combinations, one with σg symmetry and one with σu symmetry. They look like the bonding and antibonding MO’s of the H2 molecule (which is why we say we use that problem to solve this one).
• The πu orbitals must be non-bonding because there is no combination of the H 1s orbitals that has πu symmetry.
• In the MO diagram, we make bonding and antibonding combinations of the σg’s and the σu’s. For BeH2, we then populate the lowest two orbitals with the four valence electrons and discover (not surprisingly) that the molecule has two bonds and can be written H-Be-H. The correlation diagram shows that a bent form of the molecule should be less stable.
An interesting story about this MO diagram is that it is difficult to predict a priori whether CH2 should be linear or bent. In 1970, Charles Bender and Henry Schaefer, using quantum chemical calculations, predicted that the ground state should be a bent triplet with an H-C-H angle of 135°.[4] The best experiments at the time suggested that methylene was a linear singlet, and the theorists argued that the experimental result was wrong. Later experiments proved them right!
2.8 Building up the MOs of more complex molecules: NH3, P4
MO diagram for NH3
We can now attempt the MO diagram for NH3, building on the result we obtained with triangular H3+.
molecular orbital diagram for the ammonia molecule
Notes on the MO diagram for ammonia:
• Viewed end-on, a p-orbital or an spx hybrid orbital looks just like an s-orbital. Hence we can use the solutions we developed with s-orbitals (for H3+) to set up the σ bonding and antibonding combinations of nitrogen sp3 orbitals with the H 1s orbitals.
• We now construct the sp3 hybrid orbitals of the nitrogen atom and orient them so that one is “up” and the other three form the triangular base of the tetrahedron. The latter three, by analogy to the H3+ ion, transform as one totally symmetric orbital (“a1”) and an e-symmetry pair. The hybrid orbital at the top of the tetrahedron also has a1 symmetry.
• The three hydrogen 1s orbitals also make one a1 and one (doubly degenerate) e combination. We make bonding and antibonding combinations with the nitrogen orbitals of the same symmetry. The remaining a1 orbital on N is non-bonding. The dotted lines show the correlation between the basis orbitals of a1 and e symmetry and the molecular orbitals.
• The result in the 8-electron NH3 molecule is three N-H bonds and one lone pair localized on N, the same as the valence bond picture (but much more work!).
P4 molecule and P42+ ion:
By analogy to NH3 we can construct the MO picture for one vertex of the P4 tetrahedron, and then multiply the result by 4 to get the bonding picture for the molecule. An important difference is that there is relatively little s-p hybridization in P4, so the lone pair orbitals have more s-character and are lower in energy than the bonding orbitals, which are primarily pσ.
P4: 20 valence electrons
molecular orbital scheme for the P4 molecule
Take away 2 electrons to make P42+
Highest occupied MO is a bonding orbital → break one bond, 5 bonds left
Square form relieves ring strain (60° → 90°)
structure of the P42+ion
2 π electrons
aromatic (4n + 2 rule)
2.9 Homology of σ and π orbitals in MO diagrams
The ozone molecule (and related 18e molecules that contain three non-H atoms, such as NO2- and the allyl anion [CH2-CH-CH2]-) is an example of 3-center 4-electron π-bonding. Our MO treatment of ozone is entirely analogous to the 4-electron H3- anion. We map that solution onto this one as follows:
constructing the ozone π-system by analogy to H3-
Professor Roald Hoffmann's ideas about orbital symmetry have helped explain the bonding and reactivity of organic and organometallic molecules, and also the structures and properties of extended solids.
The nonbonding π-orbital has a node at the central O atom. This means that the non-bonding electron pair in the π-system is shared by the two terminal O atoms, i.e., that the formal charge is shared by those atoms. This is consistent with the octet resonance structure of ozone.
This trick of mapping the solution for a set of s-orbitals onto a π-bonding problem is a simple example of a broader principle called the isolobal analogy. This idea, developed extensively by Roald Hoffmann at Cornell University, has been used to understand bonding and reactivity in organometallic compounds.[5] In the isolobal analogy, symmetry principles (as illustrated above in the analogy between H3- and ozone) are used to construct MO diagrams of complex molecules containing d-frontier orbitals from simpler molecular fragments.
The triiodide ion. An analogous (and seemingly more complicated) case of 3-center 4-electron bonding is I3-. Each I atom has 4 valence orbitals (5s, 5px, 5py, 5pz), making a total of 12 frontier orbitals, and the I3- anion has 22 electrons.
We can simplify the problem by recalling two periodic trends:
• The s-p orbital splitting is large, relative to the bond energy, after the second row of the periodic table. Thus, the 5s orbital is low in energy and too contracted to make bonds with its neighbors.
• π-overlap of 5p orbitals is very weak, so the 5px and 5py orbitals will also be non-bonding.
This leaves only the three 5pz orbitals to make bonding/nonbonding/antibonding combinations. Again, the problem is entirely analogous to ozone or H3-.
constructing the I3- MO diagram from frontier 5pz orbitals
Counting orbitals we obtain 9 lone pairs from the nonbonding 5s, 5px, and 5py orbitals, as well as one bond and one lone pair from the 5pz orbital combinations above. The total of 10 nonbonding pairs and one bond accounts for the 22 electrons in the ion. The non-bonding 5pz pair is localized on the terminal I atoms, giving each a -1/2 formal charge. This MO description is entirely consistent with the octet no-bond resonance picture of I3- that we developed in Chapter 1.
Octet no-bond resonance structures of triiodide
2.10 Chains and rings of π-conjugated systems
ethylene molecule
Ethylene: The π system is analogous to σ-bonding in H2
homology of H2 and ethylene orbitals
Viewed from the top or bottom, the ethylene π-orbitals look like the H2 σ orbitals. Thus we can map solutions from chains and rings of H atoms onto chains and rings of π-orbitals (as we did for the three-orbital case of O3).
Chains and rings of four H atoms or π-orbitals (H4 or butadiene):
MO diagram for H4 or butadiene
A few notes about this MO diagram:
• In the linear form of the molecule, the combination of AOs makes a ladder of evenly spaced energy levels that alternate g – u – g – u …. Each successive orbital has one more node. This is a general rule for linear chains of σ or π orbitals with even numbers of atoms.
• In the cyclic form of the molecule, there is one non-degenerate orbital at the bottom, one at the top, and a ladder of doubly degenerate orbitals in between. This is also a general rule for cyclic molecules with even numbers of atoms. This is the origin of the 4n+2 rule for aromatics.
• H4 has four valence electrons, and by analogy butadiene has four π-electrons. These electrons fill the lowest two MOs in the linear form of the molecule, corresponding to two conjugated π-bonds in butadiene (H2C=CH-CH=CH2).
• In the cyclic form of the molecule, the degenerate orbitals are singly occupied. The molecule can break the degeneracy (and lower its energy) by distorting to a rectangle. This is a general rule for anti-aromatic cyclic molecules (4n rule). Thus cyclobutadiene should be anti-aromatic and have two single and two double bonds that are not delocalized by resonance.
antiaromaticity of cyclobutadiene
Cyclobutadiene is actually a very unstable molecule because it polymerizes to relieve ring strain. Sterically hindered derivatives of the molecule do have the rectangular structure predicted by MO theory.
Benzene π-orbitals:
How do we get from a 4-atom to 6-atom chain?
By analogy to the process we used to go from a 2-atom chain to a 4-atom chain, we now go from 4 to 6. We start with the orbitals of the 4-atom chain, which form a ladder of g and u orbitals. Then we make g and u combinations of the two atoms that we are adding at the ends. By combining g's with g's and u's with u's, we end up with the solutions for a string of 6 atoms. Closing these orbitals into a loop gives us the π molecular orbitals of the benzene molecule. The result is three π bonds, as we expected. Benzene fits the 4n+2 rule (n=1) and is therefore aromatic.
constructing the pi MOs of benzene
Here we have used the isolobal analogy to construct MO diagrams for π-bonded systems, such as ethylene and benzene, from combinations of s-orbitals. It raises the interesting question of whether the aromatic 4n+2 rule might apply to s-orbital systems, i.e., if three molecules of H2 could get together to form an aromatic H6 molecule. In fact, recent studies of hydrogen under ultra-high pressures in a diamond anvil cell show that such structures do form. A solid hydrogen phase exists that contains sheets of distorted six-membered rings, analogous to the fully connected 2D network of six-membered rings found in graphite or graphene.[6]
It should now be evident from our construction of MO diagrams for four- and six-orbital molecules that we can keep adding atomic orbitals to make chains and rings of 8, 10, 12... atoms. In each case, the g and u orbitals form a ladder of MOs. At the bottom rung of the ladder of an N-atom chain, there are no nodes in the MO, and we add one node for every rung until we get to the top, where there are N-1 nodes. Another way of saying this is that the wavelength of an electron in orbital x, counting from the bottom (1,2,3...x,...N), is 2Na/x, where a is the distance between atoms. We will find in Chapters 6 and 10 that we can learn a great deal about the electronic properties of metals and semiconductors from this model, using the infinite chain of atoms as a model for the crystal.
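These counting rules are easy to check numerically. The following sketch (an illustration added here, not part of the original text) diagonalizes a simple Hückel/tight-binding matrix with a common energy α on the diagonal and a coupling β between adjacent orbitals; the six-membered ring reproduces the benzene π pattern of a non-degenerate bottom and top level with doubly degenerate pairs in between.

```python
import numpy as np

def huckel_energies(n, cyclic=False, alpha=0.0, beta=-1.0):
    """Hückel/tight-binding energies for a chain or ring of n identical orbitals."""
    H = alpha * np.eye(n)
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = beta     # nearest-neighbor coupling
    if cyclic:
        H[0, n - 1] = H[n - 1, 0] = beta     # close the chain into a ring
    return np.sort(np.linalg.eigvalsh(H))

print(huckel_energies(6))                # chain: ladder of 6 distinct levels
print(huckel_energies(6, cyclic=True))   # ring: [-2, -1, -1, 1, 1, 2] in units of |beta|
```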
2.11 Discussion questions
• Derive the molecular orbital diagrams for linear and bent H2O.
• Explain why the bond angles in H2O and H2S are different.
• We have derived the MO diagrams for the pi-systems of four- and six-carbon chains and rings. Repeat this exercise for a 5-carbon chain and 5-carbon ring (e.g., the cyclopentadienide anion), starting from the MO pictures for H2 and H3. This tricky problem helps us understand the electronic structure of ferrocene, and was the subject of a Nobel prize in 1973.
2.12 Problems
1. The ionization energy of a hydrogen atom is 1312 kJ/mol and the bond dissociation energy of the H2+ molecular ion is 256 kJ/mol. The overlap integral S for the H2+ molecular ion is given by the expression S = (1 + R/a0 + R²/3a0²)exp(−R/a0), where R is the bond distance (1.06 Å) and a0 is the Bohr radius, 0.529 Å. What are the values of α and β (in units of kJ/mol) for H2+?
2. Compare the bond order in H2+ and H2 using the molecular orbital energy diagram for H2. The bond dissociation energy of the H2 molecule is 436 kJ/mol. Explain why this energy is less than twice that of H2+.
3. What is the bond order in HHe? Why has this compound never been isolated?
4. Would you expect the Be2 molecule to be stable in the gas phase? What is the total bond order, and how many net σ and π bonds are there?
5. Give a plausible explanation for the following periodic trend in F-M-F bond angles for gas-phase alkaline earth difluoride (MF2) molecules. (Hint - it has something to do with a trend in s- and p-orbital energies)
Compound F-M-F angle (degrees)
BeF2 180
MgF2 158
CaF2 140
SrF2 108
BaF2 100
6. The most stable allotrope of nitrogen is N2, but the analogous phosphorus molecule (P2) is unknown. Explain.
7. Using molecular orbital theory, show why the H3+ ion has a triangular rather than linear shape.
8. Use MO theory to determine the bond order and number of unpaired electrons in (a) O2-, (b) O2+, (c) gas phase BN, and (d) NO-. Estimate the bond lengths in O2- and O2+ using the Pauling formula and the bond length of the O2 molecule (1.21 Å).
10. Compare the results of MO theory and valence bond theory for describing the bonding in (a) CN- and (b) neutral CN. Is it possible to have a bond order greater than 3 in a second-row diatomic molecule?
11. The C2 molecule, which is a stable molecule only in the gas phase, is the precursor to fullerenes and carbon nanotubes. Its luminescence is also responsible for the green glow of comet tails. Draw the molecular orbital energy diagram for this molecule. Determine the bond order and the number of unpaired electrons.
12. Use the Pauling formula to estimate the bond order in C2 from the bond distance, 1.31 Å. The C-C single bond distance in ethane is 1.54 Å. Does your calculation agree with your answer to problem 11? What bond order would valence bond theory predict for C2?
13. Draw the MO diagram for the linear [FHF]- ion. The only orbitals you need to worry about are the frontier orbitals, i.e., the H 1s and the two F spz hybrid orbitals that lie along the bonding (z) axis. What is the order of the HF bonds? What are the formal charges on the atoms?
14. The cyclooctatetraene (cot) molecule (picture a stop sign with four double bonds) has a puckered ring structure. However in U(cot)2, where the oxidation state of uranium is 4+ and the cot ligand has a formal charge of 2-, the 8-membered rings are planar. Why is cot2- planar?
2.13 References
1. More precisely, in the case of a complex wavefunction φ, the probability is the product of φ and its complex conjugate φ*.
2. Hoffmann, R. (1963). "An Extended Hückel Theory. I. Hydrocarbons." J. Chem. Phys. 39 (6): 1397–1412. doi:10.1063/1.1734456. Bibcode: 1963JChPh..39.1397H.
3. Cotton, F. A.; Harris, C. B. (1965). Inorg. Chem. 4 (3): 330–333. doi:10.1021/ic50025a015.
4. C. F. Bender and H. F. Schaefer III, New theoretical evidence for the nonlinearity of the triplet ground state of methylene, J. Am. Chem. Soc. 92, 4984–4985 (1970).
5. Hoffmann, R. (1982). "Building Bridges Between Inorganic and Organic Chemistry (Nobel Lecture)". Angew. Chem. Int. Ed. 21 (10): 711–724. doi:10.1002/anie.198207113.
6. I. Naumov and R. J. Hemley, Acc. Chem. Res. 47, 3551–3559 (2014) |
c2351a396db5f056 |
Jacques Distler vs some QFT lore
Three weeks ago, in the article titled
What's going on? Indeed, textbooks and instructors often – and, according to some measures, always – say that quantum mechanics of one particle ceases to behave well once you switch to relativity – to theories covariant under the Lorentz transformations.
Are these statements right? Are they wrong? And are the correct statements one can make important? It depends what exact statements you have in mind.
What Distler discusses is the existence of the Hilbert space – and Hamiltonian – for one particle, e.g. the Klein-Gordon particle. Does it exist? You bet. If you believe that a Hilbert space of particles exists in free quantum field theory, do the following: Write the basis of that Hilbert space as the basis of a Fock space, i.e. in terms of the basis vectors that are\[
a^\dagger_{\vec k_1} \cdots a^\dagger_{\vec k_n} \ket 0
\] And simply pick those basis vectors that contain exactly one creation operator. This one-particle subspace of the Hilbert space will evolve to itself under the empty-spacetime evolution operators. In fact, if you write the basis in the momentum basis as I did, the Hamiltonian for one real quantum of the real Klein-Gordon equation will be simply\[
H = \sqrt{|\vec k|^2 + m^2}.
\] This is something you may derive from quantum field theory. The operator above is perfectly well-defined in the momentum space. The energy is non-negative, the norms of states are positive, everything works fine.
So has Distler shown that all the statements of the type "one particle isn't consistent in relativistic quantum mechanics" are wrong?
Nope, he hasn't. In particular, he was talking about the statement
...replacing the [non-relativistic, e.g. one-particle] Schrödinger equation with Klein-Gordon make[s] no sense...
But this statement is right at the level of one-particle quantum mechanics because his equation for the evolution of the wave function is not the Klein-Gordon equation. You know, the Klein-Gordon equation is\[
\left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2} + m^2 \right) \Phi = 0.
\] That's a nice, local – perfectly differential equation. On the other hand, the replacement for the non-relativistic Schrödinger equation\[
i\hbar\frac{\partial}{\partial t} \psi = -\frac{\hbar^2}{2m} \Delta \psi + V(x) \psi
\] that he derived and that describes the evolution of one-particle states was\[
i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta} \psi + V(x) \psi
\] Because the square root has a neverending Taylor expansion, the function of the Laplace operator is a terribly non-local "integral operator" acting on the wave function \(\psi(x,y,z,t)\) in the position representation. So this equation for one particle, even though it follows from the Klein-Gordon quantum field theory, doesn't have the nice and local Klein-Gordon form. It isn't pretty and it isn't fundamental. If you wrote this equation in isolation, you should be worried that the resulting theory isn't relativistic because relativity implies locality and this equation allows the localized wave function packet to spread superluminally!
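This superluminal leakage is easy to see numerically. The following sketch (my illustration, not from the original post, with assumed parameters and units \(\hbar = c = 1\)) evolves a localized packet with the square-root Hamiltonian applied in momentum space and measures the probability that ends up outside the light cone.

```python
import numpy as np

# Assumed setup: hbar = c = 1, mass m = 1, Gaussian packet of width 0.1.
N, L, m, t = 4096, 200.0, 1.0, 1.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2 / (2 * 0.1**2))                  # localized near x = 0
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2))             # normalize on the grid

# Apply H = sqrt(k^2 + m^2) in momentum space: psi(t) = exp(-i H t) psi(0)
psi_t = np.fft.ifft(np.exp(-1j * np.sqrt(k**2 + m**2) * t) * np.fft.fft(psi0))

outside = np.sum(np.abs(psi_t[np.abs(x) > t])**2)    # weight beyond |x| = ct
print(f"probability outside the light cone at t={t}: {outside:.3e}")  # small but nonzero
```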
What the statements mean is that if you want to use some nice and local equation for a wave function for one particle – i.e. if you literally want to replace Schrödinger's equation by the similar Klein-Gordon equation – you won't find a way to construct (in terms of local functions of derivatives etc.) the probability current and density etc. that would have the desired positivity properties etc. And this statement is just true and important!
If you want to return to simple, fundamental, justifiable, beautiful equations, you can indeed use the Klein-Gordon, Dirac, Maxwell, and other equations. But you must appreciate that they're equations for (field) operators, not for wave functions.
This statement is important because it's not just a mathematical one. It's highly physical, too. In particular, if you consider any relativistic quantum mechanical theory of particles – quantum field theory or something grander, like string theory – it's unavoidable that when you confine particles to distances shorter than the Compton wavelength \(\hbar / mc\) of that particle, you will unavoidably have enough energy so that particle-antiparticle pairs will start to be produced at nonzero probabilities. And in relativity, it's normal for a particle to move at a speed comparable to the speed of light, and then its wavelength is comparable to the Compton wavelength. You can't really trust the one-particle theory at distances comparable to its normal de Broglie wavelength! So the theory is wrong in some very strong sense.
The antiparticles (which are the same as the original particle in the real Klein-Gordon case, just to be sure) inevitably follow from relativity combined with quantum mechanics, and so does the pair production of particles and antiparticles. This physical statement has lots of nearly equivalent mathematical manifestations. For example, local observables in a relativistic quantum theory have to be constructed out of quantum fields. So the 1-particle Hilbert space doesn't have any truly local observables: You can't construct the Klein-Gordon field \(\Phi(x,y,z,t)\) out of operators acting on the 1-particle Hilbert space because the latter operators never change the number of particles while \(\Phi(x,y,z,t)\) does (by one or minus one – it's a combination of creation and annihilation operators). In fact, you can't construct the bilinears in \(\Phi\) and/or its derivatives, either, because while those operators in QFT contain some terms that preserve the number of particles, they also contain equally important terms that change the number of particles by two (particle-antiparticle pair production or pair annihilation) and those are equally important for obtaining the right commutators and other things. The mixing of creation operators for particles and the annihilation operators for antiparticles is absolutely unavoidable if you want to define observables at points (or regions smaller than the Compton wavelength).
There's one more statement that Distler made and that is really wrong. Distler wrote that the problems only begin when you start to consider interactions – and from the context, it's clear that he meant interactions involving several quanta of quantum fields, several particles in the quantum field theory sense. But that's not true.
Problems of "one-particle relativistic quantum mechanics" already appear if you consider the behavior of the single particle in external classical fields. Just squeeze a Klein-Gordon particle – e.g. a Higgs boson – in between two metallic plates whose distance is sent to zero. Will it make sense? No, as I mentioned, the walls start to produce particle-antiparticle quanta in general. Time-dependent Hamiltonians lead to particle production, if you wish. Similarly, if you place these particles in any external classical field, the actual Klein-Gordon field may react in a way to create particle pairs.
So the truncation of the Hilbert space of a quantum field theory to the one-particle subspace is inconsistent not only if you consider interactions of particles in the usual Feynman diagrammatic sense – but even if you consider the behavior of the particle in external classical fields. Whatever you try to do with the particle that goes beyond the stupid simple single free-particle Hamiltonian will force you to acknowledge that the truncated one-particle theory is no good.
We want to do something more with the theory than just write an unmotivated non-local Hamiltonian of the kind \(H\sim \sqrt{m^2+p^2}\) if I use \(\hbar=c=1\) units here. And as soon as we do anything else – justify this ugly and seemingly non-local (and therefore seemingly relativity-violating) Hamiltonian by an elegant theory, study particle interactions, study the behavior of one particle in external classical fields – we just need to switch to the full-blown quantum field theory, otherwise our musings will be inconsistent.
One extra comment. I mentioned that the non-local differential operator allows the wave packet to spread superluminally. How is it possible that such a thing results from a relativistic theory? Well, quantum field theory has no problem with that because when you do any doable measurement, the processes in which a particle spreads in the middle get combined with processes involving antiparticles. When you calculate the "strength of influences spreading superluminally", some Green's functions – which are nonzero for spacelike separations – will combine to the "commutator correlation function" which vanishes at spacelike separation. So the inseparable presence of antiparticles will save the locality for you. The truncation to particles-only (without antiparticles) would indeed violate the locality required by relativity, as long as you could experimentally verify it (you need at least some interactions of that particle with something else for that).
While Jacques is right about the possibility to truncate the Hilbert space of quantum field theories to the one-particle subspaces, he's morally wrong about all these big statements – and some of his statements are literally wrong, too. At least morally, the lore that drives him up the wall is right and there are ways to formulate this lore so that it is both literally true and important, too.
So students in Austin are encouraged to actively ignore their grumpy instructor's tirades against the quantum field theory lore and even more encouraged to understand in what sense the lore is true.
As I explain in the comments, many quantum field theory textbooks have wonderful explanations – usually at the very beginning – of the wisdom that Jacques Distler seems to misunderstand, namely why quantum fields and the mixing of sectors with different numbers of particles is unavoidable for consistency of quantum mechanics with special relativity.
The 2008 textbook by my adviser Tom Banks starts the explanation on Page 3, in the section "Why quantum field theory?" It says that the probability amplitude for a particle emission at spacetime point \(x\) and its absorption at point \(y\) is unavoidably nonzero for spacelike separations; because it would be nonzero for only one of the two time orderings of \(x,y\), and the ordering of spacelike-separated events isn't Lorentz-invariant, Lorentz invariance would be broken, and one must actually demand that only amplitudes where both orderings are summed over are allowed. In other words, as argued on page 5, the only known consistent way to resolve this clash with Lorentz invariance is to postulate that every emission source must also be able to act as an absorption sink and vice versa. When both terms are combined, the sum is still nonzero in the spacelike region but has no brutal discontinuities when the ordering gets reversed.
Also, when the particle carries charges, the emission and absorption in the two related processes must involve particles of opposite charges and one predicts (and Dirac predicted) the existence of antiparticles that are needed for things to work.
Weinberg QFT Volume 1 explains the negative probabilities and energies of the relativistic equations naively used instead of the non-relativistic Schrödinger equation on pages 7, 12, 15... Read it for a while. It's OK but, in my opinion, much less deep than Tom's presentation.
Peskin's and Schroeder's textbook on quantum field theory discusses the non-vanishing of the amplitudes in the spacelike region on page 14 and pages 27-28 discuss that the actual influence of one measurement on another is measured by the commutator of two field operators. And that vanishes for spacelike separations – again, because two processes that are opposite to each other are subtracted.
Without the mixing of creation operators (for particles) and annihilation operators (for antiparticles), you just can't define any observables that would belong to a point or a region and that would behave relativistically (respected the independence of observables that are spacelike separated). Quantum fields are the only known way to avoid this conflict between quantum mechanics and relativity. They are unavoidably superpositions of positive- and negative-energy solutions, and therefore are expanded in sums of creation and annihilation operators. That's why all local discussions make it necessary to allow emission and absorption at the same time – and, consequently, the combination of quantum mechanics and relativity makes it necessary to consider the whole Fock space with a variable number of particles. The one-particle truncation is inconsistent with relativistic dynamics such as time-dependent interactions, emission, or absorption.
In the mathematical language, fields and their functions are necessary for any local observables in relativistic quantum mechanical theories. They always contain terms that change the number of particles – except for the trivial constant operator \(1\). In the physical language, relativity and quantum mechanics simultaneously imply that emission and absorption are linked, antiparticle exists, and scattering amplitudes for particles and antiparticles have to obey identities such as the crossing symmetry.
The teaching of a quantum field theory course could be a good opportunity for Jacques to learn this basic stuff that is often presented on pages such as 3,5,7,12,14... of introductory textbooks.
86b9483c74d49935 | September 7, 2012
Animals behave. Of course, one could say.
Yet, why do we feel a certain naturalness here, in this relation between the cat as an observed and classified animal on the one side and the language game “behavior” on the other? Why don’t we say, for instance, that the animal happens? Or, likewise, that it is moved by its atoms? To which conditions does the language game “behavior” respond?
As strange as this might look, it is actually astonishing that physicists readily attribute the quality of "behavior" to their dog or their cat, although they will rarely attribute ideas to them (for journeys or the like). For physicists usually claim that the whole world can be explained in terms of the physical laws that govern the movement of atoms (e.g. [1]). Even physicists, it seems, exhibit some dualism in their concepts when it comes to animals. Yet physicists claimed for a long period of time, actually into the mid-1980s, that the behavioral sciences could not count as a "science" at all, despite the fact that Lorenz and Tinbergen won the Nobel Prize in Physiology or Medicine in 1973.
The difficulties physicists obviously suffer from are induced by a single entity: complexity. Here we refer to the notion of complexity that we developed earlier, which essentially is built from the following 5 elements.
• – Flux of entropy, responsible for dissipation;
• – Antagonistic forces, leading to emergent patterns;
• – Standardization, mandatory for temporal persistence on the level of basic mechanisms as well as for selection processes;
• – Compartmentalization, together with left-overs leading to spatio-temporal persistence as selection;
• – Self-referential hypercycles, leading to sustained 2nd order complexity with regard to the relation of the whole to its parts.
Any setup for which we can identify this set of elements leads to probabilistic patterns that are organized on several levels. In other words, these conditioning elements are necessary and sufficient to "explain" complexity. In behavior, the sequence of patterns, and the sequence of simpler elements within patterns, is far from randomly arranged; yet it becomes more and more difficult to predict a particular pattern the higher its position in the stack of nested patterns, that is, the higher its level of integration. Almost the same could be said about the observable changes in complex systems.
Dealing with behavior is thus a non-trivial task. There are no "laws" that would be mapped somehow into the animal such that an a priori defined mathematical form would suffice for a description of the pattern, or of the animal as a whole. In the behavioral sciences, one first has to fix a catalog of behavioral elements, and only by reference to this catalog can we start to observe in a way that allows for comparisons with other observations. I deliberately avoid the concept of "reproducibility" here. How do we know about that catalog, often called a behavioral taxonomy? The answer is that we can't know in the beginning. To reduce observation completely to the physical level is not a viable alternative either. Observing a particular species, and often even a particular social group or individual, improves over time, yet we can't speak about that improvement. There is a certain notion of "individual" culture here that develops between the "human" observer and the behaving system, the animal. The written part of this culture precipitates in the said catalog, but there remains a large part of the habit of observing that can't be described without performing it. Observations on animals are never reproducible in the same sense as is possible with physical entities. The ultimate reason is that the latter are devoid of individuality.
A behavioral scientist may work on quite different levels. She could investigate some characteristics of behavior in relation to the level of energy consumption, or to differential reproductive success. On this level, one would hardly go into the details of the form of behavior. Quite different from this case are those investigations that address the form of the behavior itself. The form becomes an important target of the investigation if the scientist is interested in the differential social dynamics of animals belonging to different groups, populations or species. In physics, there is no form other than the mathematical. Electrons are (treated in) the same (way) by physicists all over the world, even across the whole universe. Try this with cats… You will lose the cat-ness.
It is quite clear that the social dynamics can't be addressed by means of mere frequencies of certain simple behavioral elements, such as scratching, running or even sniffing at other animals. There might be differences, but we won't understand much of the animal this way, particularly not with regard to the flow of information in which the animal engages.
The big question that arose during the 1970s and the 1980s was thus: how can behavior, its structure and its patterning, be addressed without falling back on a physicalist reduction?
Some intriguing answers have been given in the respective discourse since the beginning of the 1950s, though only a few people recognized the importance of the form. For instance, to understand wolves, Moran and Fentress [2] used the concept of choreography to get a descriptional grip on the quite complicated patterns. Colmenares, in his work on baboons, most interestingly introduced the notion of the play to describe the behavior in a group of baboons. He distinguished more than 80 types of social games as arrangements of "moves" that span across space and time in a complicated way; this behavioral wealth made it all but impossible to analyze the data at that time. The notion of the social game is so interesting because it is quite close to the concept of the language game.
Doing science means translating observations into numbers. Unfortunately, in the behavioral sciences this translation is rather difficult and in itself only little standardized (so far) despite many attempts, precisely because behavior is the observable output of a deeply integrated complex system, for instance the animal. Whenever we investigate behavior, we have to choose carefully the appropriate level of investigation. Yet, in order to understand the animal, we cannot simply reduce the animal to a certain level of integration. We should map the fact of integration itself.
There is a dominant methodological aspect in the description of behavior that differs from that in sciences closer to physics. In the behavioral sciences one can invent new methods by inventing new purposes, something that is not possible in classical physics or engineering, at least if matter is not taken as something that behaves. Anyway, any method for creating formal descriptions invokes mathematics.
Here it becomes difficult, because mathematics does not provide us with any means to deal with emergence. We can't, of course, blame mathematics for that. It is not possible in principle to map emergence onto an a priori defined set of symbols and operations.
The only way to approximate an appropriate approach is by a probabilistic methodology that also provides the means to distinguish various levels of integration. The first half of this program is easy to accomplish, the second less so. For emergence is a creative process; it induces the necessity of interpretation as a constructive principle. Precisely this has been digested by behavioral science into the practice of the behavioral catalog.
1. This Essay
Well, here in this essay I am not mainly interested in the behavior of animals or the sciences dealing with the behavior of animals. Our intention was just to give an illustration of the problematic field that is provoked by the "fact" of the animals and their "behavior". The most salient issue in this problematic field is the irreducibility, in turn caused by the complexity and the patterning resulting from it. The second important part of this field is given by the methodological answers to these concerns, namely the structured probabilistic approach, which responds appropriately to the serial characteristics of the patterns, that is, to the transitional consistency of the observed entity as well as of the observational recordings.
The first of these issues—irreducibility—we need not discuss in detail here. We did this before, in a previous essay and in several locations. We just have to remember that empiricist reduction means attempting a sufficient description by dissecting the entity into its parts, thereby neglecting the circumstances, the dependency on the context and the embedding into the fabric of relations that is established by other instances. In physics, there is no such fabric, there are just anonymous fields; there is no dependency on the context, hence form is not a topic in physics. As soon as form becomes an issue, we leave physics, entering either chemistry or biology. As said, we won't go into further details about that. Here, we will deal mainly with the second part, yet with regard to two quite different use cases.
We will approach these cases, the empirical treatment of "observations" in computational linguistics and in urbanism, first from the methodological perspective, as both share certain conditions with the "analysis" of animal behavior. In chapter 8 we will give more pronounced reasons for this alignment, which at first sight may seem, well, a bit adventurous. The comparative approach, through its methodological arguments, will lead us to the emphasis on what we call the "behavioral turn". The text and the city are regarded as the behaving entities, rather than the humans dealing with them.
2. The Inversion
Given the two main conceptual landmarks mentioned above—irreducibility and the structured probabilistic approach—that establish the problematic field of behavior, we now can do something exciting. We take the concept and its conditions, detach it from its biological origins and apply it to other entities where we meet the same or rather similar conditions. In other words, we practice a differential as Deleuze understood it [3]. So, we have to spend a few moments for dealing with these conditions.
Slightly re-arranged and a bit more abstract than is the case in the behavioral sciences, these conditions are:
• – There are patterns that appear in various forms, despite being made from the same elements.
• – The elements that contribute to the patterns are structurally different.
• – The elements are not all plainly visible; some, most or even the most important are only implied.
• – Patterns are arranged in patterns, implying that patterns are also elements, despite the fact that there is no fixed form for them.
• – The arrangement of elements and patterns into other patterns is dependent on the context, which in turn can be described only in probabilistic terms.
• – Patterns can be classified into types or families; the classification, however, is itself non-trivial, that is, it is not given in advance.
• – The context is given by variable internal and external influences, which imply a certain persistence of the embedding of the observed entity into its spatial, temporal and relational neighborhood.
• – There is a significant symbolic "dimension" in the observation, meaning that the patterns we observe occur in a sequence space over an alphabet of primitives, not just in numerical space. This symbolistic account is invoked by the complexity of the entity itself. Actually, the difference between symbolic and numerical sequences and patterns is much less than categorical, as we will see. Yet, it makes a large difference whether we include or exclude the methodological possibility of symbolic elements in the observation.
Whenever we meet these conditions, we can infer the presence of the above-mentioned problematic field, which is mainly given by irreducibility and—as its match in the methodological domain—the practice of a structured probabilistic approach. This list provides us with an extensional circumscription of abstract behavior.
A slightly different route into this problematic field draws on the concept of complexity. Complexity, as we understand it by means of the 5 elements provided above (for details see the full essay on this subject), can itself be inferred by checking for the presence of the constitutive elements. Once we see antagonisms, compartments and standardization, we can expect emergence and sustained complexity, which means that the entity is not reducible and, in turn, that a particular methodological approach must be chosen.
We can also clearly state what should not be regarded as a member of this field. The most salient exclusion is the neglect of individuality. The second, now in the methodological domain, is the destruction of relationality, as is most easily accomplished by referring to raw frequency statistics. It should be obvious that destroying the serial context in an early step of the methodological mapping from observation to number also destroys any possibility of understanding the particularity of the observed entity. The resulting picture will not only be coarse; most probably it will also be utterly wrong, and, even worse, there will be no chance to recognize this departure into an area that is free of any sense.
3. The Targets
At the time of writing this essay, there are three domains that suffer most from the reductionist approach. Well, two and a half, maybe, as the third, genetics, is on the way to overcoming the naïve physicalism of former days.
This does not hold for the other two areas, urbanism and computational linguistics, at least as far as it is relevant for text mining and information retrieval1. The dynamics in the respective communities are of course quite complicated, actually too complicated to achieve a well-balanced point of view here in this short essay. Hence, I ask the reader to excuse the inevitable coarseness of treating those domains as if they were homogeneous. Yet I think that in both areas the mainstream is seriously suffering from a misunderstood scientism. In some ways, people there, strangely enough, behave more positivistically than researchers in the natural sciences.
In other words, we follow the question of how to improve the methodology in those two fields, urbanism and the computerized treatment of textual data. It is clear that the question about methodology implies a particular theoretical shift. This shift we would like to call the "behavioral turn". Among other changes, the "behavioral turn" as we construct it allows for overcoming the positivist separation between observer and observed without sacrificing the possibility of reasonable empirical modeling.2
Before we argue in a more elaborate manner about this proposed turn in relation to textual data and urbanism, we first would like to accomplish two things. First, we briefly introduce two methodological concepts that deliberately try to cover the context of events, where those events are conceived as part of a series that always also develops into a kind of network of relations. Thus, we avoid conceiving of events as a series of separated points.
Secondly, we will discuss the current mainstream methodology in the two fields that we are going to focus on here. I think that the investigation of the assumptions of these approaches, which often remain hidden, sheds some light on the arguments that support the reasonability of the "behavioral turn".
4. Methodology
The big question remaining to deal with is thus: how to deal with the observations that we can make in and about our targets, the text or the city?
There is a clear starting point for the selection of any method that could be considered appropriate. The method should inherently respond to the seriality of the basic signal. A well-known method of choice for symbolic sequences is the Markov chain; other important ones are random contexts and random graphs. In the domain of numerical sequences, wavelets are the most powerful way to represent various aspects of a signal at once.
Markov Processes
A Markov chain is the outcome of applying the theory of Markov processes to a symbolic sequence. A Markov process is a neat description of the transitional order in a sequence. We may also say that it describes the conditional probabilities for the transitions between any subset of elements. Well, in this generality it is difficult to apply. Let us thus start with the simplest form, the Markov process of 1st order.
A 1st order Markov process describes just and only all pairwise transitions that are possible for a given "alphabet" of discrete entries (symbols). These transitions can be arranged in a so-called transition matrix if we follow the convention of using the preceding part of the transitional pair as the row header and the succeeding part as the column header. If a certain transition occurs, we enter a tick into the respective cell, given by the address row × column, which derives from the pair prec -> succ. That's all. At least for the moment.
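As a minimal sketch of this bookkeeping (an added illustration; the toy sequence and the function name are mine):

```python
from collections import Counter

def transition_matrix(seq):
    """Count all pairwise transitions prec -> succ in a symbolic sequence."""
    alphabet = sorted(set(seq))
    counts = Counter(zip(seq, seq[1:]))    # adjacent (prec, succ) pairs
    return alphabet, [[counts[(r, c)] for c in alphabet] for r in alphabet]

alphabet, T = transition_matrix("abcabcabd")
print(alphabet)    # ['a', 'b', 'c', 'd']
for row in T:      # rows = preceding symbol, columns = succeeding symbol
    print(row)
```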
Such a table captures in some sense the transitional structure of the observed sequence. Of course, it captures only a simple aspect, since the next pair does not know anything about the previous pair. A 1st order Markov process is thus said to have no memory. Yet, it would be a drastic misunderstanding to generalize the absence of memory to any kind of Markov process. Actually, Markov processes can precisely be used to investigate the “memories” in a sequence, as we will see in a moment.
Anyway, on any such transition table we can do smart statistics, for instance to identify transitions that are salient for their exceptionally high or low frequency. Such reasoning takes into account the marginal frequencies of the table and is akin to correspondence analysis. Van Hooff developed this "adjusted residual method" and applied it with great success in the analysis of observational data on chimpanzees [4][5].
These residuals are taken against a null-model, which in this case is the plain distribution. In other words, the reasoning is the same as always in statistics: establish a suitable ratio of observed to expected, and then determine the reliability of a selection that is based on that ratio. In the case of transition matrices, the null-model states that all transitions occur with the same frequency. This is, of course, simplifying, but it is also simple to calculate. There are some assumptions in the whole procedure that are worth mentioning.
The most important assumption of the null-model is that all elements used to set up the transition matrix are independent of each other, except for their 1st order dependency, of course. This also means that the null-model assumes equal weights for the elements of the sequence. It is quite obvious that we should assume this only at the beginning of the analysis. The third important assumption is that the process is stationary, meaning that the kind and strength of the 1st order dependencies do not change over the entire observed sequence.
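The computation itself is compact. Below is a sketch assuming the standard adjusted residual of contingency-table analysis (observed minus expected, scaled by the standard error estimated from the marginals); I do not claim this matches van Hooff's formulation in every detail.

```python
import numpy as np

def adjusted_residuals(T):
    """Adjusted residuals of a transition (contingency) table against the
    null-model of independence given the marginals."""
    T = np.asarray(T, dtype=float)
    n = T.sum()
    r = T.sum(axis=1, keepdims=True)   # row marginals
    c = T.sum(axis=0, keepdims=True)   # column marginals
    e = r @ c / n                      # expected counts under the null-model
    return (T - e) / np.sqrt(e * (1 - r / n) * (1 - c / n))

# entries well above ~2 mark transitions occurring much more often than
# expected; entries well below ~-2 mark suppressed transitions
print(adjusted_residuals([[0, 3, 0], [0, 0, 2], [2, 0, 0]]))
```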
Yet, nothing forces us to stick to just the 1st order Markov process, or to apply it globally. A 2nd order Markov process could be formulated which would map all transitions x(i) -> x(i+2). We may also formulate a dense process across several orders, just by overlaying all orders from 1 to n into a single transition matrix.
Proceeding this way, we end up with an ensemble of transitional models. Such an ensemble is suitable for the comparative probabilistic investigation of the memory structure of a symbolic sequence that is being produced by a complex system. Matrices can be compared ("differenced") regarding their density structure, revealing even subtle ties between elements across several steps in the sequence. Provided the observed sequence is long enough, single transition matrices as well as ensembles thereof can be resampled on parts of sequences in order to partition the global sequence, that is, to identify locally stable parts of the overall process.
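A sketch of this ensemble idea, assuming that a k-th order table simply counts the lagged pairs x(i) -> x(i+k); the "dense" overlay described above would just sum the tables across orders:

```python
from collections import Counter

def lagged_transitions(seq, k):
    """Transition counts x(i) -> x(i+k): a k-th order transition table."""
    return Counter(zip(seq, seq[k:]))

def ensemble(seq, max_order):
    """One transition table per order 1..max_order."""
    return {k: lagged_transitions(seq, k) for k in range(1, max_order + 1)}

for order, table in ensemble("abcabcabd", 3).items():
    print(order, dict(table))
```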
Here you may well think that this sounds like a complicated "work-around" for a Hidden Markov Model (HMM). Yet, although an HMM is more general than the transition-matrix perspective in some respects, it is also less rich. In an HMM, the multiplicity is—well—hidden. It reduces the potential complexity of sequential data into a single model, again with the claim of global validity. Thus, HMMs are somehow more suitable the closer we are to physics, e.g. in speech recognition. But even there their limitation is quite obvious.
From the domain of ecology we can import another trick for dealing with the transitional structure. In ecosystems we can observe so-called succession. Certain arrangements of species and their abundances follow one another rather regularly, yet probabilistically, often heading towards some stable final "state". Given limited observations of such transitions, how can we know about the final state? Using the transition matrix, the answer can be found simply by a two-fold operation of multiplying the matrix by itself and intermittent filtering by renormalization. This procedure acts as a frequency-independent filter. It helps to avoid type-II errors when applying the adjusted residuals method, that is, transitions with a weak probability will be less likely to be dismissed as irrelevant.
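Here is a minimal sketch of that two-fold operation, assuming an ergodic (irreducible, aperiodic) transition table; repeated squaring with intermittent row renormalization drives every row toward the stationary profile:

```python
import numpy as np

def stationary_profile(T, steps=20):
    """Square and row-renormalize a transition table repeatedly, exposing the
    long-run transition structure (assumes an ergodic chain)."""
    P = np.asarray(T, dtype=float)
    P /= P.sum(axis=1, keepdims=True)            # rows as probabilities
    for _ in range(steps):
        P = P @ P
        P /= P.sum(axis=1, keepdims=True)        # intermittent renormalization
    return P

print(stationary_profile([[4, 1, 0], [1, 3, 1], [0, 2, 3]]))
# every row converges to the same stationary distribution: the final "state"
```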
The method of Markov processes is powerful, but it suffers from a serious problem. This problem is introduced by the necessity to symbolize certain qualities of the signal in advance of its use in modeling.
We can't use Markov processes directly on the raw textual data. Doing so would trap us in the symbolistic fallacy. We would either ascribe a meaning to the symbol itself—which would result in a violation of the primacy of interpretation—or we would conflate the appearance of a symbol with its relevance, which would constitute a methodological mistake.
The way out of this situation is provided by a consequent probabilization. Generally we may well say that probabilization plays the same role for the quantitative sciences as the linguistic turn did for philosophy. Yet, it is still an attitude that is largely neglected as a dedicated technique almost everywhere in science. (For an example application of probabilization with regard to evolutionary theory, see this.)
Instead of taking symbols as if they were simply found "out there", we treat them as the outcome of an abstract experiment, that is, as a random variable. Random variables establish themselves not as dual concepts, as 1 or 0, to be or not to be; they establish themselves as a probability distribution. Such a distribution contains potentially an infinite number of discretizations. Hence, probabilistic methods are always more general than those which rely on "given" symbols.
Kohonen et al. proposed a simple way to establish a random context [6]. The step from symbolic crispness to a numerical representation is not trivial, though. We need a double-articulated entity that is "at home" in both domains. This entity is a high-dimensional random fingerprint. Such a fingerprint consists simply of a large number, well above 100, of random values from the interval [0..1]. According to the lemma of Hecht-Nielsen [7], any two such vectors are approximately orthogonal to each other. In other words, it is a name expressed by numbers.
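A quick numerical check of the near-orthogonality claim (a sketch added here, not from the original text): note that the lemma holds for zero-mean random vectors, so fingerprints drawn from [0..1] are centered before comparison.

```python
import numpy as np

# Near-orthogonality holds for zero-mean vectors, so the [0..1] fingerprints
# are centered before comparison.
rng = np.random.default_rng(0)
F = rng.random((1000, 300)) - 0.5                 # 1000 fingerprints, d = 300
F /= np.linalg.norm(F, axis=1, keepdims=True)
G = F @ F.T                                       # pairwise cosine similarities
off = G[~np.eye(len(G), dtype=bool)]
print(off.mean(), np.abs(off).max())              # mean ~ 0, typical |cos| ~ 1/sqrt(300)
```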
After a recoding of all symbols in a text into their random fingerprints, it is easy to establish probabilistic distributions of the neighborhood of any word. The result is a random context, also called a random graph. The basic trick to accomplish such a distribution is to select a certain fixed size for the neighborhood—say five or seven positions in total—and then always to arrange the word of interest at a certain position, for instance the middle position.
We apply this procedure to all words in a text, or to any symbolic series. Doing so, we get a collection of overlapping random contexts. The final step then is a clustering of the vectors according to their similarity.
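The following sketch shows one way such overlapping contexts could be assembled; the window size, the zero-padding at the borders of the text and all names are my assumptions, not Kohonen’s code:

```python
import numpy as np

def random_contexts(tokens, fingerprint, window=5):
    """One row per token: the fingerprints of a fixed-size neighborhood,
    concatenated, with the token of interest pinned to the middle slot."""
    half = window // 2
    dim = len(next(iter(fingerprint.values())))
    rows = []
    for i in range(len(tokens)):
        ctx = np.zeros(window * dim)
        for k in range(-half, half + 1):
            j = i + k
            if 0 <= j < len(tokens):  # positions beyond the text borders stay zero
                slot = k + half
                ctx[slot * dim:(slot + 1) * dim] = fingerprint[tokens[j]]
        rows.append(ctx)
    return np.vstack(rows)  # overlapping contexts, ready for clustering, e.g. by a SOM

tokens = "to be or not to be".split()
rng = np.random.default_rng(1)
fp = {w: rng.random(50) for w in set(tokens)}  # toy fingerprints
print(random_contexts(tokens, fp).shape)       # (6, 250)
```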
It is quite obvious that this procedure, as proposed by Kohonen, rests on strong assumptions, despite its turn to probabilization. The problem is the fixed order: in his implementation the order is independent of context. Thus his approach is still limited in the same way as the n-gram approach (see chp.5.3 below). Yet, sometimes we meet strong inversions and extensions of relevant dependencies between words. Linguists speak of islands with regard to wh*-phrases. Anaphors are another example. Chomsky criticized the approach of fixed-size contexts very early.
Yet, there is no necessity to limit the methodology to fixed-size contexts, or to symmetrical instances of probabilistic contexts. Of course, this will corrupt the tabularity of the data representation: many rows will differ in length, and there is (absolutely) no justification for enforcing a proper table by filling “missing values” into the “missing” cells of the table.
Fortunately, there is another (probabilistic) technique that can be used to arrive at a proper table without distorting the content by adding missing values. This technique is random projection, first identified by Johnson & Lindenstrauss (1984), which in the case of free-sized contexts has to be applied in an adaptive manner (see [8] or [9] for a more recent overview). Usually, a source (n*p) matrix (n=rows, p=columns=dimensions) is multiplied by a (p*k) random matrix whose entries follow a Gaussian distribution, resulting in a target matrix of only k dimensions and n rows. This way a matrix of 10000+ columns can be projected into one of only 100 columns without losing much information. Yet, using the Lemma of Hecht-Nielsen, we can also compress any of the rows of a matrix individually. Since the random vectors are approximately orthogonal to each other, we won’t introduce spurious information across the data vectors that are going to be fed into the SOM. This stepwise operation becomes quite important for large collections of documents, since in that case we have to adopt incremental learning.
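A compact sketch of the basic, non-adaptive version of the projection; the sizes, the seed and the 1/sqrt(k) scaling (one common convention) are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 10000, 100

X = rng.random((n, p))                    # source matrix: n rows, p dimensions
R = rng.normal(size=(p, k)) / np.sqrt(k)  # Gaussian random matrix
Y = X @ R                                 # target matrix: n rows, k dimensions

# pairwise distances are approximately preserved:
print(np.linalg.norm(X[0] - X[1]))
print(np.linalg.norm(Y[0] - Y[1]))
```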
Thus we approach, slowly but steadily, the generalized probabilistic context that we described earlier. The proposal is simply that in dealing with texts by means of computers we have to apply precisely the most general notion of context, one that is devoid of structural preoccupations such as we meet them e.g. in the case of n-grams or Markov processes.
5. Computers Dealing with Text
Currently, so-called “text mining” is a hot topic. More and more of human communication is supported by digitally based media and technologies, hence more and more texts are accessible to computers without much effort. People try to use textual data from digital environments, for instance, to do sentiment analysis about companies, stocks, or persons, mainly in the context of marketing. The craziness there is that they pretend to classify a text’s sentiment without understanding it, based more or less on the frequency of scattered symbols.
The label “text mining” is reminiscent of “data mining”; yet, the structures of the two endeavors are drastically different. In data mining one is always interested in the relevant variables in order to build a sparse model that could even be understood by human clients. The model in turn is used to optimize some kind of process from which the data for modeling has been extracted.
In the following we will describe some techniques, methods and attitudes that are highly unsuitable for the treatment of textual “data”, despite the fact that they are widely used.
Fault 1 : Objectivation
The most important difference between the two flavors of “digital mining” concerns, however, the status of the “data”. In data mining, one deals with measurements that are arranged in a table. This tabular form is only possible on the basis of a preceding symbolization, which additionally is strictly standardized in advance of the measurement.
In text mining this is not possible. There are no “explanatory” variables that could be weighted. Text mining thus just means finding a reasonable selection of texts in response to a “query”. For textual data it is not possible to give any criterion for how to look at a text, how to select a suitable reference corpus for determining any property of the text, or simply how to compare it to other texts before its interpretation. There are no symbols, no criteria that could be filled into a table. And most significantly, there is no target that could be found “in the data”.
It is devoid of any sense to try to optimize a selection procedure by means of a precision/recall ratio. This would mean that the meaning of a text could be determined objectively before any interpretation, or, likewise, that the interpretation of a text is standardizable up to a formula. Neither is possible; claiming otherwise is ridiculous.
People responded to these facts with a fierce endeavor, which ironically is called “ontology”, or even “semantic web”. Yet, neither will the web ever become “semantic” nor is database-based “ontology” a reasonable strategy (except for extremely standardized tasks). The idea in both cases is to determine the meaning of an entity before its actual interpretation. This of course is utter nonsense, and the fact that it is nonsense is also the reason why the so-called “semantic web” never started to work. These guys should really do more philosophy.
Fault 2 : Thinking in Frequencies
A popular family of measures for describing the difference between texts consists of variants of the so-called tf-idf measure. “tf” means “term frequency” and describes the normalized frequency of a term within a document. “idf” means “inverse document frequency”, which, actually, refers to the inverse of the frequency of a word across all documents in a corpus.
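For reference, the bare arithmetic looks like this; the three toy documents and the particular smoothing of the idf (the variant used by sklearn) are my assumptions:

```python
import math
from collections import Counter

docs = [
    "the brown dog".split(),
    "the lilac cow likes chocolate".split(),
    "the black and white cow".split(),
]

def tf_idf(term, doc, docs):
    tf = Counter(doc)[term] / len(doc)              # normalized term frequency
    df = sum(term in d for d in docs)               # documents containing the term
    idf = math.log((1 + len(docs)) / (1 + df)) + 1  # smoothed inverse document frequency
    return tf * idf

print(tf_idf("cow", docs[1], docs))  # rarer term, higher weight
print(tf_idf("the", docs[1], docs))  # occurs everywhere, lower weight
```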
The frequency of a term, even its however differentiated frequency, can hardly be taken as the relevance of that term given a particular query. To cite the example from the respective entry in Wikipedia: what is “relevant” in selecting a document by means of the query “the brown cow”? Sticking to terms makes sense if and only if we accept an apriori contract about the strict limitation to the level of the terms. Yet, this has nothing to do with meaning. Absolutely nothing. It is comparing pure graphemes, not even symbols.
Even if it were related to meaning, it would be the wrong method. Simply think of a text that contains three chapters: chapter one about brown dogs, chapter two about the relation of (lilac) cows and chocolate, chapter three about black & white cows. There is no phrase about a brown cow in the whole document, yet it would certainly be selected as highly significant by the search engine.
This example nicely highlights another issue. The above-mentioned hypothetical text could nevertheless be highly relevant, yet only at the moment the user sees it, triggering some idea that before was not even on the radar. Quite obviously, even though the search would probably have been phrased differently, the fact remains that the meaning is neither in the ontology nor in the frequency, and also not in the text as such—before the actual interpretation by the user. The issue becomes more serious if we consider slightly different colors that could still count as “brown”, yet with a completely different spelling. And even more so, if we take anaphoric arrangements into account.
The above-mentioned method of Markov processes helps a bit here, but of course not completely.
Astonishingly, even the inventors of the WebSom [6], probably the best model for dealing with textual data so far, commit the frequency fallacy. As input for the second-level SOM they propose a frequency histogram. Completely unnecessarily, I have to add, since the text “within” the primary SOM can easily be mapped to a Markov process, or to probabilistic contexts, of course. Interestingly, any such processing that brings us from the first to the second layer is somewhat more reminiscent of image analysis than of text analysis. We mentioned that already earlier in the essay “Waves, Words and Images”.
Fault 3 : The Symbolistic Fallacy (n-grams & co.)
Another really popular methodology for dealing with texts is n-grams. N-grams are related to Markov processes, as they also take the sequential order into account. Take for instance (again the example from Wikipedia) the sequence “to be or not to be”. The transformation into 2-grams (or bi-grams) looks like this: “to be, be or, or not, not to, to be” (items between commas), while the 3-gram transformation produces “to be or, be or not, or not to, not to be”. In this way, the n-gram can be conceived as a small extract from a transition table of order (n-1). N-grams share a particular weakness with simple Markov models, which is the failure to capture long-range dependencies in language. These can be addressed only by means of deep grammatical structures. We will return to this point in the discussion of fault No.4 (Structure as Meaning) below.
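A tiny sketch reproduces the example (my code, trivial by design):

```python
def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "to be or not to be".split()
print(ngrams(tokens, 2))  # ['to be', 'be or', 'or not', 'not to', 'to be']
print(ngrams(tokens, 3))  # ['to be or', 'be or not', 'or not to', 'not to be']
```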
The strange thing is that people drop the tabular representation, thus destroying the possibility of calculating things like adjusted residuals. Actually, n-grams are mostly just counted, which commits the first fault of thinking in frequencies, as described above.
N-grams help to build queries against databases that are robust against extensions of words, that is, prefixes, suffixes, or verb forms due to inflection. All this has, however, nothing to do with meaning. It is a basic and primitive means of making symbolic queries upon symbolic storages more robust. Nothing more.
The real problem is the starting point: taking the term as such. N-grams start with individual words that are taken blindly as symbols. Within the software computing n-grams, they are even replaced by some arbitrary hash code; i.e., the software does not see a “word”, it deals just with a chunk of bits.
In this way, using n-grams for text search commits the symbolistic fallacy, similar to ontologies, but on an even more basic level. In turn this means that the symbols are taken as “meaningful” in themselves. This results in a hefty collision with the private-language argument put forward by Wittgenstein a long time ago.
N-grams are certainly more advanced than the nonsense based on tf-idf. Their underlying intention is to reflect contexts. Nevertheless, they fail as well. The ultimate reason for the failure is the symbolistic starting point. N-grams are only a first, though far too trivial and simplistic, step into probabilization.
There is already a generalization of n-grams available, as described in published papers by Kohonen & Kaski: random graphs, based on random contexts, as we described them above. Random graphs overcome the symbolistic fallacy, especially if used together with a SOM. Well, honestly I have to say that random graphs imply the necessity of a classification device like the SOM. This should not be considered a drawback, since n-grams are often used together with Bayesian inference anyway. Bayesian methods are, however, not able to distil types from observations as SOMs are able to do. That indeed is a drawback, since in language learning the probabilistic approach necessarily must be accompanied by the concept of (linguistic) types.
Fault 4 : Structure as Meaning
The deep grammatical structure is an indispensable part of human languages. It is present from the sub-word level up to the level of rhetoric. And it gets really complicated. There is a wealth of rules, most of them to be followed rather strictly, but some of them applied only in a loose manner. Yet, all of them are rules, not laws.
Two issues come up here that are related to each other. The first one concerns the learning of a language. How do we learn a language? Wittgenstein proposed: simply by being shown how to use it.
The second issue concerns the status of models about language. Wittgenstein repeatedly mentioned that there is no possibility of a meta-language, and after all we know that Carnap’s program of a scientific language failed (completely). Thus we should be careful when applying a formalism to language, whether it is some kind of grammar or any of the advanced linguistic “rules” that we know of today (see the lexicon of linguistics for that). We have to be aware that these symbolistic models are only projective lists of observations, arranged according to some standard of a community of experts.
Linguistic models are drastically different from models in physics or any other natural science, because in linguistics there is no outer reference. (Computational) linguistics is mostly at the stage of a Babylonian list science [10], doing more tokenizing than providing useful models, comparable to biology in the 18th century.
Language is a practice. Language is a practice of human beings, equipped with a brain and embedded in a culture. In turn language itself is contributing to cultural structures and is embedded into it. There are many spatial, temporal and relational layers and compartments to distinguish. Within such arrangements, meaning happens in the course of an ongoing interpretation, which in turn is always a social situation. See Robert Brandom’s Making it Explicit as an example for an investigation of this aspect.
What we definitely have to be aware of is that projecting language onto a formalism, or subordinating language to an apriori defined or standardized symbolism (as in formal semantics), loses essentially everything language is made from and refers to. Any kind of model of a language implicitly claims that language can be detached from its practice and from its embedding without losing its main “characteristics”, its potential and its power. In short, it is the claim that structure conveys meaning.
This brings us to the question of the role of structure in language. It is a fact that humans not only understand sentences full of grammatical “mistakes”, and quite well so; in spoken language we almost always produce sentences that are full of grammatical mistakes. In fact, “mistakes” are so abundant that it becomes questionable to take them as mistakes at all. Methodologically, linguistics is thus falling back into a control science, forgetting about the role and the nature of symbolic rules such as they are established by grammar. Their nature is externalization; their role is to provide a standardization, a common basis, for performing the interpretation of sentences and utterances in a reasonable time (almost immediately) and in a more or less stable manner. The empirical “given” of a sentence alone, even a whole text alone, cannot provide enough evidence for starting with interpretation, nor even for finishing it. (Note that a sentence is never a “given”.)
Texts as well as spoken language are nothing that could be controlled. There is no outside of language that would justify that perspective. And finally, a model should allow for suitable prediction, that is, it should enable us to perform a decision. Here we meet Chomsky’s call for competence. In the case of language, a linguistic model should be able to produce language as a proof of concept. Yet, any attempt so far has failed drastically, which actually is not really a surprise. At the latest here it should become clear that the formal models of linguistics, and of course all the statistical approaches to “language processing” (another crap term from computational linguistics), are flawed in a fundamental way.
From the perspective of our interests here on the “Putnam Program” we conceive of formal properties as Putnam did in his “Meaning of ‘Meaning’”. Formal properties are just that: properties among other properties. In our modeling essay we proposed to replace the concept of properties by the concept of the assignate, in order to emphasize the active role of the modeling instance in constructing and selecting the factors. Sometimes we use formal properties of terms and phrases, sometimes not, depending on context, purpose or capability. There is neither a strict tie of formal assignates to the entity “word” or “sentence”, nor could we detach them as part of a formal approach.
Fault 5 : Grouping, Modeling and Selection
Analytic formal models are a strange thing, because such a model essentially claims that there is no necessity for a decision any more. Once the formula is there, it claims a global validity. The formula denies the necessity of taking the context into account as a structural element. It claims a perfect separation between observer and observed. The global validity also means that the weights of the input factors are constant, or even that there are no such weights. Note that the weights translate directly into the implied costs of a choice; hence formulas also claim that the costs are globally constant, or at least arranged in a smooth, differentiable space. This is of course far from reality for almost any interesting context, and certainly for the contexts of language and urbanism, both deeply related to the category of the “social”.
This basic characteristic hence limits the formal symbolic approach to physical contexts, if not just to celestial and atomic ones. Trivial contexts, so to speak. Everywhere else something rather different is necessary. This different thing is classification, as we first introduced it in our essay about modeling.
Searching for a text and considering a particular one as a “match” to the interests expressed by the search is a selection, much like any other “decision”. It introduces a notion of irreversibility. Searching itself is a difficult operation, even so difficult that it is questionable whether we should follow this pattern at all. As soon as we start to search we enter the grammatological domain of “searching”. This means that we claim the expressibility of our interests in the search statement.
This difficulty is nicely illustrated by an episode with Garry Kasparov in the context of his first battle against “Deep Blue”. Given the billions of operations the supercomputer performed, a journalist came up with the question “How do you find the correct move so fast?” Obviously, the journalist was not aware of the mechanics of that comparison. Kasparov answered: “I do not search, I just find it.” His answer is not perfectly correct, though, as he should have said “I just do it”. In a conversation we mostly “just do language”. We practice it, but we very rarely search for a word, an expression, or the like. Usually, our concerns are on the strategic level, or in terms of speech act theory, on the illocutionary level.
Thus we arrive at the intermediate result that we have some kind of non-analytical models on the one hand, and the performance of their application on the other. Our suggestion is that most of these models are situated on an abstract, orthoregulative level, and almost never on the representational level of the “arrangement” of words.
A model has a purpose, even if it is an abstract one. There are no models without purpose. The purpose is synonymous with the selection. Often, we do not explicitly formulate a purpose, we just perform selections in a consistent manner. It is this consistency in the selections that implies a purpose. The really important thing to understand is that the abstract notion of purpose is also synonymous with what we call “perspective”, or point of view.
One could mention here the analytical “models”, but those “models” are not models, because they are devoid of a purpose. Given any interesting empirical situation, everybody knows that things may look quite different depending on the “perspective” we take, or in our words, on which abstract purpose we impose on the situation. The analytic approach denies such “perspectivism”.
The strange thing now is that many people mistake the mere clustering of observations on the basis of all contributing or distinguished factors for a kind of model. Of course, that grouping will change radically if we withdraw some of the factors, keeping only a subset of all available ones. Not only does the grouping change; the achievable typology and any further generalization will also be very different. In fact, any purpose, and even the tuning of the attitude towards the risk (costs) of unsuitable decisions, changes the set of suitable factors. Nothing could highlight more clearly the nonsense of calling naïve take-it-all clustering “unsupervised modeling”. First, it is not a model. Second, any clustering algorithm or grouping procedure follows some optimality criterion, that is, it supervises, despite claiming the opposite. “Unsupervised modeling” implicitly claims that it is possible to build a suitable model by purely analytic means, without any reference to the outside at all. This is, of course, not possible. It is this claim that introduces a contradiction into the practice itself, because clustering usually means classification, which is not an analytic move at all. Due to this self-contradiction the term “unsupervised modeling” is utter nonsense. It is not only nonsense, it is even deceiving, as people get vexed by the term itself: they indeed believe that they are modeling in a suitable manner.
Now back to the treatment of texts. One of the most advanced procedures—it is a non-analytical one—is the WebSom. We described it in more detail in previous essays (here and here). Yet, as the second step Kohonen proposes clustering as a suitable means to decide about the similarity of texts. He is committing exactly the same mistake as described before. The trick, of course, is to introduce (targeted) modeling into the comparison of texts, despite the fact that there are no possible criteria apriori. What seems to be irresolvable disappears as a problem, however, if we take into account the self-referential relations of discourses, which necessarily engrave themselves into the interpreter as self-modifying structural learning and historical individuality.
6. The Statistics of Urban Environments
The Importance of Conceptual Backgrounds
There is no investigation without an implied purpose, simply because any investigation often has to perform many selections rather than just a few. One of the more influential selections that has to be performed concerns the scope of the investigation. We already met this issue above when we discussed the situation in the behavioral sciences.
Considering investigations about social entities like urban environments, architecture or language, “scope” largely refers to the status of the individual, and in turn, to the status of time that we instantiate in our investigation. Both together establish the dimension of form as an element of the space of expressibility that we choose for the investigation.
Is the individual visible at all? I mean, in the question, in the method and after applying a methodology? For instance, as soon as we ask about matters of energy, individuals disappear. They also disappear if we apply statistics to raw observations, even if at first hand we would indeed observe individuals as individuals. To retain the visibility of individuals as individuals in a set of relations, we first have to apply proper means. It is clear that any cumulative measure like those from socio-economics also causes the disappearance of the context and the individual.
If we keep the individuals alive in our method, the next question we have to ask concerns the relations between the individuals. Do we keep them or do we drop them? Finally, regarding the unfolding of the processes that result from the temporal dynamics of those relations, we have to select whether we want to keep aspects of form or not. If you think that the way a text unfolds, or the way things happen in the urban environment, is at least as important as their presence, well, in this case you would have to care about patterns.
It is rather crucial to understand that these basic selections determine the outcome of an investigation, as well as of any modeling or even theory building, as grammatological constraints. Once we have taken a decision on the scope, the problematics of that choice becomes invisible, completely transparent. This is the actual reason why choosing a reductionist approach as the first step is so questionable.
In our earlier essay about the belief system of modernism we emphasized the inevitability of the selection of a particular metaphysical stance, way before we even think about the scope of an investigation in a particular domain. In the case of modernistic thinking, from positivism to existentialism, including any shape of materialism, the core of the belief system is metaphysical independence, shaping everything all the way down to politics, methods, tools, attitudes and strategies. If you wonder whether there is an alternative to modernistic thinking, take a look at our article where we introduce the concept of the choreostemic space.
Space Syntax
In the case of “Space Syntax” the name is the program. The approach is situated in urbanism; it has been developed and is still being advocated by Bill Hillier. Originally, Hillier was a geo-scientist, which is somewhat important for following his methodology.
Put in a nutshell, the concept of space syntax claims that the description of the arrangement of free space in a built environment is necessary and sufficient for describing the quality of a city. The method of choice to describe that arrangement is statistics, either through the concept of the probabilistic density of people or through the concept of regression, relating physical characteristics of free space to the density of people. Density in turn is used to capture the effect of collective velocity vectors. If people start to slow down, walking around in different directions, density increases. Density of course also increases as a consequence of narrow passages. Yet, in this case the vectors are strongly aligned.
The spatial behavior of individuals is a result and a means of social behavior in many animal species. Yet it makes a difference whether we consider the spatial behavior of individuals or the arrangement of free space in a city as a constraint of the individual spatial behavior. Hillier’s claim of “The Space is the Machine” is mistaking the one for the other.
In his writings, Hillier commits the figure of the petitio principii over and over again. He starts with a strong belief in analytics, and upon that he tries to justify the use of analytical techniques. His claim of “The need for an analytic theory of architecture” ([11], p.40) is just one example. He writes:
The answer proposed in this chapter is that once we accept that the object of architectural theory is the non-discursive — that is, the configurational — content of space and form in buildings and built environments, then theories can only be developed by learning to study buildings and built environments as non-discursive objects.
Excluding the discourse as a constitutional element, only the analytic remains. He drops any relational account, focusing just on the physical matter and postulating meaning of physical things, i.e. meaning as an apriori in the physical things. His problem is his inability to distinguish different horizons of time, of temporal development. Dismissing time means dismissing memory, and of course also culture. For a physicalist or ultra-modernist like him this blindness is constitutive. He will never understand the structure of his failure.
His dismissal of social issues as part of a theory serves eo ipso as his justification of the whole methodology. This is only possible due to another, albeit consistent, mistake: the conflation of theory and models. Hillier shows us over and over again only models, yet not the smallest contribution to an architectural theory. Applying statistics expresses a particular theoretical stance, but is not to be taken as a theory itself! Statistics instantiates those models; that is, his architectural theory largely follows statistical theory. We repeatedly pointed to the problems that appear if we apply statistics to raw observations.
The high self-esteem Hillier expresses in his nevertheless quite limited writings is topped by treating space as syntax, in other words as a trivial machine. Undeniably, human beings have a material body, and buildings take space as material arrangements. Undeniably, matter arranges space and constitutes space. There is a considerable discussion in philosophy about how we could approach the problematic field of space. We won’t go into details here, but Hillier simply drops the whole stuff.
Matter arranges in space. This quickly becomes a non-trivial insight if we change perspective from abstract matter, and the correlated claim of the possibility of reductionism, to spatio-temporal processes, where the relations are taken as the starting point. We directly enter the domain of self-organization.
By means of “Space Syntax” Hillier claimed to provide a tool for planning districts of a city, or certain urban environments. If he restricted his proposals to certain aspects of the anonymized flow of people and vehicles, it would be acceptable as a method. But it is certainly not a proper tool for describing the quality of urban environments, or even for planning them.
Recently, he delivered a keynote speech [12] in which he apparently departed from his former Space Syntax approach, which reaches back to 1984. There he starts with the following remark.
On the face of it, cities as complex systems are made of (at least) two sub-systems: a physical sub-system, made up of buildings linked by streets, roads and infrastructure; and a human sub-system made up of movement, interaction and activity. As such, cities can be thought of as socio-technical systems. Any reasonable theory of urban complexity would need to link the social and technical sub-systems to each other.
This clearly is much less reductionist, at first sight at least, than “Space Syntax”. Yet, Hillier remains aligned with hard-core positivism. Firstly, in the whole speech he fails to provide a useful operationalization of complexity. Secondly, his Space Syntax simply appears wrapped in new paper. Agency for him is still just spatial agency; the relevant urban network for him is just the network of streets. Thirdly, it is bare nonsense to separate a physical and a human subsystem, and then to claim the lumping together of those as a socio-technical system. He obviously is unaware of more advanced and much more appropriate ways of thinking about culture, such as ANT, the Actor-Network-Theory (Bruno Latour), which precisely drops the categorical separation of the physical and the human. This separation was first criticized by Merleau-Ponty in the 1940s!
Hillier served us just as an example, but you may have got the point. Occasionally, one meets attempts that at least try to integrate a more appropriate concept of culture and the human being in urban environments. Think of Koolhaas and his AMO/OMA, for instance, despite the fact that Koolhaas himself also struggles with the modernist mindset (see our introductions to “JunkSpace” or “The Generic City”). Yet, he at least recognized that something is fundamentally problematic with it.
7. The Toolbox Perspective
Most of the interesting and relevant systems are complex. It is simply a methodological fault to use frequencies of observational elements to describe these systems, whether we are dealing with animals, texts, urban environments or people (dogs, cats) moving around in urban environments.
Tools provide filters; they respond to certain issues, both of the signal and of the embedding. Tools are artifacts for transformation. As such they establish the relationality between actors, things and processes. Tools produce and establish Heidegger’s “Gestell”, and they constitute the world as a fabric of relations, as facts and acts, as Wittgenstein emphasized so often, already at the beginning of the Tractatus.
What we would like to propose here is a more playful attitude towards the usage of tools, including formal methods. By “playful” we refer to Wittgenstein’s rule following, but also to a certain kind of experimentation, not induced by theory, but rather triggered by the know-how of some techniques that are going to be arranged. Tools as techniques, or techniques as tools, are used to distil symbols from the available signals. Their relevancy is determined only by the subsequent step of classification, which in turn is (ortho-)regulated by strategic goals or cultural habits. Never, however, should we take a particular method as a representative of the means to access meaning from a process, be it a text or an urban environment.
8. Behavior
In this concluding chapter we are going to try to provide more details about our move to apply the concept of behavior to urbanism and computational linguistics.
Since Friedrich Schleiermacher in the 1830s, hermeneutics has emphasized a certain kind of autonomy of the text. Of course, the text itself is not a living thing as we consider it for animals. Before it “awakes” it has to be entered into mind matter, or more generally, it has to be interpreted. Nevertheless, an autonomy of the text remains, largely due to the fact that there is no Private Language. The language is not owned by the interpreting mind. Vilem Flusser proposed to radically turn the perspective and to conceive of the interpreter as a medium for texts and other “information”, rather than the other way round.
Additionally, the working of the brain is complex, to say the least. Our relation to our own brain and our own mind is more that of an observer than that of a user or even a controller. We experience them. Both together, the externality of language and the (partial) autonomy of the brain-mind, lead to an arrangement in which the text becomes autonomous. It inherits complementary parts of independence from both parts of the world, from the internal and the external.
Furthermore, human languages are unlimited in their productivity. Language is not only unlimited, it is also extensible. This pairs with its already mentioned deep structure, not only concerning the grammatical structure. Using language, or better, mastering language, means to play with the inevitable inner contradictions that appear across the various layers, levels, aspects and processes of applied language. Within practiced language there are many time horizons, instantiated by structural and semantic pointers. These aspects render the original series of symbols into an associative network of active components, which contributes further to the autonomy of texts. Roland Barthes notes (in [17]) that
The Plural of the Text depends … not on the ambiguity of its contents but on what might be called the stereographic plurality of its weave of signifiers (etymologically, the text is a tissue, a woven fabric). The reader of the Text may be compared to someone at a loose end.
Barthes implicitly emphasizes that the text does not convey a meaning, the meaning is not in the text, it can’t be conceived as something externalizable. In this essay he also holds that a text can’t be taken as just a single object. It is a text only in the context of other texts, and so the meaning that it develops upon interpretation is also dependent on the corpus into which it is embedded.
Methodologically, this (again) highlights the problematics that Alan Hajek called the reference class problem [13]. It is impossible for an interpreter to develop the meaning of a text outside of a previously chosen corpus. This dependency is inherited by any phrase, any sentence and any word within the text. Even a label like “IBM”, which seems to be bijectively unique regarding the mapping of the grapheme to its implied meaning, is dependent on that. Of course, it will always refer somehow to the company. Yet, without the larger context it is not clear in any sense to which aspect of that company and its history the label refers in a particular case. In literary theory this is called intertextuality. Furthermore, it is almost palpable here in this example that signs refer only to signs (the cornerstone of Peircean semiotics), and that concepts are nothing that could be defined (as we argued earlier in more detail).
We may settle here that a text, as well as any part of it, is established even through the selection of the embedding corpus, or likewise, a social practice, a life-form. Without such an embedding the text simply does not exist as a text. We would just find a series of graphemes. It is a hopeless exaggeration, if not self-deception, if people call the statistical treatment of texts “text mining”. Read in another way, it may even be considered a cynical term.
It is this dependence on local and global contexts, synchronically and diachronically, that renders the interpretation of a text similar to the interpretation of animal behavior.
Taken together, conceiving of texts as behaving systems is probably less a metaphor than it appears at first sight. Considering the way we make sense of a text, approaching a text is in many ways comparable to approaching an animal of a familiar species. We won’t know exactly what is going to happen; the course of events and action depends significantly on ourselves. The categories and ascribed properties necessary to establish an interaction are quite undefined in the beginning, available only as types of rules, not as readily parameterized rules themselves. And as with animals, the next approach will never be a simple repetition of the former one, even if one knows the text quite well.
From the methodological perspective the significance of such a “behavioral turn”3 can’t be overestimated. For instance, nobody would interpret an animal on the basis of a rather short series of photographs, keeping the conclusions drawn from them once and for all. Interacting with a text as if it behaved demands a completely different set of procedures. After all, one would deal with an open interaction. Such openness must be responded to with an appropriate attitude of willingness for open structural learning. This holds not only for human interpreters, but for any interpreter, even if it were software. In other words, software dealing with text must itself be active in a non-analytical manner in order to constitute what we call a “text”. Any kind of algorithm (in the definition of Knuth) does not deal with text, but just and blindly with a series of dead graphemes.
The Urban
For completely different, namely material, reasons, cities can also be considered autonomous entities. Their patterns of growth and differentiation look much more like those of ensembles of biological entities than those of minerals. Of course, this doesn’t justify the more or less naïve assignment of the “city as organism”. Urban arrangements are complex in the sense we’ve defined; they are semiogenic and associative. There is a continuous contest between structure as regulation and automation on the one side and liquification as participation and symbolization on the other, albeit symbols may play for both parties.
Despite this autonomy, it remains a fact that without human activity cities are as little alive as texts are. This raises the particular question of the relationships between a city and its inhabitants, between the people as citizens and the city that they constitute. This topic has been the subject of innumerable essays, novels, and investigations. Recently, a fresh perspective on it has been opened by Vera Bühlmann’s notion of the “Quantum City” [14].
We can neither detach the citizens from their city, nor vice versa. Nevertheless, the standardized and externalized collective contribution across space and time creates an arrangement that produces dissipative flows and shows a strong meta-stability that transcends the activities of the individuals. This stability should not be mistaken for a “state”, though. As with any other complex system, including texts, we should avoid trying to assign a “state” to a particular city, or even a part of it. Everything is a process within a complex system, even if it appears to be rather stable. Yet, this stability depends on the perspective of the observer. In turn, the seeming stability does not mean that a city-process could not be destroyed by human activity, be it by individuals (Nero), by a collective, or by socio-economic processes. Yet, again as in the case of complex systems, the question of causality would be the wrong starting point for addressing the issue of change, as would be a statistical description.
Cities and urban environments are fabrics of relations between a wide range of heterogenic and heterotopic (see Foucault or David Shane [15]) entities and processes across a likewise large range of temporal scales, meeting any shade between the material and the immaterial. There is the activity of single individuals, of collectives of individuals, of legislative and other norms, the materiality of the buildings and their changing usage and roles, different kinds of flows and streams as well as stores and memories.
Elsewhere we argued that this fabric may be conceived as a dynamic ensemble of associative networks [16]. These should be clearly distinguished from logistic networks, whose purpose is organizing some kind of physical transfer. Associative networks re-arrange, sort, classify and learn. Thus, they are also the abstract location of the transposition of the material into the immaterial. Quite naturally, issues of form and of temporal structure arise; in other words, behavior.
Our suggestion thus is to conceive of a city as an entity that behaves. This proposal has (almost) nothing to do with the metaphor of the “city as organism”, a transfer that is by far too naïve. Changes in urban environments are best conceived as “outcomes” of probabilistic processes that are organized as overlapping series, both contingent and consistent. The method of choice for describing those changes is based on the notion of the generalized context.
Urban Text, Text and Urbanity, Textuality and Performance
Urban environments establish or even produce a particular kind of mediality. We need not invoke the recent surge of large screens in many cities for that. Any arrangement of facades encodes a rich semantics that is best described employing a semiotic perspective, just as Venturi proposed it. Recently, we investigated the relationship between facades, whether made from stone or from screens, and the space that they constitute [17].
There is yet another important dimension between the text and the city. For many hundreds of years now, if not millennia, cities have not been imaginable without text in one form or another. At the latest since the early 19th century, text and city became deeply linked to one another with the surge of newspapers and publishing houses, but also through the intricate linkage between the city and the theater. Urban culture is text culture, far more than it could be conceived as an image culture. This tendency is only intensified through the web, albeit urbanity now gets significantly transformed by and into the web-based aspects of culture. At least we may propose that there is a strong co-evolution between the urban (as entity and as concept) and mediality, whether the latter expresses itself as text, as movie or as webbing.
The relationship between the urban and the text has been explored many times. It started probably with Walter Benjamin’s “flâneur” (for an overview see [18]). Nowadays, urbanists often refer to the concept of the “readability” of a city layout, a methodological habit originated by Kevin Lynch. Yet, if we consider the relation between the urban and the textual, we certainly have to take an abstract concept of text; we definitely have to avoid the idea that there are items like characters or words out there in the city. I think we should at least follow something like the abstract notion of textuality as a “methodological field”, as it has been devised by Roland Barthes in his “From Work to Text” [19]. Yet, this is probably still not abstract enough, as urban geographers like Henri Lefebvre mistook the concept of textuality for one of intelligibility [20]. Lefebvre obviously didn’t understand the working of a text. How should he, one might say, as a modernist (and marxist) geographer. All the criticism that was directed against the junction between the urban and textuality conceived—as far as we know—of text as something object-like, something that is out there as such, waiting passively to be read and still being passive as it is being read, finally maybe even as an objective representation beyond the need of (and the freedom for) interpretation. This, of course, represents a rather limited view of textuality.
Above we introduced the concept of “behaving texts”, that is, texts as active entities. These entities become active as soon as they are mediatized with interpreters. Again: not the text is conceived as the medium or in a media format, but rather the interpreter, whether it is a human brain-mind or a suitable software that indeed is capable of interpreting, not just of pre-programmed and blind re-coding. This “behavioral turn” renders “reading” a text, but also “writing” it, into a performance. Performances, on the other hand, always and inevitably comprise a considerable openness, precisely because they let the immaterial and the material collide from the side of the immaterial. Thus, performances are the counterpart of abstract associativity, yet also settling at the surface that sheds matter from ideas.
In the introduction to their nicely edited book “Performance and the City”, Kim Solga, D. Hopkins and Shelley Orr [18] write, citing the urban geographer Nigel Thrift:
Although de Certeau conceives of ‘walking in the city’ not just as a textual experience but as a ‘series of embodied, creative practices’ (Lavery: 152), a ‘spatial acting-out of place’ (de Certeau: 98, our emphasis), Thrift argues that de Certeau: “never really leaves behind the operations of reading and speech and the sometimes explicit, sometimes implicit claim that these operations can be extended to other practices. In turn, this claim [ … ] sets up another obvious tension, between a practice-based model of often illicit ‘behaviour’ founded on enunciative speech-acts and a text-based model of ‘representation’ which fuels functional social systems.” (Thrift 2004: 43)
Quite obviously, Thrift didn’t manage to get the right grip on de Certeau’s proposal that textual experience may be conceived—I just repeat it—as a series of embodied, creative practices. It is his own particular blindness that lets Thrift denounce texts as being mostly representational.
Solga and colleagues indeed emphasize the importance of performance, not just in their introduction, but also through their editing of the book. Yet, they explicitly link textuality and performance as codependent cultural practices. They write:
While we challenge the notion that the city is a ‘text’ to be read and (re)written, we also argue that textuality and performativity must be understood as linked cultural practices that work together to shape the body of phenomenal, intellectual, psychic, and social encounters that frame a subject’s experience of the city. We suggest that the conflict, collision, and contestation between texts and acts provoke embodied struggles that lead to change and renewal over time. (p.6)
Thus, we find a justification for our “behavioral turn” and its application to texts as well as to the urban from a rather different corner. Even more significantly, Solga et al. seem to agree that performativity and textuality cannot be detached from the urban at all. Apparently, the urban as a particular quality of human culture develops more and more into the main representative of human culture.
Yet, neither text nor performance, nor their combination, provides a full account of the mediality of the urban. As we already indicated above, the movie, as a kind of cross-media of text, image, and performance, is equally important.
The relations between film and the urban, between architecture and film, are also quite widespread. The cinema, somehow the successor of the theatre, could be situated only within the city. From the opposite direction, many would consider a city without cinemas as somehow incomplete. The co-evolutionary story between the two is still under vivid development, I think.
There is particularly one architect/urbanist who is able to blend film and building into each other. You may know him quite well; I refer to Rem Koolhaas. Everybody knows that he was an experimental moviemaker in his youth. It is much less known that he deliberately organized at least one of his buildings as a kind of movie: the Embassy of the Netherlands in Berlin (cf. [21]).
Here, Koolhaas arranged the rooms along a dedicated script. Some of the views out of the window he even trademarked to protect them!
Figure 1: Rem Koolhaas, Dutch Embassy, Berlin. The figure shows the script of pathways as a collage (taken from [21]).
9. The Behavioral Turn
So far we have shown how the behavioral turn could be supported and what some of the first methodological consequences are if we embrace it. Yet, the picture developed so far is not complete, of course.
If we accept the almost trivial concept that autonomous entities are best conceived as behaving entities—remember that autonomy implies complexity—then we can further ask about the structure of the relationship between the behaving subject and its counterpart, whether this counterpart is also a behaving subject or whether it is conceived more like a passive object. For Bruno Latour, for instance, both together form a network, thereby blurring the categorical distinction between the two.
Most descriptions of the process of getting into touch with something are nowadays dominated by the algorithmic perspective of computer software. Even designers started to speak about interfaces. The German term for the same thing—“Schnittstelle”—is even more pronounced and clearly depicts the modernist prejudice in dealing with interaction. “Schnittstelle” implies that something, here the relation, is cut into two parts. A complete separation between interacting entities is assumed apriori. Such a separation is deeply inappropriate, since it would work only in strictly standardized environments, up to being programmed algorithmically. Precisely this is what designers of software “user interfaces” have told us over and over again. Perhaps here we can find the reason for so many bad designs, not only concerning software. Fortunately, though just through a slow evolutionary process, things improve more and more. So-called “user-centric” design, or “experience-oriented” design, became more abundant in recent years, but their conceptual foundation is still rather weak, or a wild mixture of fashionable habits and strange adaptations of cognitive science.
Yet, if we take the primacy of interpretation seriously, and combine it with the “behavioral turn”, we can see a much more detailed structure than just two parts cut apart.
The consequence of such a combination is that we would drop the idea of a clear-cut surface even for passive objects. Rather, we could conceive of objects as being surrounded by a field that becomes stronger the closer we approach the object. By means of that field we distinguish the “pure” physicality from the semiotically and behaviorally active aspects.
This field is a simple one for stone-like matter, but even there it is still present. The field becomes much richer, deeper and more vibrant if the entity is not a more or less passive object, but rather an active and autonomous subject, such as an animal, a text, or a city. The reason is that there are no apriori and globally definable representative criteria that we could use to approach such autonomous entities. We can only know about more or less suitable procedures for deriving such criteria in the particular case, approaching a particular individual {text, city}. The missing of such criteria is a direct correlate of their semantic productivity, or, likewise, of their unboundedness.
Approaching a semantically productive entity—such entities are also always able to induce new signs, they are semiosic entities—is reminiscent of approaching a gravitational field. Yet it is also very different from a gravitational field, since our semio-behavioral field shows increasing structural richness the closer the entities approach each other. It is quite obvious that only by means of such a semio-behavioral field can we close the gap between the subject and the world that has been opened, or at least deepened, by the modernist contributions from the times of Descartes until late computer science. Only upon a concept like the semio-behavioral field, which in turn is a consequence of the behavioral turn, can we overcome the existential fallacy as it has been purported and renewed over and over again by the dual pair of material and immaterial. The language game that separates the material and the immaterial inevitably leads into the nonsensical abyss of existentialism. Dual concepts always come with tremendous costs, as they prevent any differentiated way of speaking about the matter. For instance, they prevent us from recognizing the materiality of symbols, or more precisely, the double-articulation of symbols between the more material and the more immaterial aspects of the world.
The following series of images may be taken as a metaphorical illustration of that semio-behavioral field. We call it the zona extima of the behavioral coating of entities.
Figure 2a: The semio-behavioral field around an entity.
Figure 2b: The situation as another entity approaches perceptively.
Figure 2c: Mutual penetration of semio-behavioral fields.
Taken together, we may say that whenever {sb, sth} gets into contact with {sb, sth}, it does so through the behavioral coating. This zone of contact is not intimate (as Peter Sloterdijk describes it); it is rather extimate, though there is a smooth and graded change of quality from extimacy to intimacy as the distance decreases. The zona extima is a borderless (topological) field, driven by purposes (due to modeling); it is medial, behaviorally choreographed as negotiation, exposure, call & request.
The concept of extimation, or likewise the process of extimating, is much more suitable than “interaction” for describing what’s going on when we act, behave, engage, actively perceive, or encounter the other. The interesting thing about web-based media is that some aspects of the zona extima can be transferred.
10. Conclusion
In this essay we have tried to argue in favor of a behavioral turn as a general attitude when it comes to conceiving of the interaction of any two entities. The behavioral turn is a consequence of three major and interrelated assumptions:
– primacy of interpretation in the relation to the world;
– primacy of process and relation against matter and point;
– complexity and associativity in strongly mediatized environments.
All three assumptions are strictly outside of anything that phenomenological, positivist or modernist approaches can talk about or even practice.
It particularly allows us to overcome the traditional and strict separation between the material and the immaterial, as well as the separation between the active and the passive. These shifts can’t be overestimated; they have far-reaching consequences for the way we practice and conceive our world.
The behavioral turn is the consequence of a particular attitude that respects the bi-valency of the world as a dynamic system of populations of relations. It is less about the divide between the material and the immaterial, which anyway is somewhat of an illusion deriving from the metaphysical claim of the possibility of essences. For instance, the jump that occurs between the realms of the informational and the causal establishes itself as a pair of two complementary but strictly and mutually exclusive modes of speaking about the orderliness in the world. In some way, it is also the orderliness in the behavior of the observer—as repetition—that creates the informational that the observer then may perceive. The separation is thus a highly artificial one, in either direction. It is simply silly to discuss the issue of causality without referring to the informational aspects (for a full discussion of the issue see this essay). In any real-world case we always find both aspects together, and we find them as behavior.
Actually, the bi-valent aspect that I mentioned before refers to something quite different, in fact so different that we can’t even speak properly about it. It refers to those aspects that are apriori to modeling or any other comprehension, that are even outside the performance of the individual itself. What I mean is the resistance of existential arrangements, including the body from which the comprehending entity is partially built. This existential resistance introduces something like outer space for the cultural sphere. Needless to say, we can exist only within this cultural sphere. Yet, any action upon the world forces us to take a short trip into the vacuum, and if we are lucky the re-entrance is even productive. We may well expect an intensification of the aspect of the virtual, as we argued here. Far from being suitable to serve as a primacy (as existentialism misunderstood the issue), the existential resistance, the absolute outside, forces us to embark on the concept of behavior. Only “behavior” as a perceptional and performative attitude allows us to extract coherence from the world without neglecting the fact of that resistance or contumacy.
The behavioral turn triggers a change in the methodology of empiric investigation as well. The standard set of methods for empiric description changes, taking the relation and the coherent series as the starting point, best in its probabilized form, that is, as a generalized probabilistic context. This also prevents the application of statistical methods directly to raw data: there should always be some kind of grouping or selection preceding the statistical reasoning. Otherwise we would try to follow the route that Wittgenstein blocked as a “wrong usage of symbols” (in his rejection of the reasonability of Russell/Whitehead’s Principia Mathematica). The concept of abstract behavior, including the advanced methodology that avoids starting with representational symbolification, is clearly a sound way out of this deep problem, from which any positivist empiric investigation suffers.
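To make that methodological point a bit more tangible, here is a minimal sketch in Python of what “grouping before statistics” could look like: raw observations are first dissolved into probabilized contexts, and only those distributions would be handed to statistical reasoning. The event labels and the windowing scheme are hypothetical illustrations, not a prescription.

from collections import Counter, defaultdict

def probabilistic_contexts(series, radius=2):
    """Group a raw series of observations into probabilized contexts:
    for each distinct item, the distribution of its neighbors within a
    symmetric window. Statistical reasoning then starts from these
    distributions, never from the raw data points themselves."""
    contexts = defaultdict(Counter)
    for i, item in enumerate(series):
        lo, hi = max(0, i - radius), min(len(series), i + radius + 1)
        contexts[item].update(series[lo:i] + series[i + 1:hi])
    # normalize counts into probability distributions
    return {item: {n: c / sum(ctr.values()) for n, c in ctr.items()}
            for item, ctr in contexts.items()}

# a toy series of observed behavioral events (hypothetical labels)
series = ["approach", "pause", "approach", "contact", "retreat", "pause"]
print(probabilistic_contexts(series))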
Interaction, including any action upon some other entity, when understood within the paradigm of behavior, becomes a recurrent, though not repetitive, self-adjusting process. During this process, means and symbols may change and be replaced all the way down until a successful handshake. There is no objectivity in this process other than the mutual possibility for anticipation. Despite the existential resistance and contumacy attached to any re-shaping of the world, and even more so if we accomplish it by means of tools, this anticipation is, of course, greatly improved by alignment to cultural standards, contributing to the life-world as a shared space of immanence.
This finally provides us a sufficiently abstract, but also sufficiently rich and manifold perspective on the issue of the roles of symbols regarding the text, the urban, and the anime, the animal-like. None of those could be comprehended without first creating a catalog or a system of symbols. These symbols, both material and immaterial and thus a kind of hinge, a double-articulation, are rooted both in the embedding culture (as a de-empirifying selective force) and in the individual, which constitutes another double-articulation. The concept of abstract behavior, given as a set of particular conditions and attitudes, allows us to respond appropriately to the symbolic.
The really big question concerning our choreostemic capabilities—and those of the alleged machinic—is therefore: how to achieve fluency in dealing with the symbolic without presuming it as a primary entity? Probably by exercising observing. I hope that the suggestions expressed so far in this essay provide some robust starting points. …we will see.
1. Here we simply cite the term “information retrieval”; we certainly do not agree that the term is a reasonable one, since it is deeply infected by positivist prejudices. “Information” can’t be retrieved, because it is not “out there”. Downloading a digitally encoded text is neither a hunting nor a gathering of information, because information can’t be considered an object. Information is only present during the act of interpretation (more details about the status of information can be found here). Actually, what we are doing is simply “informationing”.
2. The notion of a “behavioral turn” has been known in geography since the late 1960s [22][23], and also in economics. In both fields, however, the behavioral aspect is related to the individual human being, and any level of abstraction with regard to the concept of behavior is missing. Quite in contrast to those movements, we do not focus on the neglect of the behavioral domain when it comes to human society, but rather on the transfer of the abstract notion of behavior to non-living entities.
Another reference to the “behavioral sciences” can be found in the social sciences. Yet, in the social sciences “behavioral” is often reduced to “behaviorist”, which of course is nonsense. A similar misunderstanding is abundant in political science.
3. Note that the proposed “behavioral turn” should not be mistaken for a “behavioristic” move, a sort of behaviorism. We strictly reject the stimulus-response scheme of behaviorism. Actually, behaviorism as it was developed by Watson and Pavlov has little to do with behavior at all. It is nothing else than an overtly reductionist program, rendering any living being into a trivial machine. Unfortunately, the primitive scheme of behaviorism is experiencing a kind of comeback in so-called “Behavioral Design”, where people talk about “triggers” much in the same way as Pavlov did (cf. BJ Fogg’s Behavior Model).
• [2] G. Moran, J.C. Fentress (1979). A Search for Order in Wolf Social Behavior. pp. 245-283 in: E. Klinghammer (ed.), The Behavior and Ecology of Wolves. Symposium held on 23-24.5.1975 in Wilmington, N.C. Garland STPM Press, New York.
• [3] Gilles Deleuze, Difference and Repetition.
• [4] J.A.R.A.M. Van Hooff (1982). Categories and sequences of behaviour: methods of description and analysis. in: Handbook of methods in nonverbal behavior research (K.R. Scherer& P. Ekman, eds). Cambridge University Press, Cambridge.
• [5] P.G.M. van der Heijden, H. de Vries, J.A.R.A.M. van Hooff (1990). Correspondence analysis of transition matrices, with special attention to missing entries and asymmetry. Anim.Behav. 40: 49-64.
• [6] Teuvo Kohonen, Samuel Kaski, K. Lagus and J. Honkela (1996). Very Large Two-Level SOM for the Browsing of Newsgroups. In: C. von der Malsburg, W. von Seelen, J.C. Vorbrüggen and B. Sendhoff (eds.), Proceedings of ICANN96, International Conference on Artificial Neural Networks, Bochum, Germany, July 16-19, 1996. Lecture Notes in Computer Science, Vol. 1112, pp. 269-274. Springer, Berlin.
• [7] Hecht-Nielsen (1994).
• [8] Javier Rojo, Tuan S. Nguyen (2010). Improving the Johnson-Lindenstrauss Lemma. available online.
• [9] Sanjoy Dasgupta, presentation on: Samuel Kaski (1998), Dimensionality Reduction by Random Mapping: Fast Similarity Computation for Clustering. Helsinki University of Technology, 1998. available online.
• [10] Michel Serres, Nayla Farouki. Le trésor. Dictionnaire des sciences. Flammarion, Paris 1998. p. 394.
• [11] Bill Hillier, Space Syntax. E-edition, 2005.
• [12] Bill Hillier (2009). The City as a Socio-technical System: a spatial reformulation in the light of the levels problem and the parallel problem. Keynote paper to the Conference on Spatial Information Theory, September 2009.
• [14] Vera Bühlmann (2012). In the Quantum City – design, and the polynomial grammaticality of artifacts. forthcoming.
• [15] David G. Shane. Recombinant Urbanism. 2005.
• [18] D.J. Hopkins, Shelley Orr and Kim Solga (eds.), Performance and the City. Palgrave Macmillan, Basingstoke 2009.
• [19] Roland Barthes, From Work to Text. in: Image, Music, Text: Essays selected and translated by Stephen Heath. Hill & Wang, New York 1977, p. 56. also available online at Google Books.
• [20] Henri Lefebvre, The Production of Space. 1979.
• [21] Vera Bühlmann. Inhabiting media. Thesis, University of Basel (CH) 2009.
• [22] Kevin R Cox, Jennifer Wolch and Julian Wolpert (2008). Classics in human geography revisited. “Wolpert, J. 1970: Departures from the usual environment in locational analysis. Annals of the Association of American Geographers 50, 220–29.” Progress in Human Geography (2008) pp.1–5.
• [23] Dennis Grammenos. Urban Geography. Encyclopedia of Geography. 2010. SAGE Publications. 1 Oct. 2010. available online.
The Text Machine
July 10, 2012
What is the role of texts? How do we use them (as humans)?
How do we access them (as reading humans)? The answers to such questions seem to be pretty obvious. Almost everybody can read. Well, today. Notably, reading itself, as a performance and regarding its use, changed dramatically at least two times in history: first, after the invention of the vocal alphabet in ancient Greece, and a second time after book printing became abundant during the 16th century. Maybe the issue around reading isn’t as simple as it seems in everyday life.
Beyond such accounts of historical issues and basic experiences, we have a lot of more theoretical results concerning texts. Beginning with Friedrich Schleiermacher, who around 1830 was the first to identify hermeneutics as a subject and who formulated it in a way that has been considered more complete and powerful than the version proposed by Gadamer in the 1950s. Proceeding of course with Wittgenstein (language games, rule following), Austin (speech act theory) or Quine (criticizing empiricism). Philosophers like John Searle, Hilary Putnam and Robert Brandom then explicated and extended the work of these former heroes. And they have been accompanied by many others. If you wonder about linguistics missing here, that is because linguistics does not provide theories about language. Today, the domain is largely caught by positivism and the corresponding analytic approach.
Here in this little piece we pose these questions in the context of certain relations between machines and texts. There are a lot of such relations, some even quite sophisticated or surprising. For instance, texts can be considered a kind of machine. Yet, they bear a certain note of (virtual) agency as well, resulting in a considerable non-triviality of this machine aspect of texts. Here we will not deal with this perspective. Instead, we will just take a look at the possibilities and the respective practices of handling or “treating” texts with machines. Or, if you prefer, the treating of texts by machines, insofar as a certain autonomy of machines could be considered necessary to deal with texts at all.
Today, we can find a fast growing community of computer programmers dealing with texts as a kind of unstructured information. One of the buzzwords is the so-called “semantic web”, another one is “sentiment analysis”. We won’t comment in any detail on those movements, because they are deeply flawed. The first one tries to formalize semantics and meaning apriori, trying to render the world into a trivial machine. We repeatedly criticized this, and we agree herein with Douglas Hofstadter (see this discussion of his “Fluid Analogy”). The second tries to identify the sentiment of a text or a “tweet”, e.g. about a stock or an organization, on the basis of statistical measures of keywords and their utterly naive “n-grammed” versions, without paying any notice to the problem of “understanding”. Such nonsense would not be as widespread if programmers read just a few fundamental philosophical texts about language. In fact, they don’t, and thus they are condemned to revisit the underdeveloped positions that arose centuries ago.
If we neglect the social role of texts for a moment, we might identify a single major role of texts, albeit we then have to describe it in rather general terms. We may say that the role of a text, as a specimen among many other texts from a large population, is to function as a medium for the externalization of mental content, serving the ultimate purpose: the possibility of a (re)construction of resembling mental content on the side of the interpreting person.
This interpretation is a primacy. It is not possible to assign meaning to a text like a sticky note, then putting the text, including the yellow sticky note, directly into the recipient’s brain. That may sound silly, but unfortunately it is the “theory” followed by many people working in the computer sciences. Interpretation can’t be controlled completely, though, not even by the mind performing it, not even by the same mind that seconds before externalized the text through writing or speaking.
Now, the notion of mental content may seem both quite vague and hopelessly general. Yet, in the previous chapter we introduced a structure, the choreostemic space, which allows us to speak fairly precisely about mental content. Note that we don’t need to talk about semantics, meaning or references to “objects” here. Mental content is not a “state” either. Thinking “state” and the mental together is much on the same level as seriously considering the existence of sea monsters at the end of the 18th century, when the list science of Linnaeus had not yet been reshaped by the upcoming historical turn in the philosophy of nature. Nowadays we must consider it silly-minded to think about a complex story like the brain and its mind by means of “states”. Doing so, one confounds the stability of the graphical representation of a word in a language with the complexity of a multi-layered dynamic process, spanned between deliberate randomness, self-organized rhythmicity and temporary, thus preliminary, meta-stability.
The notion of mental content does not refer to the representation of referenced “objects”. We do not have maps, lists or libraries in our heads. Everything that we experience as inner life builds up from an enormous randomness through deep stacks of complex emergent processes, where each emergent level is also shaped from the top down, implicitly and, except for the last one, usually called “consciousness”, also explicitly. The stability of memory and words, of feelings and faculties, is deceptive; they are not so stable at all. Only their externalized symbolic representations are more or less stable, and even their stability as words etc. can be shattered easily. The point we would like to emphasize here is that everything that happens in the mind is constructed on the fly, while the construction is completed only with the ultimate step of externalization, that is, speaking or writing. The notion of “mental content” is thus a bit misleading.
The mental may be conceived most appropriately as a manifold of stacked and intertwined processes. This holds for the naturalist perspective as well as for the abstract perspective, as we have argued in the previous chapter. It is simply impossible to find a single stable point within the (abstract) dynamics between model, concept, mediality and virtuality, which could be thought of as spanning a space. We called it the choreostemic space.
For the following remarks about the relation between text and machines and the practitioners engaged in building machines to handle texts we have to keep in mind just those two things: (i) there is a primacy of interpretation, (ii) the mental is a non-representative dynamic process that can’t be formalized (in the sense of “being represented” by a formula).
In turn this means that we should avoid referring to formulas when setting out to build a “text machine”. Text machines will be helpful only if their understanding of texts, even if rudimentary, follows the same abstract principles as our human understanding of texts does. Machines pretending to deal with texts, but actually only moving dead formal symbols back and forth, as is the case in statistical text mining, n-gram based methods and the like, are not helpful at all. The only thing that happens is that these machines introduce a formalistic structure into our human life. We may say that these techniques render humans helpful to machines.
Nowadays we can find a whole techno-scientific community engaged in the field of machine learning devoted to “textual data”. Computers are programmed in such a way that they can be used to classify texts. The idea is to provide some keywords, or anti-words, or even a small set of sample texts, which are then taken by the software as a kind of template that is used to build a selection model. This model is then used to select resembling texts from a large set of texts. We have to be very clear about the purpose of these software programs: they classify texts.
The input data for doing so are taken from the texts themselves. More precisely, the texts are preprocessed according to specialized methods. Each text gets described by a possibly large set of “features” that have been extracted by these methods. The obvious point is that the procedure is purely empirical in the strong sense: only the available observations (the texts) are taken to infer the “similarity” between texts. Usually, not even linguistic properties are used to form the empirical observations, albeit there are exceptions. People use the so-called n-gram approach, which is little more than counting letters. It is a zero-knowledge model about the series of symbols which humans interpret as text. Additionally, the frequency or relative positions of keywords and anti-words are usually measured and expressed by mostly quite simple statistical means.
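To see how little such a representation contains, consider this minimal sketch of character n-gram counting, essentially the whole “knowledge” such a model has of a text; the sample sentence is, of course, an arbitrary placeholder:

from collections import Counter

def char_ngrams(text, n=3):
    """A zero-knowledge representation: the text is treated as a bare
    series of symbols, and only the counts of short symbol runs remain.
    Nothing about words, grammar or interpretation enters the features."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

features = char_ngrams("The chair is near the table.")
print(features.most_common(5))  # the 5 most frequent trigrams

Whatever statistics are computed downstream, they operate on nothing but these counts.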
Well, classifying texts is quite different from understanding texts. Of course. Yet, said community tries to reproduce the “classification” achieved or produced by humans. Such, any of the engineers in the field of machine learning directed to texts implicitly claims a kind of understanding. They even organize competitions.
The problems with the statistical approach are quite obvious. Quine called it the dogma of empiricism and coined the Gavagai anecdote about it, in which the situation provides much more information than the utterance alone. In order to understand a text we need references to many things outside the particular text(s) at hand. Two of those are especially salient: concepts and the social dimension. Directly opposite to the belief of positivists, concepts can’t be defined in advance of a particular interpretation. Using catalogs of references does not help much if these catalogs are used just as lists of references. The software does not understand “chair” by the “definition” stored in a database, or even by the set of such references. It simply does not care whether there are encoded ASCII codes that yield the symbol “chair” or the symbol “h&e%43”. Douglas Hofstadter has been stressing this point over and over again, and we fully agree with him.
From the necessity of that particular and rather wide “background” (a notion by Searle), the second problem derives, which is much more serious, even devastating to the soundness of the whole empirico-statistical approach. The problem is simple: even we humans have to read a text before being able to understand it. Only upon understanding can we classify it. Of course, the brains of many people are trained sufficiently to work on the relations of the text and any of its components while reading it. The basic setup of the problem, however, remains the same.
Actually, what is happening is a constantly repeated re-reading of the text, taking into account all available insights regarding the text and its relations to the author and the reader; this re-reading often takes place in memory. To perform this demanding task in parallel, based on the “cache” available from memory, requires a lot of experience and training, though. Less experienced people indeed re-read the text physically.
The consequence of all of this is that we cannot determine the best empirical discriminators for a particular text-in-the-reading in order to select it, as if we were using a model. Actually, we can’t determine the set of discriminators before we have read it all, at least not before the first pass. Let us call this the completeness issue.
The very first insight is thus that a one-shot approach to text classification is based on a misconception. The software and the human would have to align to each other in some kind of conversation. Otherwise it can’t be specified, in principle, what the task is, that is, which texts should actually be selected. Any approach to text classification not following the “conversation scheme” is necessarily bare nonsense. Yet, that’s not really a surprise (except for some of the engineers).
There is a further consequence of the completeness issue: we can’t set up a table to learn from at all. This too is not a surprise, since setting up a table means to set up a particular symbolization. Any symbolization apriori to understanding must count as a hypothesis. As simple as that. Whether it matches our purpose or not, we can’t know before we have understood the text.
However, in order to make the software learn something, we need assignates (traditionally called “properties”) and some criteria to distinguish better models from less performant ones. In other words, we need a recurrent scheme on the technical level as well.
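A minimal sketch of such a recurrent, conversational scheme might look as follows; score, revise and ask_human are hypothetical placeholders for an actual modeling back-end and user interface, not references to any existing library:

def ask_human(candidates):
    """Stand-in for the human side of the conversation."""
    return [(text, input(f"Relevant? {text[:40]!r} [y/n] ") == "y")
            for text in candidates]

def conversation_scheme(corpus, seed_examples, score, revise, rounds=5):
    """Recurrent alignment between human and software: the model proposes
    candidates, the human accepts or rejects them, and the selection model
    is revised. Classification emerges from the loop, not from one shot."""
    model = revise(None, accepted=seed_examples, rejected=[])
    for _ in range(rounds):
        # propose the currently best-matching texts
        ranked = sorted(corpus, key=lambda t: score(model, t), reverse=True)
        feedback = ask_human(ranked[:3])
        model = revise(model,
                       accepted=[t for t, ok in feedback if ok],
                       rejected=[t for t, ok in feedback if not ok])
    return model

The point of the sketch is only the shape of the loop: the task specification itself is co-produced across the rounds.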
That’s why it is not perfectly correct to call texts “unstructured data”. (Besides the fact that data are not “out there”: we always need a measurement device, which in turn implies some kind of model AND some kind of theory.) In the case of texts, imposing a structure onto a text simply means to understand it. We even could say that a text as text is not structurable at all, since the interpretation of a text can never be regarded as finished.
Altogether, we may summarize the complexity of texts as deriving from the following properties:
• – there are different levels of context, which additionally stretch across surrounds of very different sizes;
• – there are rich organizational constraints, e.g. grammars;
• – there is a large corpus of words, while any of them bears meaning only upon interpretation;
• – there is a large number of relations that not only form a network, but which also change dynamically in the course of reading and of interpretation;
• – texts are symbolic: spatial neighborhood does not translate into reference, in neither way;
• – understanding texts requires a wealth of external and quite abstract concepts, which appear as significant only upon interpretation, as well as a social embedding of mutual interpretation.
This list should at least exclude any attempt to defend the empirico-statistical approach as a reasonable one, except for the fact that it conveys a better-than-nothing attitude. This brings us to the question of utility.
Engineers build machines that are supposedly useful; more exactly, they are intended to fulfill a particular purpose. Mostly, however, machines, even technology in general, are useful only upon processes of subjective appropriation. The most striking example of this is the car. Likewise, computers evolved not for reasons of utility, but rather for gaming. Video did not become popular for artistic reasons, or for commercial ones, but due to the possibilities the medium offered to the sex industry. The lesson here is that an intended purpose is difficult to achieve through the actual usage of a technology. On the other hand, every technology may exert some gravitational force to develop a then unintended symbolic purpose, and regarding that even considerable value. So, could we agree that the classification of texts as it is performed by contemporary technology is useful?
Not quite. We can’t regard the classification of texts, as it is possible with the empirico-statistical approach, as a reasonable technology, for the classification of texts can’t be separated from their understanding. All we can accomplish by this approach is to filter out those texts that do not match our interests with a sufficiently high probability. Yet, for this task we do not need text classification.
Architectures like the 3L-SOM could also be expected to play an important role in translation, as translation requires an even deeper understanding of texts than is needed for sorting texts according to a template.
Besides the necessity for this doubly recurrent scheme, we haven’t said much so far about how to actually treat the text. Texts should not be mistaken for empiric data. That means that we have to take a modified stance regarding measurement itself. In several essays we already mentioned the conceptual advantages of the two-layered (TL) approach based on self-organizing maps (TL-SOM). We already described in detail how the TL-SOM works, including the basic preparation of the random graph as it has been described by Kohonen.
The important thing about the TL-SOM is that it is not a device for modeling the similarity of texts. It is just a representation, even if a very powerful one, because it is based on probabilistic contexts (random graphs). More precisely, it is just one of many possible representations, even if it is much more appropriate than n-grams and other jokes. We should NOT even consider the TL-SOM as so-called “unsupervised modeling”, as the distinction between unsupervised and supervised is just another myth (i.e. nonsense if it comes to quantitative models). The TL-SOM is nothing else than an instance of associative storage.
The trick of using a random graph (see the link above) is that the surrounds of words are differentially represented as well. The Kohonen model is quite spare in this respect, since it applies a completely neutral model: words in a text are represented as if they were all the same, of the same kind, of the same weight, etc. That’s clearly not reasonable. Instead, we should represent a word in several different manners in the same SOM.
Yet, the random graph approach should not be considered just a “trick”. We repeatedly argued (for instance here) that we have to “dissolve” empirical observations into a probabilistic (re)presentation in order to evade and avoid the pseudo-problem of “symbol grounding”. Note that even by the practice of setting up a table in order to organize “data” we are already crossing the Rubicon into the realm of the symbolic!
The real trick of the TL-SOM, however, is something completely different. The first layer represents the random graph of all words; the actual pre-specific sorting of texts, however, is performed by the second layer on the output of the first layer. In other words, the text is “renormalized”: the SOM itself is used as a measurement device. This renormalization allows us to organize data in a standardized manner while avoiding the symbolic fallacy. To our knowledge, this possible usage of the renormalization principle has not been recognized so far. It is indeed a very important principle that puts many things in order. We will deal with this issue again later, in a separate contribution.
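The following is a deliberately reduced sketch of this two-layer arrangement in Python. It omits the random-graph preparation and the proper neighborhood learning of a real SOM, and the word-context vectors are random placeholders. Its only purpose is to show where the renormalization happens: the text is re-described as a profile of activity on the first layer, and only this profile is sorted by the second layer.

import numpy as np

class TinySom:
    """A deliberately minimal SOM: just enough to show the two-layer idea."""
    def __init__(self, nodes, dim, rng):
        self.w = rng.random((nodes, dim))
    def winner(self, x):
        return int(np.argmin(((self.w - x) ** 2).sum(axis=1)))
    def train(self, data, epochs=20, lr=0.1):
        for _ in range(epochs):
            for x in data:
                b = self.winner(x)
                self.w[b] += lr * (x - self.w[b])  # no neighborhood: a toy

def renormalize(text_vectors, layer1):
    """Describe a text by how it behaves on layer 1: a histogram over
    the nodes its word contexts activate. The SOM itself is used as
    the measurement device."""
    hist = np.zeros(len(layer1.w))
    for v in text_vectors:
        hist[layer1.winner(v)] += 1
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
# hypothetical probabilistic word-context vectors, one set per text
texts = [rng.random((30, 8)) for _ in range(6)]
layer1 = TinySom(nodes=16, dim=8, rng=rng)
layer1.train([v for t in texts for v in t])
profiles = [renormalize(t, layer1) for t in texts]  # renormalized texts
layer2 = TinySom(nodes=4, dim=16, rng=rng)
layer2.train(profiles)                              # pre-specific sorting
print([layer2.winner(p) for p in profiles])

Note how the table-like structure (the profiles) is constant across texts regardless of their length, which is exactly the advantage claimed below.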
Only on the basis of the associative storage, taken as an entirety, does appropriate modeling become possible for textual data. The tremendous advantage is that the structure for any subsequent consideration now remains constant. We may indeed set up a table. The content of this table, the data, however, is not derived directly from the text. Instead we first apply renormalization (a technique known from quantum physics, cf. [1]).
The input is some description of the text completely in terms of the TL-SOM. More explicitly, we have to “observe” the text as it behaves in the TL-SOM. Here we are indeed legitimized to treat the text as an empirical observation, albeit we can, of course, observe the text in many different ways. Yet, observing means to conceive of the text as a moving target, as a series of multitudes.
One of the available tools is Markov modeling, either as Markov chains or by means of Hidden Markov Models, but there are many others. Most significantly, probabilistic grammars, even probabilistic phrase structure grammars, can be mapped onto Markov models. Yet, here again we meet the problem of apriori classification: both kinds of model, Markovian as well as grammarian, need an assignment of a grammatical type to a phrase, which often first requires understanding.
Given the autonomy of texts, their temporal structure and the impossibility of applying an apriori schematism, our proposal is that we just have to conceive of a text like we do of (higher) animals. Like an animal in its habitat, we may think of the text as inhabiting the TL-SOM, our associative storage. We can observe paths, their length and form, preferred neighborhoods, velocities, and the size and form of the habitat.
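A small sketch of what such “ethological” observation could look like, assuming the text has already been mapped to a series of node indices on the map; the path and the descriptors are illustrative placeholders:

import numpy as np

def transition_matrix(path, n_nodes):
    """Summarize the text's path through the map as a first-order Markov
    chain over the visited nodes."""
    m = np.zeros((n_nodes, n_nodes))
    for a, b in zip(path, path[1:]):
        m[a, b] += 1
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

def habitat_descriptors(path, n_nodes):
    """Behavioral descriptors of the text-as-animal: habitat size,
    mean step length, revisitation rate. (A real map would use 2-D node
    coordinates; bare indices suffice for the sketch.)"""
    steps = [abs(b - a) for a, b in zip(path, path[1:])]
    return {"habitat_size": len(set(path)) / n_nodes,
            "mean_step": float(np.mean(steps)) if steps else 0.0,
            "revisits": 1.0 - len(set(path)) / len(path)}

# a hypothetical path of a text over a 16-node map
path = [2, 3, 3, 7, 2, 11, 7, 3, 2, 2]
print(transition_matrix(path, 16).round(2))
print(habitat_descriptors(path, 16))

Similar texts would then show similar transition structure and similar descriptors, which is the behavioral notion of similarity invoked in the next paragraph.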
Similar texts will behave in a similar manner. Such similarity is far beyond (better: as if from another planet than) the statistical approach. We also can see now that the statistical approach is trapped by the representationalist fallacy. This similarity is of course a relative one. The important point here is that we can describe texts in a standardized manner strictly WITHOUT reducing their content to statistical measures. It is also quite simple to determine the similarity of texts, whether as a whole or regarding any part of them. We need not determine the range of our source apriori to the results of modeling at all. That modeling introduces a third logical layer. We may apply standard modeling, using a flexible tool for transformation and a further instance of a SOM, as we provide it as SomFluid in the downloads. The important thing is that this last step of modeling has to run automatically.
The proposed structure keeps any kind of reference completely intact. It also draws on its collected experience, that is, all the texts it has been digesting before. It is not necessary to determine stopwords and similar gimmicks. Of course, we could, but that’s part of the conversation. Just provide an example of any size, just as it is available; everything from two words, to a sentence, to a paragraph, to the content of a directory will work.
Such a 3L-SOM is very close to what we reasonably could call “understanding texts”. But does it really “understand”?
As such, not really. First, images should be stored in the same manner (!!), that is, preprocessed as random graphs over local contexts of various sizes, into the same (networked population of) SOM(s). Second, a language production module would be needed. But once we have those parts working together, there will be full understanding of texts.
(I take any reasonable offer to implement this within the next 12 months, seriously!)
Understanding is the faculty to move around in a world of symbols. That’s not meant as a trivial issue. First, the world consists of facts, where facts comprise a universe of dynamic relations. Symbols are just not like traffic signs or pictograms; those belong to the more simple kind of symbols. Symbolizing is a complex, social, mediatized, diachronic process.
Classifying, understood as “performing modeling and applying models”, consists basically of two parts. One of them could be automated completely, while the other one cannot be treated by a finite or apriori definable set of rules at all: setting the purpose. In the case of texts, classifying can’t be separated from understanding, because the purpose of a text emerges only upon interpretation, which in turn requires a manifold of modeling raids. Modeling a (quasi-)physical system is completely different from that; it is almost trivial. Yet, the structure of a 3L-SOM could well evolve into an arrangement that is capable of understanding in a similar way as we humans do. More precisely, and a bit more abstractly, we also could say that a “system” based on a population of 3L-SOMs will once be able to navigate in the choreostemic space.
• [1] B. Delamotte (2003). A hint of renormalization. Am.J.Phys. 72 (2004) 170-184, available online: arXiv:hep-th/0212049v3.
A Deleuzean Move
June 24, 2012
It is probably one of the main surprises in the course of growing up as a human that in the experience of consciousness we may meet things like unresolvable contradictions, thoughts that are incommensurable, thoughts that lead into contradictions or paradoxes, or thoughts that point to something outside the possibility of empirical, so to speak “direct”, experience. All these experiences form a particular class of experience. For one reason or another, these issues are issues of the mental itself. We definitely have to investigate them if we are going to talk about things like machine-based episteme, or the urban condition, which will be the topic of the next few essays.
There have been only very few philosophers1 who have embraced paradoxicality without getting caught by antinomies and paradoxes in one way or another.2 Just to be clear: getting caught by paradoxes is quite easy. For instance, by violating the validity of the language game you have chosen. Or by neglecting virtuality. The first of these avenues into persistent states of worry can be observed in the sciences and mathematics3, while the second one is more abundant in philosophy. Fortunately, playing with paradoxicality without getting trapped by paradoxes is not too difficult either. There is even an incentive to do so.
Without paradoxicality it is not possible to think about beginnings, as opposed to origins. Origins—understood as points of {conceptual, historical, factual} departure—are set for theological, religious or mystical reasons, which by definition are always considered bearers of sufficient reason. To phrase it more accurately, the particular difficulty consists in talking about beginnings as part of an open evolution without universal absoluteness, hence also without the need for justification at any time.
Yet, paradoxicality, the differential of actual paradoxes, could form stable paradoxes only if possibility is mixed up with potentiality, as is for instance the case in perspectives that could be characterised as reductionist or positivist. Paradoxes exist strictly only within that conflation of possibility and potentiality. Hence, if a paradox or antinomy seems to be stable, one always can find an implied primacy of negativity in lieu of the problematic field spawned and spanned by the differential. We thus can observe the pouring of paradoxes also where the differential is rejected or neglected, as in Derrida’s approach, or in the related functionalist-formalist ethics of the Frankfurt School, namely that proposed by Habermas [4]. Paradoxes are like knots that always can be untangled in higher dimensions. Yet, this does NOT mean that everything could be smoothly tiled without frictions, gaps or contradictions.
Embracing the paradoxical thus means to deny the linear, to reject the origin and the absolute, the centre points [6] and the universal. We may perceive remote greetings from Nietzsche here4. Perhaps you have already classified the contextual roots of these hints: it is Gilles Deleuze to whom we refer here, and who may well be regarded as the first philosopher of open evolution, the first one who rejected idealism without sacrificing the Idea.5
In the hands of Deleuze—or should we say minds?—paradoxicality actualizes neither into paradoxes nor into idealistic dichotomic dialectics. A structural(ist) and genetic dynamism first synthesizes the Idea, and by virtue of the Idea, as well as the space and time immanent to the Idea, paradoxicality turns productive.7
Philosophy is revealed not by good sense but by paradox. Paradox is the pathos or the passion of philosophy. There are several kinds of paradox, all of which are opposed to the complementary forms of orthodoxy – namely, good sense and common sense. […] paradox displays the element which cannot be totalised within a common element, along with the difference which cannot be equalised or cancelled at the direction of a good sense. (DR227)
As our title already indicates, we not only presuppose and start with some main positions and concepts of Deleuzean philosophy, particularly those he developed in Difference and Repetition (D&R)8. There will be more details later9. We10 also attempt to contribute some “genuine” aspects to it. In some way, our attempt could be conceived as the development of an alternative to part V of D&R, entitled “Asymmetrical Synthesis of the Sensible”.
This Essay
Throughout the collection of essays about the “Putnam Program” on this site we have expressed our conviction that future information technology demands an assimilation of philosophy by the domain of computer science (e.g. see the superb book by David Blair, “Wittgenstein, Language and Information” [47]). There are a number of areas—of technical as well as societal or philosophical relevance—which give rise to questions that have already started to become graspable, not just in the computer sciences. How to organize the revision of beliefs?11 What is the structure of the “symbol grounding problem”? How to address it? Or how to avoid the fallacy of symbolism?12 Obviously we can’t tackle such questions without literacy about concepts like belief or symbol, which of course can’t be reduced to merely technical notions. Beliefs, for instance, can’t be reduced to uncertainty or its treatment, despite there being some tradition in analytical philosophy, the computer sciences and statistics to do so. Furthermore, with the advent of emergent mental capabilities in machines, ethical challenges appear. These challenges are on both sides of the coin: they relate to the engineers who are creating such instances as well as to the lawyers who, on the other side of the spectrum, have to deal with the effects and the properties of such entities, and even to “users” who have to build some “theory of mind” about them, a kind of folk psychology.
And last but not least, just the externalization of informational habits into machinal contexts often triggers pseudo-problems and “deep” confusion.13 Examples of such confusion are the question about the borders of humanity, i.e. a kind of defensive war fought by anthropology, or the issue of artificiality. Where does the machine end and where does the domain of the human start? How can we speak reasonably about “artificiality” if our brain/mind remains dramatically non-understood and thus is implicitly conceived by many as a kind of bewildering nature? And finally, how to deal with technological progress: when will computer scientists need self-imposed guidelines similar to those geneticists ratified for their community in 1974 during the Asilomar Conferences? Or are such guidelines illusionary or misplaced, because we are weaving ourselves so intensively into our new informational carpets—made from multi- or even meta-purpose devices—that are righteous flying carpets?
There is also a clearly recognizable methodological reason for bringing the inventioneering of advanced informational “machines” and philosophy closer together. The domain of machines with advanced mental capabilities—I deliberately avoid the traditional term “artificial intelligence”—, let us abbreviate it MMC, acquires ethical weight in itself. MMC establishes a subjective Lebenswelt (life form) that is strikingly different from ours and which we can’t understand analytically any more (if at all)14. The challenge then is how to talk about this domain. We should not repeat the same fallacy that anthropology and anthropological philosophy have been committing since Kant, where human measures have been applied (and still are today) to “nature”. If we are going to compare two different entities we need a differential position from which both can be instantiated. Note that no resemblance can be expected between the instances, nor between the instances and the differential. That differential is a concept, or an idea, and as such it can’t be addressed by any kind of technical perspective. Hence, questions of the mode of speaking can’t be conceived as a technical problem, especially not for the domain of MMC, also due to the implied self-referentiality of the mental itself.
Taken together, we may say that our motivation follows two lines. Firstly, the concern is about the problematic field, the problem space itself, about the possibility that problems could become visible at all. Secondly, there is a methodological position, characterisable as a differential, that is necessary to talk about incommensurable entities that are equipped with mental capacities.15
Both directions and all related problems can be addressed in one and the same move, or so at least is our proposal. The goal of this essay is the introduction and brief discussion of a still emerging conceptual structure that may be used as an image of thought, or likewise as a tool in the sense of an almost formal mental procedure, helping to avoid worries about the diagnosis—or supporting it—of the challenges opened by the new technologies. Of course, it will turn out that the result is not just applicable to the domain of the philosophy of technology.
In the following we will introduce a unique structure that has been inspired by heterogeneous philosophical sources, stretching from Aristotle to Peirce, from Spinoza to Wittgenstein, and from Nietzsche to Deleuze, to name but a few, just to give you an impression of what mindset you could expect. Another important source is mathematics, yet not used as a ready-made system for formal reasoning, but rather as a source for a certain way of thinking. Last, but not least, biology contributes as the home of the organon, of complexity, of evolution and, more formally, of self-referentiality. The structure we will propose as a starting point appears merely technical, thus arbitrary, and at the same time it draws upon the primary amalgamate of the virtual and the immanent. Its paradoxicality consists in its potential to describe the “pure” any, the Idea that comprises any beginning. Its particular quality, as opposed to any other paradoxicality, is caused by a profound self-referentiality that simultaneously leads to its vanishing, its genesis and its own actualization. In this way, the proposed structure solves a challenge that has been considered by many throughout the history of philosophy to be one of the most serious. The challenge in question is that of sufficient reason, justification and conditionability. To be more precise, the challenge is not solved; it is more correct to say that it is dissolved, made to disappear. In the end, the problem of sufficient reason will be marked as a pseudo-problem.
Here, a small remark to the reader is necessary. After some weeks of putting this down, it turned out that any (more or less) intelligible way of describing the issues exceeds the classical size of a blog entry. By now it comprises approx. 150,000 characters (incl. white space), which would amount to 42+ pages on paper. So it is more like a monograph. Still, I feel that there are many important aspects left out. Nevertheless I hope that you enjoy reading it.
The following provides a table of contents (active links) for the remainder of this essay:
2. Brief Methodological Remark
As we already noted, the proposed structure is self-referential. Self-referentiality also means that all concepts and structures needed for an initial description will be justified by the working of the structure, in other words, by its immanence. Actually, similarly to the concept of the Idea in D&R, virtuality and immanence come very close to each other; they are set to be co-generative. As an Idea, the proposed structure is complete. Like any other idea, it needs to be instantiated into performative contexts; thus it is to be conceived as an entirety, yet neither as a completeness nor as a totality. Yet, its self-referentiality allows for, and actually also generates, a “self-containment” that results in a fractal mirroring of itself, in a self-affine mapping. Metaphorically, it is a concept that develops like the leaf of a fern. Superficially, it could look like a complete and determinate entirety, but it is not, similar to area-covering curves in mathematics. Those fill a 2-dimensional area infinitesimally; yet, with regard to their production system they remain truly 1-dimensional. They are fractals, entities to which we can’t apply ordinal dimensionality. Such, our concept also develops into instances of fractal entirety.
For these reasons it would also be wrong to think that the structure we will describe in a moment is “analytical”, despite it being possible to describe its “frozen” form by means of references to mathematical concepts. Our structure must be understood as an entity that is not neutral or invariant against time: it forms its own sheafs of time (as I. Prigogine described it). Analytics is always blind to its generative milieu. Analytics can’t tell anything about the world, contrary to a widely exercised opinion. It is not really a surprise that Putnam recommended reducing the concept of the “analytic” to “an inexplicable noise”. Very basically, it is a linear endeavor that necessarily excludes self-referentiality. Its starting point is always based on an explicit reference to a kind of apparentness, or even revelation. Analytics not only presupposes a particular logic, but also conflates transcendental logic and practiced quasi-logic. Furthermore, the pragmatics of analysis claims to be free from constructive elements. All these characteristics do not apply to our proposal, which is as little “analytical” as the philosophy of Deleuze, where it starts to grow itself on the notion of the mathematical differential.
3. The Formal Structure
For the initial description of the structure we first need a space of expressibility. This space will then be equipped with some properties. Right at the beginning I would like to emphasize that the proposed structure does not by itself “explain” anything, just like a (philosophical) grammar. Rather, through its usage, that is, its unfolding in time, it shows itself and provides a stable as well as generative ground.
The space of the structure is not a Cartesian space, where some concepts are mapped onto orthogonal dimensions, or where concepts are thought to be represented by such dimensions. In a Cartesian space the dimensions are independent from each other.16 Objects are represented by the linear and additive combination of values along those dimensions, and thus their entirety gets broken up. We lose the object as a coherent object, and there would be no way to regain it later, regardless of the means and tools we might apply. Hence the Cartesian space is not useful for our purposes. Unfortunately, all of current mathematics is based on the Cartesian, analytic conception. Currently, mathematics is a science of control, or more precisely, a science about the arrangement of signs as far as it concerns linear, trivial machines that can be described analytically. There is not yet a mathematics of the organon. Probably category theory is a first step in its direction.
Instead, we conceive our space as an aspectional space, as we introduced it in a previous chapter. In an aspectional space, concepts are represented by “aspections” instead of “dimensions”. In contrast to the values in a dimensional space, values in an aspectional space cannot be changed independently from each other. More precisely, we always can keep at most one aspection constant, while the values along all others change simultaneously. (So-called ternary diagrams provide a distantly related example of this in a 2-dimensional space; the little formal sketch below makes the constraint explicit.) In other words, within the N-manifolds of the aspectional space, all values are always dependent on each other.
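Written in LaTeX notation, with a, b, c as mere placeholder aspections, the ternary constraint reads:

\[
a + b + c = 1, \qquad a, b, c \ge 0,
\]
\[
a \mapsto a + \delta \quad \Longrightarrow \quad (b + c) \mapsto (b + c) - \delta .
\]

Holding a constant still leaves b and c coupled through b + c = 1 - a; no value can be varied in isolation. This is exactly the dependency, here in its simplest conceivable form, that distinguishes aspections from dimensions.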
This aspectional space is equipped with a hyperbolic topological structure. The space of our structure is not flat. You may take M.C. Escher’s plates as a visualization of such a space. Yet, our space is different from such a fixed space; it is a relativistic space built from overlapping hyperbolic spaces. At each point in the space you will find a point of reference (“origin”) for a single hyperbolic reference system. Our hyperbolic space is locally centred. A mathematical field about comparable structures would be differential topology.
So far, the space is still quite easy and intuitive to understand. At least a visualization is still possible for it. This probably changes with the next property. Points in this aspectional space are not “points”; or, expressed in a better, less obscure way, our space does not contain points at all. In a Cartesian space, points are defined by one or more scales and their properties. For instance, in an x-y-coordinate system we could have real numbers on both dimensions, i.e. scales, or we could have integers on the first and reals on the second. The interaction of the number systems used to create a scale along a dimension determines the expressibility of the space. This way, a point is given as a fixed instance from a set of points as soon as the scale is given. Points themselves are thus said to be 0-dimensional.
Our “points”, i.e. the content of our space, are quite different from that. The space is not “made up” of inert and passive points but of the second differential, i.e. ultimately a procedure that invokes an instantiation. Our aspectional space thus is made from infinitesimal procedural sites, or “situs” as Leibniz probably would have said. If we were to represent physical space by a Cartesian dimensional system, the second derivative would represent an acceleration. Take this as a metaphor for the behavior of our space. Yet, our space is not a passive space. The second-order differential makes it an active space, and a space that demands activity. Without activity it is “not there”.
We also could describe it as the mapping of the intensity of the dynamics of transformation. If you tried to point to a particular location, or situs, in that space (which is of course excluded by its formal definition), you would instantaneously be “transported”, or transformed, such that you would find yourself elsewhere. Yet, this “elsewhere” cannot be determined in Cartesian ways: first, because that other point does not exist; second, because it depends on the interaction between the subject’s contribution to the instantiation of the situs and the local properties of the space. Finally, we can say that our aspectional space is not representational, as the Cartesian space is.
So, let us sum up the elemental17 properties of our space of expressibility:
• 1. The space is aspectional.
• 2. The topology of the space is locally hyperbolic.
• 3. The substance of the space is a second-order differential.
4. Mapping the Semantics
We are now going to map four concepts onto this space. These concepts are themselves Ideas in the Deleuzean sense, but they are also transcendental. They are indeterminate and real, just like virtual entities. As such, we take the chosen concepts as inexplicable, yet also as instantiable.
These four concepts have been chosen initially in a hypothetical gesture, such that they satisfy two basic requirements. First, it should not be possible to reduce them to one another. Second, together they should allow us to build a space of expressibility that contains as many philosophical issues of mentality as possible. For instance, it should contain any aspect of epistemology or of languagability, but it does not aim to contribute to the theory of morality, i.e. ethics, despite the fact that there is, of course, significant overlap. For instance, one of the possible goals could be to provide a space that allows us to express the relation between semiotics and any logic, or between concepts and models.
So, here are the four transcendental concepts that form the aspections of our space as described above:
• – virtuality
• – mediality
• – model
• – concept
Inscribing four concepts into a flat, i.e. Euclidean, aspectional space would result in a tetrahedral space. In such a space there would be “corners”, or points of inflection, which would represent the determinateness of the concepts mapped to the aspections. As we have emphasized above, our space is not flat, though. There is no static visualization possible for it, since our space can’t be mapped to the flat Euclidean space of a drawing, or to the space of our physical experience.
So let us proceed to the next level by resorting to the hyperbolic disc. If we take any two points inside the disc, their distance is determinate. Yet, if we take any two points at the border of the disc, the distance between those points is infinite from the inside perspective, i.e. for any perspective associated with a point within the disc. Likewise, the distance from any point inside the disc to the border is infinite. This provides a good impression of how transcendental concepts, which by definition can’t be accessed “as such”, or as a thing, can be operationalized by the hyperbolic structure of a space. Our space is more complicated, though, as it is not structured by a fixed hyperbolic topology that is, so to speak, global for the entire disc. The consequence is that our space does not have a border, while at the same time it remains an aspectional space. Turning the perspective around, we could say that the aspections are implied into this space.
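For the standard Poincaré disc model this behavior of distances can be stated precisely. Our space differs from that fixed model, as said, but the divergence of distances toward the border rests on the same mechanism:

\[
ds = \frac{2\,\lVert dx \rVert}{1 - \lVert x \rVert^{2}},
\qquad
d(0, r) = \ln\frac{1 + r}{1 - r} \;\to\; \infty \quad \text{as } r \to 1 .
\]

Every point at Euclidean radius r < 1 is at a finite hyperbolic distance from the centre, while the border at r = 1 is infinitely far away from every interior point; the transcendental concepts sit, so to speak, at that unreachable border.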
Let us now briefly visit these four concepts.
4.1. Virtuality
Virtuality describes the property of “being virtual”. Saying that something is virtual does not mean that this something does not exist, although the property of “existing” can’t be applied to it either. It is fully real, but not actual. Virtuality is the condition of potentiality, and as such it is a transcendental concept. Deleuze repeatedly emphasises that virtuality does not refer to a possibility. In the context of information technologies it is often said that this or that is “virtual”, e.g. virtualized servers, or virtual worlds. This usage is not the same as in philosophy, since, quite obviously, we use the virtual server as a server, and the world dubbed “virtual” indeed does exist in an actualized form. Yet, in both examples there is also some resonance with the philosophical concept of virtuality. But this virtuality is not exclusive to the simulated worlds, the informationally defined server instances or the WWW. Virtualization is, as we will see in a moment, implied by any kind of instance of mediality.
As just said, virtuality and thus also potentiality must be strictly distinguished from possibility. Possible things, even if not yet present or existent, can be thought of in a quasi-material way, as if they existed in their material form. We even can say that possible things and the possibilities of things are completely determined at any given moment. The same cannot be said about potentiality. Yet, without the concept of potentiality we could not speak about open evolutionary processes. Neglecting virtuality is thus necessarily equivalent to the apriori claim of determinateness, which is methodologically and ethically highly problematic.
The philosophical concept of virtuality has been known since Aristotle. Recently, Bühlmann18 brought it into the vicinity of semiotics and the question of reference19 in her work about mediality. There would be much, much more to say about virtuality here; alas, the space is missing…
4.2. Mediality
Mediality, that is, the medial aspects of things, facts and processes, belongs to the most undervalued concepts nowadays, even as we get some exercise by means of so-called “social media”. That term perfectly puts this blind spot on stage through its emphasis: neither is there any mediality without sociality, nor any sociality without mediality. Mediality is the concept that has been “discovered” last among our small group. There is a growing body of publications, but many are—astonishingly—deeply infected by romanticism or positivism20, with only a few exceptions.21 Mediality comprises issues like context, density, or transformation qua transfer. Mediality is a concept that helps to focus on the appropriate level of integration in populations or flows when talking about semantics or meaning and their dynamics. Any thing, whether material or immaterial, that occurs in a sufficient density in its manifoldness may develop a mediality within a sociality. Mediality as a “layer of transport” is co-generative with sociality. Media are never neutral with respect to the transported, albeit one can often find counteracting forces here.
Signs and symbols could not exist as such without mediality. (Yet, this proposal is based on the primacy of interpretation, which is rejected by the modernist set of beliefs. The costs of this are, however, tremendous, as we argue here.) The same is true for words and language as a whole. In real contexts we usually find several, if not many, medial layers. Of course, signs and symbols are not exhaustively described by mediality. They need reference, which is a compound that comprises modeling.
4.3. Model
Models and modeling need not be explicated much any more, as they are one of the main issues throughout our essays. We just would like to recall the obvious fact that a “pure” model is not possible. We need symbols and rules, e.g. about their creation or usage, and necessarily neither is a subject of the model itself. Most significantly, models need a purpose, a concept to which they refer. In fact, any model presupposes an environment, an embedding that is given by concepts and a particular social embedding. Additionally, models would not be models without virtuality. On the one hand, virtuality is implied by the fact that models are incarnations of specific modes of interpretation; on the other hand, they imply virtuality themselves, since they are, well, just models.
We frequently mentioned that it is only through models that we can build up references to the external world. Of course, models are not sufficient to describe that referencing. There is also the contingency of the manifold of populations and the implied relations as quasi-material arrangements that contribute to the reference of the individual to the common. Yet, only modeling allows for anticipation and purposeful activity. It is only through models that behavior is possible, insofar as any behavior is already differentiated behavior. Models are thus the major site where information is created. It is not just by chance that the 20th century experienced the abundance of models and of information as concepts.
In mathematical terms, models can be conceived as second-order categories. More profanely, but equivalently, we can say that models are arrangements of rules for transformation, as the sketch below tries to render palpable. This implies the whole issue of rule-following as it has been investigated and formulated by Wittgenstein. Note that rule-following itself is a site of paradoxicality. As there is no private language, there is also no private model. Philosophically, and a bit more abstractly, we could describe models as the compound of providing the possibility for reference (they are one of the conditions for such) and the institutionalized site for creating (f)actual differences.
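To render the phrase “arrangements of rules for transformation” a bit more palpable, consider the following minimal sketch; the rules, names and the threshold are pure assumptions made for illustration, not a claim about how models “really” work:

```python
# A minimal sketch (our illustration): a model as an ordered
# arrangement of rules for transformation, plus a purpose-bound readout.
from typing import Callable, List

Rule = Callable[[float], float]

def make_model(rules: List[Rule], threshold: float) -> Callable[[float], bool]:
    """Compose an ordered arrangement of rules into a usable model."""
    def model(observation: float) -> bool:
        value = observation
        for rule in rules:            # the "arrangement" of rules
            value = rule(value)
        return value > threshold      # the purpose: producing a usable difference
    return model

# Hypothetical rules; note that the choice of rules and of the threshold,
# i.e. the purpose, is itself not part of the model.
normalize = lambda x: x / 100.0
emphasize = lambda x: x ** 2
is_salient = make_model([normalize, emphasize], threshold=0.25)

print(is_salient(80.0))   # True, since (80/100)**2 = 0.64 > 0.25
print(is_salient(30.0))   # False, since (30/100)**2 = 0.09 < 0.25
```

Note how the sketch restates the point made above: the rules about creation and usage, and the purpose encoded in the threshold, are external to the model itself.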
4.4. Concepts
“Concept” is probably one of the most abused, or at least misunderstood, concepts, at least in modern times. So-called Analytical Philosophy claims over and over again that concepts could be explicated unambiguously, that concepts could be clarified or defined. In this way, the concept and its definition are equated. Yet, a definition is just a definition, not a concept. The language game of the definition makes sense only in a tree of analytical proofs that started with axioms. Definitions need not be interpreted. They are fully given by themselves. Thus, the idea of clarifying a concept is nothing but an illusion. Deleuze writes (DR228):
It is not surprising that, strictly speaking, difference should be ‘inexplicable’. Difference is explicated, but in systems in which it tends to be cancelled; this means only that difference is essentially implicated, that its being is implication. For difference, to be explicated is to be cancelled or to dispel the inequality which constitutes it. The formula according to which ‘to explicate is to identify’ is a tautology.
Deleuze points to the particular “mechanism” of eradication by explication, which is equal to its transformation into the sayable. There is a difference between 5 and 7, but the arithmetic difference does not cover all aspects of difference. Yet, by explicating the difference using some rules, all the other differences except the arithmetic one vanish. Thus, this inexplicability is not limited to the concept of difference. In some important way, these other aspects are much more interesting and important than the arithmetic operation itself or its result. Actually, we can understand differencing only insofar as we are aware of these other aspects.
Elsewhere, we already cited Augustine and his remark about time:22 “What, then, is time? If no one ask of me, I know; if I wish to explain to him who asks, I know not.” Here, we can observe at least two things. Firstly, this observation may well be interpreted as the earliest rejection of “knowledge as justified belief”, a perspective which became popular in modernism. Meanwhile it has been proven inadequate by the so-called Gettier problem. The consequences for the theory of databases, or machine-based processing of data, can’t be overestimated. It clearly shows that knowledge can’t be reduced to confirmed hypotheses qua validated models, and belief can’t be reduced to a kind of pre-knowledge. Belief must be something quite different.
The second thing to observe in these two examples concerns the status of interpretation. While Augustine seems to be somewhat desperate, at least for a moment23, analytical philosophy tries to abolish the annoyance of indeterminateness by killing the freedom inherent to interpretation, a killing which always and inevitably happens if the primacy of interpretation is denied.
Of course, the observed indeterminateness is equally not limited to time. Whenever you try to explicate a concept, whether you describe it or define it, you find the insurmountable difficulty of picking one of many interpretations. Again: There is no private language; meaning, references and signs exist only within social situations of interpretation. In other words, we again find the necessity of invoking the other conceptual aspects from which we build our space. Without models and mediality there is no concept. And even more profoundly than models, concepts imply virtuality.
In the opposite direction we can now understand that these four concepts are not only not reducible to each other. They are dependent on each other and—somewhat paradoxically—they even counteract each other competitively. From this we can expect an abstract dynamics somewhat reminiscent of the patterns evolving in reaction-diffusion systems. These four concepts imply the possibility for a basic creativity in the realm of the Idea, in the indeterminate zone of actualisation that will result in a “concrete” thought, or at least the experience of thinking.
Before we proceed we would like to introduce a notation that should be helpful in avoiding misunderstandings. Whenever we refer to the transcendental aspects between which the aspections of our space stretch out, we use capital letters and mark them additionally by a bar, such as “_Concept” or “_Model”. The whole set of aspects we denote by “_A”, while its unspecified items are indicated by “_a”.
5. Anti-Ontology: The T-Bar-Theory
The four conceptual aspects _A play different roles. They differ in the way they get activated. This becomes visible as soon as we use our space as a tool for comparing various kinds of mental concepts or activities, such as believing, referring, explicating or understanding. These we will inspect later in detail.
Above we described the impossibility of explicating a concept without departing from its “conceptness”. Well, such a description is actually not appropriate according to our aspectional space. The four basic aspections are built by transcendental concepts. There is a subjective, imaginary yet pre-specific scale along those aspections. Hence, in our space “conceptness” is not a quality, but an intensity, or almost a degree, a quantity. The key point then is that a mental concept or activity always relates to all four transcendental aspections in such a way that the relative location of the mental activity can’t be changed along just a single aspect alone.
We can also recognize another significant step that is provided by our space of expressibility. Traditionally, concepts are used as existential signifiers, in philosophy often called qualia. Such existential signifiers are only capable of indicating presence or absence, and are thus confined to a naive ontology of Hamletian style (to be or not to be). It is almost impossible to build a theory or a model from existential signifiers. From the modeling or measurement-theory point of view, concepts are on a binary scale. Although concepts collect a multitude of such binary usages, appropriate modeling remains impossible due to the binary scale, unless we were to probabilize all potential dual pairs. (A small sketch of this contrast follows below.)
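The contrast just drawn can be sketched in a few lines; the concept, its contexts and the counts are invented solely for illustration:

```python
# A minimal sketch: an "existential signifier" versus a probabilized concept.
from collections import Counter

# An existential signifier works on a binary scale: a concept either
# applies or it does not.
applies_to = {"chess": True, "war": True, "pricing": False}

# A probabilized concept instead carries a distribution over its usages,
# a profile that can be compared, scaled and modeled.
usage_counts = Counter({"play": 41, "contest": 23, "strategy": 18, "ritual": 6})
total = sum(usage_counts.values())
usage_profile = {context: round(count / total, 3)
                 for context, count in usage_counts.items()}

print(applies_to["chess"])   # just: present or absent
print(usage_profile)         # a quantitative, comparable profile
```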
Similarly to the case of logic, we also have to distinguish the transcendental aspects _a, that is, the _Model, _Mediality, _Concept, and _Virtuality, from the respective entities that we find in applications. Those practiced instances of _a are just that: instances. That is: instances produced by orthoregulated habits. Yet, the instances of _a that could be gained through the former’s actualization do not form singularities, or even qualia. Any _a can be instantiated into an infinite diversity of concrete, i.e. definable and sayable, abstract entities. That’s the reason for the kinship between probabilistic entities and transcendental perspectives. We could operationalize the latter by the former, even if we have to distinguish sharply between possibility and potentiality. Additionally, we have to keep in mind that the concrete instances do not live independently from their transcendental ancestry24.
Deleuze provides us with a nice example of this dynamics at the beginning of part V of D&R. For him, “divergence” is an instance of the transcendental entity “Difference”.
What he calls “phenomenon” we dubbed “instance”, which is probably more appropriate in order to avoid the reference to phenomenology and the related difficulties. Calling it “phenomenon” pretends—typically for any kind of phenomenology or ontology—a deeply unjustified independence of mentality from its underlying physicality.
The significance of this step from existential signifiers to the situs in a space of expressibility, made possible by our aspectional space, can’t be overestimated. Take for instance the infamous question that attracted so many misplaced answers: “How do words or concepts acquire reference?” This question appears to be especially troubling because signs refer only to signs.25 In existential terms, and all the terms in that question are existential ones, this question can’t be answered, indeed not even addressed at all. As a consequence, deep mystical chasms unnecessarily keep separating the world from the concepts. Any resulting puzzle is based on a misconception. Think of Plato’s chorismos (Greek for “separation”) of explanation and description, which recently has been taken up, refreshed and declared a “chasm” by Epperson [31] (a theist realist, according to his own positioning; we will meet him later again). The various misunderstandings are well-known, ranging from nominalism to externalist realism to scientific constructivism.
They all vanish in a space that overcomes the existentiality embedded in those terms. Mathematically speaking, we have to represent words, concepts and references as probabilized entities, as quasi-species, as Manfred Eigen called it in a different context, in order to avoid naive mysticism regarding our relations to the world.
It seems that our space provides the possibility for measuring and comparing different ways of instantiation for _A, a kind of stable scale. We may use it to access concepts differentially, that is, we are now able to transform concepts in a space of quantitability (a term coined by Vera Bühlmann). The aspectional space as we have constructed it is thus necessary even in order to talk just about modeling. It would provide the possibility for theories about any transition between any mental entities one could think of. For instance, if we conceive “reference” as the virtue of purposeful activity and anticipation, we could explore and describe the conditions for the explication of the path between the _Model on the one side and the _Concept on the other. On this path—which is open on both sides—we could, for instance, first meet different kinds of symbols near the _Model, starting with the idealization and naming of models, followed by the mathematical attitude concerning the invention and treatment of signs, _Logic and all of its instances, semiosis and signs, words, and finally concepts, not forgetting above all that this path necessarily implies a particular dynamics regarding _Mediality and _Virtuality.
Such an embedding of transformations into co-referential transcendental entities is all we can expect to “know” reliably. That was the whole point of Kant. Well, here we can be more radical than Kant dared to be. The choreostemic space is a rejection of the idea of “pure thought”, or pure reason, since such knowledge would need to undergo a double instantiation, and this brings subjectivity back in. It is just a phantasm to believe that propositions could be secured up to “truth”. This is true even for the least common denominator, existence.
I think that we cannot know whether something exists or not (here, I pretend to understand the term exist), that it is meaningless to ask this. In this case, our analysis of the legitimacy of uses has to rest on something else. (David Blair [49])
Note that Blair is very careful in his wording here. He does not claim any universality regarding the justification, or legitimization. His proposal is simply that any reference to “Being” or “Existence” is useless a priori. Claiming seriousness of ontology as an aspect of, or even as, an external reality immediately instantiates the claim of an external reality as such, which would be such-and-such irrespective of its interpretation. This, in turn, would consequently amount to a stance that sets the proof of the irrelevance of interpretation and of interpretive relativism as a goal. Any familiar associations about that? Not least do physicists, but only physicists, speak of “laws” in nature. All of this is, of course, unholy nonsense, propaganda and ideology at least.
As a matter of fact, even in a quite strict naturalist perspective, we need concepts and models. Those are obviously not part of the “external” nature. Ontology is an illusion, completely and in any of its references, leading to pseudo-problems that are indeed “very difficult” to “solve”. Even if we manage to believe in “existence”, it remains a formless existence, or more precisely, it has to remain formless. Any ascription of form would immediately strike back as a denial of the primacy of interpretation, hence end in a naturalist determinism.
Before addressing the issue of the topological structure of our space, let us trace some other figures in our space.
6. Figures and Forms
Whenever we explicate a concept we imply or refer to a model. In a more general perspective, this applies to virtuality and mediality as well. To give an example: Describing a belief does not mean to believe, but to apply a model. The question now is how to revert the accretion of mental activities towards the _Model. _Virtuality can’t be created deliberately, since in this case we would refer again to the concept of model. Speaking about something, that is, saying in the Wittgensteinian sense, intensifies the _Model.
It is not too difficult, though, to find some candidate mechanics that turns the vector of mental activity away from the _Concept. It is through performance, mere action without explicable purpose, that we introduce new possibilities for interpretation and thus also enriched potential as the (still abstract) instance of _Virtuality.
In contrast to that, the _Concept is implied. The _Concept can only be demonstrated, even by modeling. Traveling on some path that is heading towards the _Model, the need for interpretation continuously grows; hence, the more we try to approach the “pure” _Model, the stronger is the force that will flip us back towards the _Concept.
_Mediality, finally, the fourth of our aspects, binds its immaterial colleagues to matter, or quasi-matter, in processes that are based on the multiplicity of populations. It is through _Mediality and its instances that chunks of information start to behave as a device, as a quasi-material arrangement. The whole dynamics between _Concepts and _Models requires a symbol system, which can evolve only through the reference to _Mediality, which in turn is implied by populations of processes.
Above we said that the motivation for this structure is to provide a space of expressibility for mental phenomena in their entirety. Mental activity does not consist of isolated, rare events. It is a multitude of flows integrated into various organizational levels, even if we consider only the language part. Mapping these flows into our space raises the question whether we could distinguish different attractors, different forms of recurrence.
Addressing this question establishes an interesting configuration, since we are talking about the form of mental activities. Perhaps it is also appropriate to call these forms “mental styles”. In any case, we may take our space as a tool to formalize the question about potential classes of mental styles. In order to render our space more accessible, we take the tetrahedral body as a (crude) approximating metaphor for it.
Above we stressed the point that any explication intensifies the _Model aspect. Transposed into a Cartesian geometry we would have said—metaphorically—that explication moves us towards the corner of the model. Let us stick to this primitive representation for a moment, in favour of a more intuitive understanding. Now imagine constructing a vector that points away from the model corner, right to the middle of the area spanned by virtuality, mediality and concept (a small computational rendering of this picture follows below). It is pretty clear that mental activity that leaves the model behind in this way, and quite literally so, will be some form of basic belief, or revelation. Religiosity (as a mental activity) may well be described as the attempt to balance virtuality, mediality and concept without resorting to any kind of explication, i.e. models. Of course, this is not possible in an absolute manner, since it is not possible to move in the aspectional space without any explication. This in turn yields a residual that again points towards the model corner.
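For readers who prefer the crude geometry to be computable, here is a small rendering of that picture; the coordinates assigned to the four corners are assumptions made purely for illustration:

```python
# A crude numerical rendering of the tetrahedron metaphor.
import numpy as np

corners = {
    "Model":      np.array([ 1.0,  1.0,  1.0]),
    "Virtuality": np.array([ 1.0, -1.0, -1.0]),
    "Mediality":  np.array([-1.0,  1.0, -1.0]),
    "Concept":    np.array([-1.0, -1.0,  1.0]),
}

# The vector pointing away from the model corner, towards the middle of
# the face spanned by virtuality, mediality and concept.
opposite_face_center = np.mean(
    [corners[name] for name in ("Virtuality", "Mediality", "Concept")], axis=0)
away_from_model = opposite_face_center - corners["Model"]

print(away_from_model / np.linalg.norm(away_from_model))
# Any actual position reached along this direction still keeps a residual
# relation to the Model corner: in the metaphor, no explication-free
# position exists.
```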
Inversely, it is not possible to move only in the direction of the _Model. Nevertheless, there are still many people proposing such a move; think for instance of (abundant as well as overdone) scientism. What we can see here are particular forms of mental activity. What about other forms? For instance, the fixed-point attractor?
As we have seen, our aspectional space does not allow for points as singularities. Both the semantics of the aspections and the structure of the space as a second-order differential prevent them. Yet, somebody could attempt to realize an orbit around a singularity that is as narrow as possible. Although such points of absolute stability are completely illusory, the idea of the absoluteness of ideas—idealism—represents just such an attempt. Yet, the claim of absoluteness brings mental activity to rest. It is not by accident, therefore, that it was the logician Frege who championed a rather strange kind of hyperplatonism.
At this point we can recognize the possibility of describing different forms of mental activity using our space. Mental activity draws specific trails into our space. Moreover, our suggestion is that people prefer particular figures for whatever reasons, e.g. due to their cultural embedding, their mental capabilities, their knowledge, or even due to their basic physical constraints. Our space allows us to compare, and perhaps even to construct or evolve, particular figures. Such figures could be conceived as the orthoregulative instance for the conditions to know. Epistemology thus loses its claim of universality.
It seems obvious to call our space a “choreostemic” space, a term which refers to choreography. Choreography means to “draw a dance”, or “drawing by dancing”, derived from Greek choreia (χορεύω) for “dancing, (round) dance”. Vera Bühlmann [19] described that particular quality as “referring to an unfixed point loosely moving within an occurring choreography, but without being orchestrated prior to and independently of such occurrence.”
The notion of the choreosteme also refers to the chorus of the ancient theatre, with all its connotations, particularly the drama. Serving as an announcement for part V of D&R, Deleuze writes:
However, what carries out the third aspect of sufficient reason—namely, the element of potentiality in the Idea? No doubt the pre-quantitative and pre-qualitative dramatisation. It is this, in effect, which determines or unleashes, which differenciates the differenciation of the actual in its correspondence with the differentiation of the Idea. Where, however, does this power of dramatisation come from? (DR221)
It is right here that the choreostemic space links in. The choreostemic space does not abolish the dramatic in the transition from the conditionability of Ideas into concrete thoughts, but it allows us to trace and to draw, to explicate and negotiate the dramatic. In other words, it opens the possibility for a completely new game: dealing with mental attitudes. Without the choreostemic space this game is not even visible, which itself has rather unfortunate consequences.
The choreostemic space is not an epistemic space either. Epistemology is concerned with the conditions that influence the possibility to know. Literally, episteme means “to stand near”, or “to stand over”. It draws upon a fixed perspective that is necessary to evaluate something. Yet, in the last 150 years or so, philosophy has definitely experienced the difficulties implied by epistemology as an endeavour that had been expected to contribute finally to the stabilization of knowledge. I think the choreostemic space could be conceived as a tool that allows us to reframe the whole endeavour. In other words, the problematic field of the episteme, and the related research programme “epistemology”, follow an architecture (or intention) that has been set up far too narrowly. That reframing, though, has become accessible only through the “results” of—or the tools provided by—the work of Wittgenstein and Deleuze. Without the recognition of the role of language and without a renewal of the notion of the virtual, including the invention of the concept of the differential, that reframing would not have been possible at all.
Before we go on to discuss the scope of the choreostemic space and the purposes it can serve, we have to correct the Cartesian view that slipped in through our metaphorical references. The Cartesian flavour not only keeps a certain arbitrariness alive, as the four conceptual aspects _A are given just by some subjective empirical observations. It also keeps us stuck completely within the analytical space, hence within a closed approach that again would need a mystical external instance for its beginning. This we have to correct now.
7. Reason and Sufficiency
Our choreostemic space is built as an aspectional space that is spanned by transcendental entities. As such, they reflect the implied conditionability of concrete entities like definitions, models or media. The _Concept comprises any potential concrete concept, the _Model comprises any actual model of whatsoever kind, expressed in whatsoever symbolic system, and the _Mediality contains the potential for any kind of media, whether more material or more immaterial in character. The transcendental status of these aspects also means that we can never “access” them in their “pure” form. Yet, due to these properties our space allows us to map any mental activity, not just that of the human brain. In a more general perspective, our space is the space where the _Comparison takes place.
The choreostemic space is of course itself a model. Given the transcendentality of the four conceptual aspects _A, we can grasp the self-referentiality. Yet, this results neither in an infinite regress nor in circularity. That would be the case only if the space were Cartesian and the topological structure were flat (Euclidean) and global.
First, we have to consider that the choreostemic space is not only a model, precisely due to its self-referentiality. Second, it is a tool, and as such it is not time-inert like a physical law. Its relevance unfolds only if it is used. This, however, invokes time and activity. Thus the choreostemic space could also be conceived as a means to intensify the virtual aspects of thought. Furthermore, and third, it is of course a concept, that is, an instance of the _Concept. As such, it should be constructed in a way that abolishes any possibility of a Cartesio-Euclidean regression. All these aspects are covered by the topological structure of the choreostemic space: It is meant to be a second-order differential.
A space made by the second-order differential does not contain items. It spawns procedures. In such a space it is impossible to stay at a fixed point. Whenever one tried to determine a point, one would be accelerated away. The whole space causes divergence of mental activities. Here we find the philosophical reason for the impossibility of catching a thought as a single entity.
We just mentioned that the choreostemic space does not contain items. Due to the second-order differential it is not made up as a set of coordinates, or, if we considered real scaled dimensions, as potential sets of coordinates. Quite the opposite: there is nothing determinable in it. Yet, in hindsight we can reconstruct figures in a probabilistic manner. The subject of this probabilism is again not determinable coordinates, but rather clouds of probabilities, quite similar to the way things are described in quantum physics by the Schrödinger equation. Unlike the completely structureless and formless clouds of probability which are used in the description of electrons, the figures in our space can take various, more or less stable forms. This means that we can try to evolve certain choreostemic figures and even anticipate them, but only to a certain degree. The attractor of a chaotic system provides a good metaphor for that: We clearly can see the traces in parameter space as drawn by the system, yet the system’s path as described by a sequence of coordinates remains unpredictable. Nevertheless, the attractor is probabilistically confined to a particular, yet cloudy “figure,” that is, an unsharp region in parameter space. Transitions are far from arbitrary.
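Since we invoked the attractor metaphor, a tiny simulation may make it tangible. The following sketch uses the classical Lorenz system with textbook parameters, merely as a stand-in illustration, not as a model of the choreostemic space itself:

```python
# A minimal sketch of the attractor metaphor: two trajectories that start
# almost identically diverge quickly (the path is unpredictable), yet both
# remain confined to the same cloudy "figure" in parameter space.
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    velocity = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * velocity      # simple Euler integration

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])    # an almost identical starting point

for _ in range(10000):
    a, b = lorenz_step(a), lorenz_step(b)

print("separation:", np.linalg.norm(a - b))   # grown by many orders of magnitude
print("endpoints:", a, b)                     # yet both stay in a bounded region
```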
Hence, we would propose to conceive the choreostemic space as being made up from probabilistic situs (pl.). Transitions between situs are at the same time also transformations. The choreostemic space is embedded in its own mediality without excluding roots in external media.
Above we endowed the space with a hyperbolic topology in order to align it with the transcendentality of the conceptual aspects. It is quite important to understand that the choreostemic space does not implement a single, i.e. global, hyperbolic relation. In contrast, each situs serves as a point of reference. Without this relativity, the choreostemic space would be centred again, and in consequence it would turn again to the analytic and totalising side. This relativity can be regarded as the completed and subjectivising Cartesian delocalization of the “origin”. It is clear that the distance measures of any two such relative hyperbolic spaces no longer coincide. There is neither a priori objectivity, nor could we expect a general mapping function. Approximate agreement about distance measures may be achievable only for reference systems that are rather close to each other.
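One feature of such a hyperbolic arrangement can be made explicit with a few lines of elementary mathematics; the sketch below, an illustration under the usual Poincaré disk convention rather than a construction of the choreostemic space itself, shows why the transcendental aspects, thought of as sitting at the boundary, remain unreachable:

```python
# On the Poincaré disk, the distance from a point of reference at the
# centre to a point at Euclidean radius r is 2*artanh(r), which diverges
# as r approaches the boundary.
import math

def hyperbolic_distance_from_origin(r: float) -> float:
    """Poincaré disk distance from the origin to a point at radius r < 1."""
    return 2.0 * math.atanh(r)

for r in (0.5, 0.9, 0.99, 0.999999):
    print(f"r = {r:<9} -> hyperbolic distance = {hyperbolic_distance_from_origin(r):7.2f}")
# However small the remaining Euclidean gap to the boundary, the
# hyperbolic distance keeps growing without bound.
```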
The choreostemic space comprises any condition of any mental attitude or thought. We already mentioned it above: The corollary of that is that the choreostemic space is the space of _Comparison as a transcendental category.
It comprises the conditions for the whole universe of Ideas; it is an entirety. Here, it is again the topological structure of the space that saves us from mental dictatorship. We have to perform a double instantiation in order to arrive at a concrete thought. It is somewhat important to understand that these instantiations are orthoregulated.
It is clear that the choreostemic space destroys the idea of a uniform rationality. Rationality can’t be tied to truth, justice or utility in an objective manner, even if we softened objectivity into a kind of relaxed intersubjectivity. Rationality depends completely on the preferred or practiced figures in the choreostemic space. Two persons, or more generally, two entities with some mental capacity, could completely agree on the facts, that is, on the percepts, the way of their construction, and the relations between them, and nevertheless assign them completely different virtues and values, simply because the two entities inhabit different choreostemic attractors. Rationality is global within a specific choreostemic figure, but local and relative when regarded across figures. The language game of rationality therefore does not refer to a particular attitude towards argumentation; quite in contrast, it includes and displays the will to establish, if not to enforce, uniformity. Rationality is the label for the will to power under the auspices of logic and reductionism. It serves as the display for certain, quite critical moral values.
Thus, the notion of sufficient reason loses its frightening character as well. Like any other principle of practice it gets transformed into a strictly local principle, retaining some significance only with regard to situational instrumentality. Since the choreostemic space is a generative space, locality comprises temporal locality as well. According to the choreostemic space, sufficient reasons can’t even be transported between subsequent situations. In terms of the choreostemic space, notions like rationality or sufficient reason are relative to a particular attractor. In different attractors their significance could be very different; they may bear very different meanings. Viewed from the opposite direction, we also can see that a more or less stable attractor in the choreostemic space first has to form, or: to be formed, before there is even the possibility for sufficient reasons. This runs strictly parallel to Wittgenstein’s conception of logic as a transcendental a priori that possibly becomes instantiated only within the process of an unfolding Lebensform. As a contribution to political reason, the choreostemic space makes it conceivable that persons inhabit different attractors, following different mental styles. Later, we will return to this aspect.
In D&R, Deleuze explicated the concept of the “Image of Thought”, as part III of D&R is titled. There he first discusses what he calls the dogmatic image of thought, comprising, according to him, eight elements that together lead to the concept of the idea as a representation (DR167). Following that, he insists that the idea is bound to repetition and difference (as differenciation and differentiation), where repetition introduces the possibility of the new, as it is not the repetition of the same. Nevertheless, Deleuze didn’t develop this Image into a multiplicity, as could have been expected from a more practical perspective, i.e. the perspective of language games. These games remain different from his notion, even though he emphasizes at several instances that language is a rich play.
To me it seems that Deleuze didn’t (want to) get rid of ontology; hence he did not conceive of his great concept of the “differential” as a language game, and in turn he failed to detect the opportunity for self-referentiality, or even to apply it in a self-referential manner. We therefore certainly do not agree with his attempt to ground the idea of sufficient reason as a global principle. Since “sufficient reason” is a practice, I think it is not possible, or not sufficient, to conceive of it as a transcendental guideline.
8. Elective Kinships
It is pretty clear that the choreostemic space is applicable to many problematic fields concerning mental attitudes, and hence concerning cultural issues at large, reaching far beyond the specificity of individual domains.
As we will see, the choreostemic space may serve as a treatment for several kinds of troublesome aberrances, in philosophy itself as well as in its various applications. Predominantly, the choreostemic space provides the evolutionary perspective towards a self-containing theoretical foundation of plurality and manifoldness.26 Comparing that with Hegel’s slogans of “the synthesis of the nation’s reason” (“Synthese des Volksgeistes”) or “The Whole is the Truth” (“Das Ganze ist das Wahre”) shows the difference regarding level and scope.
Before we go into the details of the dynamics that unfolds in the choreostemic space, we would like to pick up on two areas, the philosophy of the episteme and the relationship between anthropology and philosophy.
8.1. Philosophy of the Episteme
The choreostemic space is not about a further variety of some epistemological argument. It is thought of as a reframing of the concerns that have been addressed traditionally by epistemology. (Here, we already would like to warn of the misunderstanding that the choreostemic space exhausts itself in epistemology.) Hence, it should be able to serve as the theoretical frame for the sociology of science or the philosophy of science as well. Think of the work of Bruno Latour [9], Karin Knorr Cetina [10] or Günther Ropohl [11] for the sociology of science, or the work of van Fraassen [12] or Giere [13] for the field of philosophy of science. Sociology and philosophy, and quite likely any of the disciplines in the human sciences, should indeed establish references to the mental in some way, but rather not to the neurological level, nor—since we have to avoid anthropological references—to cognition as it is currently understood in psychology.
Giere, for instance, brings the “cognitive approach” and hence the issue of practical context close to the understanding of science, criticizing the idealising projection of unspecified rationality:
Philosophers’ theories of science are generally theories of scientific rationality. The scientist of philosophical theory is an ideal type, the ideally rational scientist. The actions of real scientists, when they are considered at all, are measured and evaluated by how well they fulfill the ideal. The context of science, whether personal, social or more broadly cultural, is typically regarded as irrelevant to a proper philosophical understanding of science. (p.3)
The “cognitive approach” that Giere proposes as a means to understand science is, however, threatened seriously by the fact that there is no consensus about the mental. This clearly conflicts with the claim of trans-cultural objectivity of contemporary science. Concerning cognition, there are still many simplistic paradigms around, recently seriously renewed by the machine learning community. Aaron Ben Ze’ev [14] writes critically:
In the schema paradigm [of the mind, m.], which I advocate, the mind is not an internal container but a dynamic system of capacities and states. Mental properties are states of a whole system, not internal entities within a particular system. […] Novel information is not stored in a separate warehouse, but is ingrained in the constitution of the cognitive system in the form of certain cognitive structures (or schemas). […] The attraction of the mechanistic paradigm is its simplicity; this, however, is an inadequate paradigm, because it fails to explain various relevant phenomena. Although the complex schema paradigm does not offer clear-cut solutions, it offers more adequate explanations.
How problematic even such critiques are can be traced as soon as we remember Wittgenstein’s remark on “mental states” (Brown Book, p.143):
There is a kind of general disease of thinking which always looks for (and finds) what would be called a mental state from which all our acts spring as from a reservoir.
In the more general field of epistemology there is still no sign of any agreement about the concept of knowledge. From our position, this is little surprising. First, concepts can’t be defined at all. All we can find are local instances of the transcendental entity. Second, knowledge, and even its choreostemic structure, is dependent on the embedding culture while at the same time it is forming that culture. The figures in the choreostemic space are attractors: They do not prescribe the next transformation, but they constrain the possibility for it. How, then, could one ever “define” knowledge in an explicit, positively representationalist manner? For instance, knowledge can’t be reduced to confirmed hypotheses qua validated models. It is just impossible in principle to say “knowledge is…”, since this inevitably implies the demand for an objective justification. At most, we can take it as a language game. (Thus the choreosteme, that is, the potential of building figures in the choreostemic space, should not be conflated with the episteme! We will return to this issue later.)
Yet, just pointing to the category of the mental as a language game does not feel satisfying at all. Of course, Wittgenstein’s work sheds bright light on many aspects of mentality. Nevertheless, we can’t use Wittgenstein’s work as a structure; it is itself to be conceived as the result of a certain structuredness. On the other hand, it is equally disappointing to rely on the scientific approach to the mental. In some way, we need a balanced view, which additionally should provide the possibility for a differential experimentation with mechanisms of the mental.
Just that is offered by the choreostemic space. We may relate disciplinary reductionist models to concepts as they live in language games, without any loss and without getting into trouble.
Let us now see what is possible by means of the choreostemic space and the anti-ontological T-Bar-Theory for the terms believing, referring, explicating, understanding and knowing. It might be relevant to keep in mind that by “mental activities” we do not refer to any physical or biochemical process. We distinguish the mental from the low-level affairs in the brain. Beliefs, or believing, are thus considered to be language games. From that perspective our choreostemic space just serves as a tool to externalize language in order to step outside of it, or likewise, to become able to render important aspects of playing the language game visible.
The category of beliefs, or likewise the activity of believing27, we already met above. We characterised it as a mental activity that leaves the model behind. We sharply refute the quite abundant conceptualisation of beliefs as a kind of uncertainty in models. Since there is no certainty at all, not even with regard to transcendental issues, such a conceptualisation would make little sense. Actually, the language game of believing shows its richness even in a short investigation like this one.
Before we go into details here, let us see how others conceive of it. PMS Hacker [27] gave the following summary:
Over the last two and a half centuries three main strands of opinion can be discerned in philosophers’ investigations of believing. One is the view that believing that p is a special kind of feeling associated with the idea that p or the proposition that p. The second view is that to believe that p is to be in a certain kind of mental state. The third is that to believe that p is to have a certain sort of disposition.
Right at the beginning of his investigation, Hacker marks the technical, reductionist perspective on belief as a misconception. This technical reductionism, which took form as the so-called AGM-theory in the paper by Alchourron, Gärdenfors and Makinson [28], we will discuss below. Hacker writes about it:
Before commencing analysis, one misconception should be mentioned and put aside. It is commonly suggested that to believe that p is a propositional attitude. That is patently misconceived, if it means that believing is an attitude towards a proposition. […] I shall argue that to believe that p is neither a feeling, nor a mental state, nor yet a disposition to do or feel anything.
Obviously, believing has several aspects. First, it is certainly a kind of mental activity. It seems that I need not tell anybody that I believe in order to be able to believe. Second, it is a language game, and a rich one indeed. It seems almost omnipresent. As a language game, it links “I believe that” with “I believe A” and “I believe in A”. We should not overlook, however, that these utterances are spoken towards someone else (even in inner speech); hence the whole wealth of processes and relations of interpersonal affairs has to be regarded: all those mutual ascriptions of roles, assertions, maintained and demonstrated expectations, displays of self-perception, attempts to induce a certain co-perception, and so on. We frequently cited Robert Brandom, who analysed that in great detail in his “Making it Explicit”.
Yet, can we really say that believing is just a mental activity? For one thing, we did not say above that believing is something like a “pure” mental activity. We clearly would reject such a claim. First, we clearly cannot set the mental as such into a transcendental status, as this would lead straight to a system like Hegel’s philosophy, with all its difficulties, untenable claims and disastrous consequences. Second, it is impossible to explicate “purity”, as this would deny the fact that models are impossible without concepts. So, is it possible that a non-conscious being or entity can believe? Not quite, I would like to propose. Such an entity will of course be able to build models, even quite advanced ones, though probably not about reflective subjects such as concepts or ideas. It could experience that it cannot get rid of uncertainty and its closely related companion, risk. Thus we can say that these models are not propositions “about” the world; they comprise uncertainty and allow dealing with uncertainty through actions in the world. Yet, the ability to deal with uncertainty is certainly not the same as believing. We would not need the language game at all. Saying “I believe that A” does not mean having a certain model with a particular predictive power available. As models are explications, expressing a belief or experiencing the compound mental category “believing” is just the demonstration that any explication is impossible for the person.
Note that we conceive of “belief” as completely free of values and also without any reference to mysticism. Indeed, the choreostemic space allows us to distinguish different aspects of the “compound experience” that we call “belief”, which otherwise are not even visible as separate aspects of it. As a language game we thus may specify it as the indication that the speaker assigns—or the listener is expected to assign—a considerable portion of the subject matter to that part of the choreostemic figure that points away from the _Model. It is immediately clear from the choreostemic space that mental activity without belief is not possible. There is always a significant “rest” that cannot be covered by any kind of explication. This is true for engineering and of course for any kind of social interaction, as soon as mutual expectations appear on the stage. By means of the choreostemic space we also can understand the significance of trust in any interaction with the external world. In communicative situations, this quickly may lead to a game of mutual deontic ascriptions, as Robert Brandom [15] has argued in his “Making it Explicit”.
Interestingly enough, belief (in its choreostemically founded version) is implied by any transition away from the _Model, for instance also in the case of the transition path that ultimately heads towards the _Concept. Even more surprising—at first sight—and particularly relevant is the “inflection dynamics” in the choreostemic space. The more one tries to explicate something, the larger the necessary imports (e.g. through orthoregulations) from the other _a, and hence the larger the propensity for an inflecting flip.28
As an example, take the historical development of theories in particle physics. There, people started with rather simple experimental observations, which then were assimilated by formal mathematical models. Those in turn led to new experiments, and so forth, until physics reached a level of sophistication where “observations” are based on several, if not many, layers of derived concepts. On the way, structural constants and heuristic side conditions are implied. Finally, the system of physical models turns into an architectonics, a branched compound of theory-models that sounds as trivial as it is conceptual. In the case of physics, it is the so-called grand unified theory. There are several important things here. First, due to the large amount of heuristic settings and orthoregulations, such concepts can’t be proved or disproved anymore, least of all by empirical observations. Second, on the achieved level of abstraction, the whole subject could be formulated in a completely different manner. Note that such a dynamic between experiment, model, theory29 and concept has never been described in a convincing manner before.30
Now that we have a differentiated picture of belief at our disposal, we can briefly visit the field of so-called belief revision. Belief revision has been widely adopted in artificial intelligence and machine learning as the theory for updating a data base. Quite unfortunately, the whole theory is, well, simply crap, if we were to apply it according to its intention. I think that we can draw from this mismatch some significance of the choreostemic space for a more appropriate treatment of beliefs in information technology.
The theory of belief revision was put forward by a branch of analytical philosophy in a paper by Alchourron, Gärdenfors and Makinson (1985) [29], often abbreviated as “AGM-theory”. Hansson [30] writes:
A striking feature of the framework employed there [monnoo: AGM] is its simplicity. In the AGM framework, belief states are represented by deductively closed sets of sentences, called belief sets. Operations of change take the form of either adding or removing a specified sentence.
Sets of beliefs are held by an agent, who establishes or maintains purely logical relations between the items of those beliefs. Hansson correctly observes that:
The selection mechanism used for contraction and revision encodes information about the belief state not represented by the belief set.
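To make concrete what is being talked about, and what we are going to criticize, here is a drastically simplified sketch of AGM-style operations; it treats sentences as opaque strings and ignores deductive closure, so it illustrates the shape of the framework, not its full formalism:

```python
# A drastically simplified sketch of AGM-style operations. Real AGM works
# on deductively closed belief sets and needs a selection mechanism for
# contraction, which, as Hansson notes, is exactly what the belief set
# itself does not represent.

def expand(beliefs: frozenset, sentence: str) -> frozenset:
    """Expansion: simply add the sentence."""
    return beliefs | {sentence}

def contract(beliefs: frozenset, sentence: str) -> frozenset:
    """Contraction: give up the sentence."""
    return beliefs - {sentence}

def negate(sentence: str) -> str:
    return sentence[1:] if sentence.startswith("~") else "~" + sentence

def revise(beliefs: frozenset, sentence: str) -> frozenset:
    """Revision via the Levi identity: contract the negation, then expand."""
    return expand(contract(beliefs, negate(sentence)), sentence)

beliefs = frozenset({"it_rains", "streets_are_wet"})
beliefs = revise(beliefs, "~it_rains")
print(sorted(beliefs))   # ['streets_are_wet', '~it_rains']
# Note what stays outside the framework: where the sentences come from at
# all, and how 'streets_are_wet' should be re-evaluated once 'it_rains'
# has been given up.
```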
Obviously, such “belief sets” have nothing to do with beliefs as we know them from the language game; beyond that, they are a misdone caricature of them. As with Pearl [23], the interesting stuff is left out: How to arrive at those logical sentences at all, notably by a non-symbolic path of derivation? (There are no symbols out there in the world.) By means of the choreostemic space we easily derive the answer: By an orthoregulated instantiation of a particular choreostemic performance in an unbounded (open) aspectional space that spans between transcendental entities. Since the AGM framework starts with or presupposes logic, it simply got stuck in the symbolistic fallacy, or illusion. Accordingly, Pollock & Gillies [30] demonstrate that “postulational approaches” such as the AGM-theory can’t work within a fully developed “standard” epistemology. The two are simply incompatible with each other.
Closely related to believing is explicating, the latter being just the inverse of the former, pointing in the “opposite direction”. Explicating is almost identical to describing a model. The language game of “explication” means to transform, to translate and to project choreostemic figures into lists of rules that could be followed, or in other words, into the sayable. Of course, this transformation and projection is neither analytic nor neutral. We must be aware of the fact that even a model can’t be explicated completely. Moreover, this rule-following itself implies the necessity of beliefs and trust, and it requires a common understanding about the usage or the influence of orthoregulations. In other words, without an embedding into a choreostemic figure, we can’t accomplish an explication.
Understanding, Explaining, Describing
Outside of the perspective of the language game, “understanding” can’t be understood. Understanding emerges as a result of relating the items of a population of interpretive acts. This population and the relations imposed on it are closely akin to Heidegger’s scaffold (“Gestell”). Mostly, understanding something is just extending an existent scaffold. About these relations we can’t speak clearly or explicitly any more, since these relations are constitutive parts of the understanding. Like all language games, this one too unfolds in social situations, which need not be syntemporal. Understanding is a confirming report about beliefs in, and expectations of, certain capabilities of one’s own.
Saying “I understand” may convey different meanings. More precisely, understanding may come in different shades that are placed between two configurations. Either it signals that one believes to be able to extend just one’s own scaffold, one’s own future “Gestelltheit”. Alternatively it is used to indicate the belief that the extension of the scaffold is shared between individuals, in such a way that one could reproduce the same effect as anyone else who understands the same thing. This effect could be merely instrumental or, more significantly, it could refer to the teaching of further pupils. In this case, two people understand something if they can teach another person to the same ends.
Besides the performative and social aspects of understanding there are of course the mental aspects of the concept of “understanding” something. These can be translated into choreostemic terms. Understanding is less a particular “figure” in the choreostemic space than a deliberate visiting of the outer regions of the figure and the intentional exploration of those outposts. We understand something only in case we are aware of the conditions of that something and of our personal involvements. These include cognitive aspects, but also the consequences of the performative parts of acts that contribute to an intensifying of the aspect of virtuality. A scientist who builds a strong model without considering his own and its conditionability does not understand anything. He would just practice a serious sort of dogma (see Quine on the dogmas of empiricism here!). Such a scientist’s modeling could be replaced by that of a machine.
A similar account could be given of the application of a grammar, irrespective of the abstractness of that grammar. Referring to a grammar without considering its conditionability could be performed by a mindless machine as well. It would indeed remain a machine: mindless, and forever determined. Such is most, if not all, of the computer software dealing with language today.
We again would like to emphasize that understanding does not exhaust itself in the ability to write down a model. Understanding means to relate the model to concepts, that is, to trace a possible path that would point towards the concept. A deep understanding refers to the ability to extend a figure towards the other transcendental aspects in a conscious manner. Hence, within idealism and (any sort of) representationalism, understanding is actually excluded. They mistake the transcendental for the empirical and vice versa, ending in a strict determinism and dogmatism.
Explaining, in turn, indicates the intention to make somebody else understand a certain subject. The infamous existential “Why?” does not make any sense. It is not just questionable why this language game should be performed at all, as the why of absolute existence can’t be answered at all. Actually, it seems to be quite different from that. As a matter of fact, we indeed play this game in a well comprehensible way and in many social situations. Conceiving “explanation” of nature as accounting for its existence (as Epperson does, see [31] p.357) presupposes that everything could be turned into the sayable. It would result in the conflation of logic and the factual world, something Epperson indeed proposes. Some pages later in his proposal about quantum physics he seems to loosen that strict tie when, referring to Whitehead, he links “understanding” to coherence and empirical adequacy ([31] p.361):
I offer this argument in the same speculative philosophical spirit in which Whitehead argued for the fitness of his metaphysical scheme to the task of understanding (though not “explaining”) nature—not by the “provability” of his first principles via deduction or demonstration, but by their evaluation against the metrics of coherence and empirical adequacy.
Yet, this presents us with an almost perfect phenomenological stance, separating objects from objects and subjects. Neither coherence nor empirical adequacy can be separated from concepts, models and the embedding Lebenswelt. It thus expresses the belief in “absolute” understanding and final reason. Such ideas are at least highly problematic, even and especially if we take into account the role Whitehead gives “value” as a cosmological a priori. It is quite clear that this attitude to understanding is sharply different from anything that is related to semiotics, the primacy of interpretation, the role of language or a relational philosophy, in short, from anything that resembles even remotely what we proposed about the understanding of understanding a few lines above.
The intention to make somebody else understand a certain subject necessarily implies a theory, where theory here is understood (as always in our essays) as a milieu for deriving or inventing models. The “explaining game” comprises the practice of providing a general perspective to the recipient such that she or he could become able to invent such a model, precisely because a “direct” implant of an idea into someone else is quite impossible. This milieu involves orthoregulation and a grammar (in the philosophical sense). The theory, and the grammar associated or embedded with it, does nothing else than provide support for finding a possibility for the invention or extension of a model. It is a matter of persistent exchange of models from a properly grown population of models that allows developing a common understanding about something. In the end we then may say “yes, I can follow you!”
Describing is often not distinguished (properly) from explaining. Yet, in our context of choreostemically embedded language games it is neither mysterious nor difficult to do so. We may conceive of describing just as explicating something into the sayable; the element of cross-individual alignment is not part of it, or at least present only in a much less explicit way. Hence, usually the respective declaration will not be made. The element of social embedding is much less present.
Describing pretends, more or less, that all three aspects accompanying the model aspect could be neglected, particularly however the aspects of mediality and virtuality. The mathematical proof can be taken as an extreme example of that. Yet, even there it is not possible, since at least a working system of symbols is needed, which in turn is rooted in a dynamics unfolding as a choreostemic figure, the mental aspect of Forms of Life. Basically, this impossibility of fixing a “position” in the choreostemic space is responsible for the so-called foundational crisis in mathematics. This crisis prevails even today in philosophy, where many people are naive enough to still search for absolute justification, or truth, or at least to regard such as a reasonable concept.
All this should not be understood as an attempt to deny description or describing as a useful category. Yet, we should be aware that the difference to explaining is just one of (choreostemic) form. More explicitly, said difference is an affair of culturally negotiated portions of the transcendental aspects that make up mental life.
I hope this sheds some light on Wittgenstein’s claim that philosophy should just describe, but not explain anything. The possibly perceived mysteriousness may vanish as well, if we remember his characterisation of grammar.
Both understanding and explaining are quite complicated, socially mediated processes; hence they unfold upon layers of milieus of mediality. Both not only relate to models and concepts that need to exist in advance, and thus to a particular dynamics between them; they also require a working system of symbols. Models and concepts relate to each other only as instances of _Models and _Concepts, that is, in a space as it is provided by the choreostemic space. Talking about understanding as a practice is not possible without it.
Referring to something means to point to the expectation that the referred entity could point to the issue at hand. Referring is not “pointing to” and hence does not consist of a single move. It is “getting pointed to”. Said expectation is based on at least one model. Hence, if we refer to something, we put our issue as well as ourselves into the context of a chain of signifiers. If we refer to somebody, or to a named entity, then this chain of interpretive relations transforms in one of two ways.
Either the named entity is used, that is, put into a functional context, or more precisely, assigned a sayable function. The functionalized entity does not (need to) interpret any more; all activity gets centralized, which could be used as the starting point for totalizing control. This applies to any entity, whether merely material, living, or social.
The second way in which referencing is affected by names concerns the reference to another person, or a group of persons. If it is not a functional relationship, e.g. taking the other as a “social tool”, what matters is less the expected chaining as a signifier by the other person. Persons cannot be interpreted as we interpret things or build signs from signals. Referring to a person means to accept the social game that comprises (i) mutual deontic assignments that develop into “roles”, including deontic credits and their balancing (as first explicated by Brandom [15]), (ii) the acceptance of the limit of the sayable, which results in a use of language that is more or less non-functional, always metaphorical and sometimes even poetic, as well as (iii) the declared persistence for repeated exchanges. The fact that we interpret the utterances of our partner within the orthoregulative milieu of a theory of mind (which builds up through these interpretations) means that we mediatize our partner at least partially.
The limit of the sayable is a direct consequence of the choreostemic constitution of performing thinking. The social is based on communication, which means “to make something common”; hence, we can regard “communication” as the driving, extending and public part of using sign systems. As a proposed language game, “functional communication” is nonsense, much like the utterance “soft stone”.
By means of the choreostemic space we can also see that any referencing is equal to a more or less extensive figure, as models, concepts, performance and mediality are involved.
At first sight, we might suspect that before any instantiation qua choreostemic performance we cannot know something positively for sure in a global manner, i.e. objectively, as it is often meant to be expressed by the substantive “knowledge”. Due to that performance we have to interpret before we can know positively and objectively. The result is that we can never know anything for sure in a global manner. This holds even for transcendental items, that is, what Kant dubbed “pure reason”. Nevertheless, the language game “knowledge” has a well-defined significance.
“Knowledge” is a reasonable category only with respect to performing, interpreting (performance in thought) and acting (organized performance). It is bound to a structured population of interpretive situations, to Peircean signs. We thus find a gradation of privacy vs. publicness with respect to knowledge. We just have to keep in mind that neither of these qualities could be thought of as being “pure”. Pure privacy is not possible, because there is nothing like a private language (meaning qua usage and shared reference). Pure publicness is not possible because of the necessity of a bodily rooted interpreting mechanism (associative structure). Things like a “public space” as a purely exterior or externalized thing do not exist. The relevant issue for our topic of a machine-based episteme is that functionalism always ends in a denial of the private language argument.
We can now easily see why knowledge cannot be conceived as a positively definable entity that could be stored or transferred as such. First, it is of course a language game. Second, and more important, “knowing {of, about, that}” always relates to instances of transcendental entities, and necessarily so. Third, even if we could agree on some specific way of instantiating the transcendental entities, it always invokes a particular figure unfolding in an aspectional space. This figure can’t be transferred, since this would mean that we could speak about it from outside of itself. Yet, that’s not possible, since it is in turn impossible to just pretend to follow a rule.
Given this impossibility we should dwell for a moment on the apparent gap it opens towards teaching. How can we teach somebody something if knowledge can’t be transferred? The answer is furnished by the equipment that is shared among the members of a community of speakers or co-inhabitants of the choreostemic space. We need this equipment for matching the orthoregulation of our rule-following. The parts, tools and devices of this equipment are made from palpable traditions, cultural rhythms, institutions, individual and legal preferences regarding the weighting of individuals versus the various societal clusters, the large story of the respective culture and the “templates” provided by it, the consciously accessible time horizon, both towards the past and the future31, and so on. Common sense wrongly labels the resulting “setup” a “body of values”. More appropriately, we could call it grammatical dynamics. Teaching, then, is in some way more about the reconstruction of the equipment than about the agreement on facts, albeit the arrangement of the facts may tell us a lot about the grammar.
Saying ‘I know’ means that one wants to indicate that she or he is able to perform choreostemically with regard to the subject at hand. In other words, it is a label for a pointer (say, reference) to a particular image of thought and its use. This includes the capability of teaching and explaining, which are probably the only ways to check whether somebody really knows. However, we cannot claim that we are aligned to a particular choreostemic dynamics. We can only believe that our choreostemic moves are part of a supposed attractor in the choreostemic space. From that it also follows that knowledge is not just about facts, even if we conceived of facts as a compound of fixed relations and fixed things.
The traditional concern of epistemology, as the discipline that asks about the conditions of knowing and knowledge, must be regarded as a misplaced problem. Usually, epistemology does not refer to virtuality or mediality. Furthermore, in epistemology knowledge is often sharply separated from belief, yet for the wrong reasons. The formula of “knowledge as justified belief” puts them both onto the same stage. It then would have to be clarified what “justified” should mean, which in turn is not possible. Explicating “justifying” would need reference to concepts and models, or rather the confinement to a particular one: logic. Yet, knowledge and belief are completely different with regard to their role in choreostemic dynamics. While belief is an indispensable element of any choreostemic figure, knowledge is the capability to behave choreostemically.
8.2. Anthropological Mirrors
Philosophy suffers even more from a surprising strangeness. As Marc Rölli recently mentioned [34] in his large work about the relations between anthropology and philosophy (KAV),
For more than 200 years philosophy has been anthropologically determined. Yet, philosophy didn’t investigate the relevance of this fact to any significant extent. (KAV15)32
Rölli agrees with Nietzsche regarding his critique of idealism.
“Nietzsche’s critique of idealism, which is available in many nuances, always targeting the philosophical self-misunderstanding of the pure reason or pure concepts, is also directed against a certain conception of nature.” (KAV439)33.
…where this rejected conception of nature is purposefulness. In nature there is no forward-directed purpose, no plan. Such ideas are due either to religious romanticism or to a serious misunderstanding of the Darwinian theory of natural evolution. In biological nature, there is only a blind tendency towards the preference of intensified capability for generalization34. Since Kant, and including him, and in some way already since Descartes, philosophy has been influenced by scientific, technological or anthropological conceptions about nature in general, or the nature of the human mind.
This is problematic for (at least) three reasons. First, it constitutes a misunderstanding of the role of philosophy to rely on scientific insights. Of course, this perspective is becoming (again) visible only today, notably after the Linguistic Turn, as far as it regards non-analytical philosophy. Secondly, however, it is clear that the said influence implies, if it remains unreflected, a normative tie to empiric observations. This clearly represents a methodological shortfall. Thirdly, even if one accepted a certain link between anthropology and philosophy, the foundations taken from a “philosophy of nature”35 are so simplistic that they could hardly be regarded as viable.
This almost primitive image of purposeful nature finally flowed into the functionalism of our days, whether in philosophy (Habermas) or in so-called neuro-philosophy, by which many feel inclined to establish a variety of determinism that is even proto-Hegelian.
In the same passage that invokes Nietzsche’s critique, Rölli cites Friedrich Albert Lange [39]:
“The topic that we actually refer to can be denoted explicitly. It is quasi the apple in the logical lapse of German philosophy subsequent to Kant: the relation between subject and object within knowledge.” (KAV443)36
Lange deliberately attests Kant—in contrast to the philosophers of German idealism—to be clear about that relationship. For Kant, subject and object constitute themselves only as an amalgam; the pure whatsoever has been claimed only by Hegel, Schelling and their epigones and heirs. The intention behind introducing pureness, according to Lange, is to support absolute reason or absolute understanding, in other words, eternally justified reason and the undeniability of certain concepts. Note that German Idealism was born before the foundational crisis in mathematics, which started with Russell’s remark on Frege’s “Begriffsschrift” and his universal quantifier, found its continuation in the Hilbert programme, and finally has been inscribed into the roots of mathematics by Gödel. Philosophies of “pureness” are not items of the past, though. Think about materialism, or about Agamben’s “aesthetics of pure means”, whose metaphysical scaffold Benjamin Morgan [41] correctly identified in Agamben’s recent work.
Marc Rölli dedicates all of the 512 pages to the endeavor of destroying the extra-philosophical foundations of idealism. As the proposed alternative we find pragmatism, that is, a conceptual foundation of philosophy that is based on language and form of life (Lebenswelt in the Wittgensteinian sense). He concludes his work accordingly:
After all it may have become clearer that this pragmatism is not a simple, naive pragmatism, but rather a pragmatism of difference37 that has been constructed with great subtlety. (KAV512)38
Rölli’s main target is German Idealism. Yet, Hegelian philosophy is undeniably abundant, and not only on the European continent, where the Frankfurt School from Adorno to Habermas and even K.-O. Apel, followed by the ill-fated ideas of Luhmann, are infected by Hegel as well. Significant traces of it can be found in Germany’s society also in contemporary legal positivism and the oligarchy of political parties.
During the last 20 years or so, Hegelian positions have spread considerably also in Anglo-American philosophy and political theory. Think about Hardt and Negri, or even the recent works of Brian Massumi. Hegelian philosophy, however, can’t be taken in portions. It is totalitarian through and through, because its main postulates, such as “absolute reason”, are totalizing by themselves. Hegelian philosophy is a relic, and a quite dangerous one, regardless of whether you interpret it in a leftist (Lenin) or in a rightist (Carl Schmitt) manner. With its built-in claim to absoluteness come the explicit denial of context-specificity, of the necessary relativity of interpretation, of the openness of future evolution, and of the freedom inscribed deeply even into the basic operation of comparison; all of these denials turn into transcendental aprioris. The same holds for the claim that things, facts, or even norms can be justified absolutely. No further comment should be necessary about that.
The choreostemic space itself cannot result in a totalising or even totalitarian attitude. We met this point already earlier when we discussed the topological structure of the space and its a-locational “substance” (Reason and Sufficiency). As Deleuze emphasized, there is a significant difference between entirety and completeness, which just mirrors the difference between the virtual and the actual. We would like to add that the choreostemic space also disproves the possibility of universality for any kind of conception. In some way, yet implicitly, the choreostemic space defends humanity against materiality and any related attitude. Even if we were determined completely on the material level, which we are surely not39, the choreostemic space proves the indeterminateness and openness of our mental life.
You may already have got the feeling that we are going to slip into political theory. Indeed, the choreostemic space not only forms a space of indeterminateness and applicable pre-specificity, it also provides a kind of space of “Swiss neutrality”. Its capability to allow for a comparison of collective mental setups, without resorting to physicalist concepts like swarms or mysticist concepts like “collective intelligence”, provides a fruitful ground for any construction of transitions between choreostemic attractors.
Despite the fact that the choreostemic space concerns any kind of mentality, whether seen as hosted more by identifiable individuals or by collectives, the concept should not be taken as an actual philosophy of mind (“Philosophie des Geistes”). It transcends it, as it does any particular philosophical stance. It would be wrong as well to confine it to an anthropology or an anthropological architecture of philosophy, as is the case not only in Hegel (Rölli, KAV137). In some way, it presents a generative zone for a-human philosophies, without falling prey to the necessity of defining what human or a-human should mean. For sure, here we do not refer to transhumanism as it is known today, which just follows the traditional anthropological imperative of growth (“Steigerungslogik”), as Rölli correctly remarks (KAV459).
A-human simply means that, as a conception, it is neither dependent on nor confined to the human Lebenswelt. (We would again like to stress that it represents neither a positively sayable universalism nor even a kind of universal procedural principle, and also that this “a-” should not be understood as “anti” or “opposed”, but simply as “being free of”.) It is this position that is mandatory for drawing comparisons40 and, subsequently, conclusions (in the form of introduced irreversibilities) about entities that belong to strikingly different Lebenswelten (forms of life). Any particular philosophical position would immediately be guilty of applying human scales to non-human entities. That was already a central cornerstone of Nietzsche’s critique, not only of German philosophy of the 19th century, but also of the natural sciences.
8.3. Simplicissimi
Rölli criticizes the uncritical adoption by philosophy of items taken from the scientific world view of the 19th century. Today, philosophy is still not secured against simplistic conceptions, uncritically assimilated from certain scientific styles, despite the fact that nowadays we could know about the (non-analytic) Linguistic Turn, or about the dogmatics in empiricism. What I mean here comprises two conceptual ideas: the reduction of living or social systems to states, and the notion of exception, or that of normality respectively.
There are myriads of references in the philosophy of mind invoking so-called mental states. Yet, the state as a concept can be found not only in the philosophy of mind, but also in political theory, namely in Giorgio Agamben’s recent work, which also builds heavily on the notion of the “state of exception”. The concept of a mental state is utter nonsense, though, and mainly so for three very different reasons. The first one can be derived from the theory of complex systems, the second one from language philosophy, and the third one from the choreostemic space.
In complex systems, the notion of a state is empty. What we can observe subsequent to the application of some empiric modeling is that complex systems exhibit meta-stability. It looks as if they were stable and trivial. Yet, what we could have learned mainly from the biological sciences, but also from their formal consideration as complex systems, is that they aren’t trivial. There is no simple rule that could describe the flow of things in a particular period of time. The reason is precisely that they are creative. They build patterns, hence they build a further “phenomenal” level, where the various levels of integration can’t be reduced to one another. They exhibit points of bifurcation, which can be determined only in hindsight. Hence, from the empirical perspective we can only estimate the probability of stability. This, however, is clearly too weak to support the claim of “states”.
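The point about meta-stability and hindsight-only bifurcations can be made palpable with a toy system. The following is merely an illustrative sketch (Python; the logistic map and all parameter values are our own choices, not part of the argument above): the rule is fully known and deterministic, yet what one is willing to call a “state” depends entirely on the model applied to the observed trajectory.

```python
# Illustrative sketch only: the logistic map, a fully known rule whose
# "states" are nevertheless a matter of the model applied in hindsight.

def logistic(x, r):
    # one step of the fully specified rule
    return r * x * (1.0 - x)

def late_values(r, x0=0.2, burn_in=500, n=50):
    """Discard the transient, then collect the late trajectory."""
    x = x0
    for _ in range(burn_in):
        x = logistic(x, r)
    values = []
    for _ in range(n):
        x = logistic(x, r)
        values.append(x)
    return values

# r = 2.9: the tail settles to one value (looks like a 'state');
# r = 3.2: it alternates between two values (two 'states'? one cycle?);
# r = 3.57: near the onset of chaos, no finite list of 'states' remains.
for r in (2.9, 3.2, 3.57):
    distinct = sorted(set(round(v, 3) for v in late_values(r)))
    print(r, distinct[:6], '...' if len(distinct) > 6 else '')
```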
Actually, from the perspective of language-oriented philosophy, the notion of a state is even empty for any dynamical system that is subject to open evolution (but probably even for trivial dynamic systems). A real system does not build “states”. There are only flows and memories. “State” is a concept, in particular an idealistic—or at least an idealizing—concept, that is present only in the interpreting entity. The fact that one first has to apply a model before it is possible to assign states is deliberately suppressed whenever it is invoked by an argument that relates to philosophy or to any (other) kind of normativity. Therefore, the concept of “state” can’t be applied analytically, or as a condition in a linearly arranged argument. Saying this, we do not claim that the concept of state is meaningless at large. In natural science, especially throughout the process of hypothesis building, the notion of state can be helpful (sometimes, at least).
Yet, if one used it in philosophy in a recurrent manner, one would quickly arrive at the choreostemic space (or something very similar), where states are neither necessary nor even possible. Despite the fact that a “state” is only assigned, i.e. is a concept, philosophers of mind41 and philosophers of political theory alike (such as Agamben [37], among other materialists) use it as a phenomenal reference. It is indeed somewhat astonishing to observe this relapse into naive realism within the community of otherwise trained philosophers. One of the reasons for this may well lie in the missing training in mathematics.42
The third argument against the reasonability of the notion of “state” in philosophy can be derived from the choreostemic space. A cultural body comprises individual mentality as well as a collective mentality based on externalized symbolic systems like language, to make a long story short. Both together provide the possibility for meaning. It is absolutely impossible to assign a “state” to a cultural body without losing the subject of culture itself. It would be much like a grammatical mistake. That “subject” is nothing else than a figurable trace in the choreostemic space. If one made such an assignment instead, any finding would be relevant only within the reduced view. Hence, it would be completely irrelevant, as it could not support the self-imposed pragmatics. Continuing to argue about such a finding then establishes a petitio principii: one would find only what one originally assumed. The whole argument would be empty and irrelevant.
Similar arguments can be put forward regarding the notion of the exceptional, if it is applied in contexts that are governed by concepts and their interpretation, as opposed to trivial causal relationships. Yet, Giorgio Agamben indeed started to build a political theory around the notion of exception [37], which—at first sight strangely enough—has already triggered an aesthetics of emergency. Elena Bellina [38] cites Agamben:
The state of exception “is neither external nor internal to the juridical order, and the problem of defining it concerns a threshold, or a zone of indifference, where inside and outside do not exclude each other but rather blur with each other.” In this sense, the state of exception is both a structured or rule-governed and an anomic phenomenon: “The state of exception separates the norm from its application in order to make its application possible. It introduces a zone of anomie into the law in order to make the effective regulation of the real possible.”
It results in nothing else than disastrous consequences if the notion of the exception is applied to areas where normativity is relevant, e.g. in political theory. Throughout history there are many, many terrible examples of that. We may even call it fully legitimized “negativity engineering”, as it establishes, completely unnecessarily, the opposition of the normal and the deviant as an apriori. The notion of the exception presumes total control as an apriori. As such, it is opposed to the notion of openness, hence it also denies the primacy of interpretation. Machines that degenerate and that would produce disasters on any malfunctioning can’t be considered as being built smartly. In a setup that embraces indeterminateness, there is not even the possibility for a disastrous fault. Instead, deviances are defined only with respect to the expectable, not against an apriori set, hence obscure, normality. If the deviance is taken as the usual (not the normal, though!), fault-tolerance and even self-healing could be built in as a core property, not as “exception handling”.
Exception is the negative category to the normal. It requires models to define normality, models to quantify the deviation, and finally also arbitrary thresholds to label it; a minimal sketch of these three steps follows below. All three steps can be applied in linear domains only, where the whole is dependent on just very few parameters. For social mega-systems such as societies, applying the concept of the exception is nothing else than a methodological categorical illusion.
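How much convention is involved can be seen in a minimal sketch (Python; the Gaussian model of normality, the z-score as deviation measure, and the sample values are all our own assumptions, not taken from the text above). Each of the three steps is a choice made by the modeler, not a property of the observed system; merely shifting the arbitrary threshold makes the “exception” appear or disappear.

```python
# Minimal sketch of the three steps named above; everything here is an
# assumed convention, not a property of the observed system.
import statistics

def exceptions(samples, threshold=2.0):
    """Label 'exceptions' relative to a Gaussian model of normality."""
    mu = statistics.mean(samples)       # step 1: a model of normality
    sigma = statistics.stdev(samples)   #         (mean and spread)
    # step 2: quantify each deviation (z-score); step 3: arbitrary cutoff
    return [x for x in samples if abs(x - mu) / sigma > threshold]

data = [9.8, 10.1, 10.0, 9.9, 10.2, 42.0]
print(exceptions(data))                 # -> [42.0]
print(exceptions(data, threshold=3.0))  # -> []  same data, no 'exception'
```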
9. Critique of Paradoxically Conditioned Reason
Nothing could be more different from that than pragmatism, for which the choreostemic space can serve as the ultimate theory. Pragmatism always suffered from—or at least has been vulnerable to—the reproach of relativism, because within pragmatism it is impossible to argue against it. With the choreostemic space we have constructed a self-sufficient, self-containing and necessary model that not only supports pragmatism, but also destroys any possibility of a universal normative position or normativity. Probably even more significant, it also abolishes relativism through the implied concept of the concrete choreostemic figure, which can be taken as the differential of the institution or of the tradition43. Choreostemic figures are quite stable since they relate to mentality qua population, which means that they are formed as a population of mental acts or as mental acts of the members of a population. Even for individuals it is quite hard to change the attractor inhabited in the choreostemic space, to change into another attractor or even to build up a new one.
In this section we will examine the structure of the way we can use the choreostemic space. Naively spoken, we could ask, for instance: how can we derive a guideline to improve actions? How can we use it to analyse a philosophical attitude or a political writing? Where are the limits of the choreostemic space?
The structure behind such questions concerns a choice on a quite fundamental level. The issue is whether to argue strictly in positive terms, to allow negative terms, or even to define everything starting from negative terms only. In fact, there are quite a few different possibilities for arranging any melange of positivity or negativity. For instance, one could ontologically insist first on contingency as a positivity, upon which constraints would then act as a negativity. Such traces we will not follow here. We regard them either as not focused enough or, for most of them, as being infected by realist ontology.
In more practical terms, this issue of positivity and negativity regards how to deal with justifications and conditions. Deleuze argues for strict positivity; in that he follows Spinoza and Nietzsche. Common sense, in contrast, is given only as far as it is defined against the non-common. In this respect, any of the existential philosophical attitudes, whether Christian religion, phenomenology or existentialism, are quite similar to each other. Even Levinas’ Other is infected by it.
Admittedly, at first sight it seems quite difficult, if not impossible, to arrive at an appropriate valuation of other persons, the stranger, the strange, in short, the Other, but also the alienated. Or likewise, to derive or develop a stance to the world that does not start from existence. Isn’t existence the only thing we can be sure about? And isn’t the external, the experience, the only stable positivity we can think about? Here, we shout a loud No! Nevertheless we definitely do not deny the external either.
We just mentioned that the issue of justification is invoked by our interests here. This gives rise to the question about the relation of the choreostemic space to epistemology. We will return to this in the second half of this section.
Positivity. Negativity.
Obviously, the problem of the positive is not the positive itself, but how we are going to approach it. If we set it as primary, we first run into problems of justification, then into ethical problems. Setting the external, existence, or the factual positive as primary, we neglect the primacy of interpretation. Hence, we can’t think about the positive as an instance. We have to think of it as a Differential.
The Differential is defined as an entirety, yet not instantiated. Its factuality is potential; hence its formal being neither exhausts nor limits its factuality, or positivity. Its givenness demands action, that is, a decision (which is sayable regarding its immediacy) bundled with a performance (which is open and just demonstrable as a matter of fact).
The concept of the choreosteme closely follows Deleuze’s idea of the Differential: it is built into the possibility of expressibility that spans as the space between the _Directions as they are indicated by the transcendental aspects _A. The choreostemic space does not constitute a positively definable stance, since the space for it, the choreostemic space, is not made from elements that could be defined apriori to any moment in time. Nevertheless it is well-defined. To provide an example which requires a similar approach, we may refer to the space of patterns as they are potentially generated by Turing-systems; see the sketch below. The mechanics of Turing-patterns, their mechanism, is well-defined too; it is given in its entirety, but the space of the patterns can’t be defined positively. Without deep interpretation there is nothing like a Turing-pattern. Maybe that’s one of the reasons why the hard sciences still have difficulties in dealing adequately with complexity.
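For illustration, here is a minimal sketch of such a Turing-system (Python; a Gray-Scott reaction-diffusion model with common textbook parameters; the particular equations and rates are our assumptions, not part of the argument). The mechanism is given in its entirety, yet whether the resulting field shows “spots”, “stripes” or “mazes”, or counts as a pattern at all, is a matter of interpretation of the numbers it produces.

```python
# Illustrative sketch only: a Gray-Scott reaction-diffusion system.
# The mechanism is fully specified; the 'patterns' are not.
import numpy as np

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # a small perturbed seed; the homogeneous solution would persist without it
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1.0 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return V

V = gray_scott()
# an inhomogeneous field: raw material that only interpretation
# turns into 'spots' or 'stripes'
print(V.min(), V.mean(), V.max())
```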
Besides the formal description of structure and mechanism of our space there is nothing left about which one could speak or think any further. We can only proceed by practicing it. This mechanism establishes a paradoxicality insofar as it does not contain determinable locations. This indeterminateness is even much stronger than the principle of uncertainty known from quantum physics, which so far is not constructed in a self-referential manner (at least if we follow the received views). Without any determinate location, there seems to be no determinable figure either, at least none of which we could say that we could grasp it “directly”, or intuitively. Yet, figures may indeed appear in the choreostemic space, though only by applying orthoregulative scaffolds, such as traditions, institutions, or communities that form cultural fields of proposals/propositions (“Aussagefeld”), as Foucault named it [40].
The choreostemic space is not a negativity, though. It does not impose apriori determinable factual limits on a real situation, whether internal or external. It does not even provide the possibility for an opposite. Due to its self-referentiality it can be instantiated into positivity OR negativity, depending on the “vector”—actually, it is more a moving cloud of probabilities—one currently belongs to or that one is currently establishing by one’s own performances.
It is the necessity of choice itself, appearing in the course of instantiation of the twofold Differential, that introduces the positive and the negative. In turn, whenever we meet an opposite we can conclude that there has been a preceding choice within an instantiation. Think about de Saussure’s structuralist theory of language, which is full of opposites. Deleuze argues (DR205) that the starting point of opposites betrays language:
In other words, are we not on the lesser side of language rather than the side of the one who speaks and assigns meaning? Have we not already betrayed the nature of the play of language – in other words, the sense of that combinatory, of those imperatives or linguistic throws of the dice which, like Artaud’s cries, can be understood only by the one who speaks in the transcendent exercise of language? In short, the translation of difference into opposition seems to us to concern not a simple question of terminology or convention, but rather the essence of language and the linguistic Idea.
In more traditional terms one could say it is dependent on the “perspective”. Yet, the concept of “perspective” is fallacious here, since it assumes a determinable standpoint. By means of the choreostemic space, we may replace the notion of perspective by the choreostemic figure, which reflects both the underlying dynamics and the problematic field much more adequately. In contrast to a “perspective”, or even a set of such, a choreostemic figure spans across time. Another difference is that a perspective needs to be taken, which does not allow for continuity, while a choreostemic figure evolves continually. The possibility for negativity is determined along the instantiation from choreosteme to thought, while the positivity is built into the choreostemic space as a potential. (Negative potentials are not possible.)
Thus, the choreostemic space is immune to any attempt—should we say poison pill?—to apply a dialectic of the negative, whether we consider single, double, or, absurdly enough, multiply repeated ones. Think about Hegel’s negativity, Marx’s rejection and proposal of a double negativity, or the dropback by Marcuse, all of which must be counted simply as stupidity. Negativity as the main structural element of thinking did not vanish, though, as we can see in the global movement of anti-capitalism or the global movement of anti-globalization. They all got—or still get—victimized by the failure to leave behind the duality of concepts and to turn them into a frame of quantitability. A recent example of that ominous fault is given by the work of Giorgio Agamben; Morgan writes:
Given that suspending law only increases its violent activity, Agamben proposes that ‘deactivating’ law, rather than erasing it, is the only way to undermine its unleashed force. (p.60)
The first question, of course, is why the heck Agamben thinks that law, that is: any lawfulness, must be abolished. Such a claim includes the denial of any organization and any institution, above all as practical structures, as immaterial infrastructures and grounding for any kind of negotiation. As Rölli noted in accordance with Nietzsche, there is quite an unholy alliance between romanticism and modernism. Agamben, completely incapable of becoming aware of the virtual and of the differential alike, and thus completely stuck in a luxuriating system of “anti” attitudes, finds himself faced with quite a difficulty. In his mono-(zero-)dimensional modernist conception of the world he claims:
“What is found after the law is not a more proper and original use value that precedes law, but a new use that is born only after it. And use, which has been contaminated by law, must also be freed from its value. This liberation is the task of study, or of play.”
Is it really reasonable to demand a world where uses, i.e. actions, are not “contaminated” by law? Morgan continues:
In proposing this playful relation Agamben makes the move that Benjamin avoids: explicitly describing what would remain after the violent destruction of normativity itself. ‘Play’ names the unknowable end of ‘divine violence’.
Obviously, Agamben never realized any paradox concerning rule-following. Instead, he runs amok against his own prejudices. “Divine violence” is the violence of ignorance. Yet, abolishing knowledge does not help either, nor is it an admirable goal in itself. As Derrida (another master of negativity) before him, in the end he demands a stop to interpretation, any of it and completely. Agamben provides us nothing else than just another modernist flavour of a philosophy of negativity that results in nihilistic in-humanism (quite contrary to Nietzsche, by the way). It is somewhat terrifying that Agamben currently receives no small amount of attention.
In the last statement we are going to cite from Morgan, we can see in which eminent way Agamben is a thinker of the early 19th century, incapable of contributing any reasonable suggestion to current political theory:
But it is not only the negative structure of the argument but also the kind of negativity that is continuous between Agamben’s analyses of aesthetic and legal judgement. In other words, ‘normality without a norm’, which paradoxically articulates the subtraction of normativity from the normal, is simply another way of saying ‘law without force or application’.
This Kantian formulation is fully packed with uncritical aprioris, such as normality or the normal, which marks Agamben as an epigonic utterer of common sense. As this ancient form of idealism demonstrates, Agamben obviously never heard anything of the linguistic turn either. The unfortunate issue with Agamben’s writing is that it is considered both influential and pace-setting.
So, should we reject negativity and turn to positivity? Rejecting negativity turns problematic only if it is taken as an attitude that stretches out from the principle down to the activity. Notably, the same is true for positivity. We need not get rid of it, which would only send us into the abyss of totalised mysticism. Instead, we have to transcend both into the Differential that “precedes” them. While the former could be reframed into the conditionability of processes (but not into constraints!), the latter finds its non-representational roots in the potential and the virtual. If the positive is taken as a totalizing metaphysics, we soon end in overdone specialization, uncritical neo-liberalism or even dictatorship, or in idealism as an ideology. The turn to a metaphysics of (representational) positivity is incurably caught in the necessity of justification, which—unfortunately enough for positivists—can’t be grounded within a positive metaphysics. To justify, that is, to give “good reasons”, is a contradictio in adiecto if it is understood in its logical or idealistic form.
Both negativity and positivity (in their representational instances) could work only if there is a preceding and more or less concrete subject, which of course cannot be presupposed when we are talking about “first reasons” or “justification”. This does not only apply to political theory or practice; it even holds for logic as a positively given structure. Abstractly, we can rewrite the concreteness into countability. Turning the whole thing around, we see that as long as something is countable we will be confined by negativity and positivity on the representational level. Herein lies the limitation of the Universal Turing Machine. Herein also lies the inherent limitation of any materialism, whether in its profane or its theistic form. By means of the choreostemic space we can see various ways out of this confined space; some of them are restated compactly below. We may, for instance, remove the countability from numbers by mediatizing it into probabilities. Alternatively, we may introduce a concept like infinity to indicate the conceptualness of numbers and countability. It is somewhat interesting that it is the concept of the infinite that challenges the empiric character of numbers. Or we could deny representationalism in numbers while trying to keep countability, which creates the strange category of infinitesimals. Or we create multi-dimensional number spaces like that of the imaginary numbers. There are, of course, many, many ways to transcend the countability of numbers, which we can’t even list here. Yet, it is of utmost importance to understand that the infinite, like any other instance of departure from countability, is not a number any more. It is not countable either in the way Cantor proposed, that is, by thinking of a smooth space of countability that stretches between empiric numbers and the infinite. We may count just the symbols, but the reference has inevitably changed. The empirics targets the number of the symbols, not their content, which has been defined as “incountability”. Only by this misunderstanding could one be struck by the illusion that there is something like the countability of the infinite. In some ways, even the real numbers do not refer to the language game of countability, and irrational numbers all the more don’t. It is much more appropriate to conceive of them as potential numbers; it may well be that precisely this is the major reason for the success of mathematics.
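For illustration only, the departures from countability just mentioned can be restated in standard notation (the symbols are ours, not the essay’s): the imaginary unit opens a second dimension of number, the nilpotent infinitesimal of the dual numbers keeps a “number” that is not countable as a magnitude, and the infinite enters only as a concept governing limits, not as a number among numbers.

```latex
% Illustrative restatement in standard notation (our choice of symbols).
\[
i^2 = -1
\qquad
\varepsilon \neq 0,\ \varepsilon^2 = 0
\qquad
\infty \notin \mathbb{N}, \quad \lim_{n \to \infty} \tfrac{1}{n} = 0
\]
```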
The choreostemic space is the condition for separating the positive and the negative. It is structure and tool, principle and measure. Its topology implies the necessity for instantiation and renders the representationalist fallacy impossible; nevertheless, it allows us to map mental attitudes and cultural habits for comparative purposes. Yet, this mapping can’t be used for modeling or anticipation. In some way it is the basis for subjectivity as a pre-specific property, that is, for a _Subjectivity, of course without objectivity. Therefore, the choreostemic space also allows us to overcome the naïve and unholy separation of subjects and objects, without denying the practical dimension of this separation. Of course, it does so by rejecting even the tiniest trace of idealism, or apriorisms respectively.
The choreostemic space does not separate apriori the individual and the collective forms of mentality. In describing mentality it is not limited to the sayable, hence it can’t be attacked or even swallowed by positivism. Since it provides the means to map those habitual _Mental figures, people could talk about transitions between different attractors, which we could call “choreostemic galaxies”. The critical issue of values, those typical representatives of uncritical aprioris, is completely turned into a practical concern. Obviously, we can talk about “form” regarding politics without the need to invoke aesthetics. As Benjamin Morgan recently demonstrated (in the already cited [41]), aesthetics in politics necessarily refers to idealism.
Rejecting representational positivity, that is, any positivity that we could speak of in a formal manner, is equivalent to the rejection of first reason as an aprioric instance. As we already proposed for representational positivity, the claim of a first reason as a point of departure that is never revisited again likewise results in a motionless endpoint, somewhere in the triangle built from materialism, idealism and realism. Attempts to soften this outcome by proposing a playful, or hypothetical, if not pragmatic, “fixation of first principles” are not convincing, mainly because this does not allow for any coherence between games, which results in a strong relativity of principles. We just could not talk about the relationships between those “firstness games”. In other words, we would not gain anything. An example of such a move is provided by Epperson [42]. Though he refers to the Aristotelian potential, he sticks with representational first principles, in his case logic in the form of the principle of the excluded middle and the principle of non-contradiction. Epperson does not become aware of the problems regarding the use of symbols in doing this. Wittgenstein once criticized the very same point in Russell and Whitehead’s Principia. Additionally, representational first principles are always transporters of ontological claims. As soon as we recognize that the world is NOT made from objects, but of relations organized, selected and projected by each individual through interpretation, such first principles face severe difficulties. Only naive realism allows for a frictionless use of first principles. Yet, for a price that is definitely too high.
We think that the way we dissolved the problem of first reason has several advantages compared to Deleuze’s proposal of the absolute plane of immanence. First, we do not need the notion of absoluteness, which appears at several instances in Deleuze’s main works “What is Philosophy?” [35] (WIP), “Empiricism and Subjectivity” [43], and his “Pure Immanence” [44]. The second problem with the plane of immanence concerns the relation between immanence and transcendence. Deleuze refers to two different kinds of transcendence. While in WIP he denounces transcendence as inappropriate due to its heading towards identity, the whole concept of transcendental empiricism is built on the Kantian invention. This double standard can’t be resolved. Transcendence should not be described by its target. Third, Deleuze’s distinction between the absolute plane of immanence and the “personal” one, instantiated by each new philosophical work, leaves a major problem: Deleuze leaves it completely opaque how to relate the two kinds of immanence to each other. Additionally, there is a potentially infinite number of “immanences”, implying a classification, a differential and an abstract kind of immanence, all of which is highly corrosive for the idea of immanence itself—at least as long as one does not conceive of immanence as an entity that could be naturalized. This way, Deleuze splits the problem of grounding into two parts: (1) a pure, hence “transcendent” immanence, and (2) the gap between absolute and personal immanence. While the first part could be accepted, the second one is left completely untouched by Deleuze. The problem of grounding has just been moved into a layer cake. Presumably, these problems are caused by the fact that Deleuze just considers concepts, or _Concepts, if we’d like to consider the transcendental version as well. Several of those imply the plane of immanence, which can’t be described, which has no structure, and which just is implied by the factuality of concepts. Our choreostemic space moves this indeterminacy and openness into a “form” aspect in a non-representational, non-expressive space with the topology of a double-differential. But more important is that we not only have a topology at our disposal which allows us to speak about it without imposing any limitation; we also use three other foundational and irreducible elements to think that space, the choreostemic space. The CS thus also brings immanence and transcendence into one and the same structure.
In this section we have discussed a change of perspective towards negativity and positivity. This change became accessible through the differential structure of the choreostemic space. The problematic field represented by them, and all the respective pseudo-solutions, has been dissolved. This abandonment we achieved through the “Lagrangean principle”: we replaced the constants—positivity and negativity respectively—by a procedure—instantiation of the Differential—plus a different constant. Yet, this constant is itself not a finite replacement, i.e. not a “constant” as an invariance. The “constant” is only a relative one: the orthoregulation, comprising habits, traditions and institutions.
Reason—or, as we would like to propose for its less anthropological character and better scalability, mentality—has been reconstructed as a kind of omnipresent reflection on the conditionability of proceedings in the choreostemic space. The conditionability can’t be determined in advance of the performed mental proceedings (acts), which to many could appear somewhat paradoxical. Yet, it is not. The situation is quite similar to Wittgenstein’s transcendental logic, which also gets instantiated just by doing something, while the possibility for performance precedes that of logic.
Finally, there is of course the question whether there is any condition that we impose onto the choreosteme itself, a condition that would not be resolved by its self-referentiality. Well, there is indeed one: the only unjustified apriori of the choreostemic space seems to be the primacy of interpretation (POI). This apriori, however, is only a weak one, and above all a practicable one, or one that derives from the openness of the world. Ultimately, the POI in turn is a direct consequence of the time-being. Any other aspect of interpretation is indeed absorbed by the choreostemic space and its self-referentiality, hence requiring no further external axioms or the like. In other words, the starting point of the choreostemic space, or of the philosophical attitude of the choreosteme, is openness, the insight that the world is far too generative to comprehend all of it.
The fact that it is almost without any apriori renders the choreostemic space suitable for those practical purposes where openness and its sibling, ignorance, call for dedicated activity, e.g. in all questions of cross-disciplinarity or trans-culturality. As far as different persons establish different forms of life, the choreostemic space is even highly relevant for any aspect of cross-personality. This in turn gives rise to a completely new approach to ethics, which we can’t pursue here, though.
Mentality without Knowledge
Two of the transcendental aspects of the choreostemic space are _Model and _Concept. The concepts of model and concept, that is, instantiations of our aspects, are key terms in the philosophy of science and epistemology. Furthermore, we proposed that our approach brings with it a new image of thought. We also said that mental activities inscribe figures or attractors into that space. Since we are additionally interested in the issue of justifications—we are trying to get rid of them—the question of the relation between the choreostemic space and epistemology arises.
The traditional primary topic of epistemology is knowledge: how we acquire it, particularly however the questions of, first, how to separate it from beliefs (in the common sense), and second, how to secure it in a way that we could possibly speak about truth. In a general account, epistemology is also about the conditions of knowledge.
Our position is pretty clear: the choreostemic space is something that is categorically different from episteme or epistemology. What are the reasons?
We reject the view that truth in its usual version is a reasonable category for talking about reasoning. Truth as a property of a proposition can’t be a part of the world. We can’t know anything for sure, neither regarding the local context, nor globally. Truth is an element of logic, and the only truth we can know of is empty: a=a. Yet, knowledge is supposed to be about empirical facts (arrangements of relations). Wittgenstein thus set logic as transcendental. Only transcendental logic can be free of semantics, and thus only within transcendental logic can we speak of truth conditions. The consequence is that we can observe either of two effects. First, any actual logic contains some semantic references, because of which it could be regarded as “logic” only approximately. Second, insisting on the application of logical truth values to actual contexts instead results in a categorical fault. The conclusion is that knowledge can be secured neither locally, from a small given set of sentences about empirical facts, nor globally. We can’t even measure the reliability of knowledge, since this would mean having more knowledge about the fact than is given by the local observations. As a result, paradoxes and antinomies occur. The only thing we can do is try to build networks of stable models for a negotiable anticipation with negotiable purposes. In other words, facts are not given by relations between objects, but rather as a system of relations between models, which as a whole is accepted by a community of co-modelers and provides satisfying anticipatory power. Compared to that, the notion of partial truth (Newton da Costa & Steven French) is still misconceived. It keeps sticking to the wrong basic idea, and as such it is inferior to our concept of the abstract model. After all, any account of truth ignores the fact that truth is itself a language game.
Dropping the idea of truth we could already conclude that the choreostemic space is not about epistemology.
Well, one might say: ok, then it is an improved epistemology. Yet, this we would reject as well. The reason is a grammatical one. Knowledge in the meaning of epistemology is either about sayable or about demonstrable facts. If someone says “I know”, or if someone ascribes to another person “he knows”, or if a person performs well and in hindsight her performance is qualified as “based on intricate knowledge” or the like, we postulate an object or entity called knowledge, almost in an ontological fashion. This perspective has been rejected by Isabelle Peschard [45]. According to her, knowledge can’t be separated from activity, or “enaction”, and knowledge must be conceived as a socially embedded practice, not as a stateful outcome. For her, knowledge is not about representation at all. This includes the rejection of truth conditions as a reasonable part of a concept of knowledge. Furthermore, it will be impossible to give a complete or analytical description of this enaction, because it is impossible to describe (= to explicate) the Form of Life in a self-contained manner.
In any case, however, knowledge is always, at least partially, about how to do something, even if it is about highly abstract issues. That means that a partial description of knowledge is possible. Yet, as a second grammatical reason, the choreostemic space does not allow for any representations at all, due to its structure, which is strictly local and made up from the second-order differential.
There are further differences. The CS is a tool for the expression of mental attractors, to which we can assign distinct yet open forms. To do so we need the concepts of mediality and virtuality, which are not mentioned anywhere in epistemology. Mental attractors, or figures, will always “comprise” beliefs, models, ideas and concepts as instances of transcendental entities, and these instances are local instances, which are even individually constrained. It is not possible to explicate these attractors other than by “living” them.
In some way, the choreostemic space is intimately related to the philosophy of C.S. Peirce, which is called “semiotics”. As he did, we propose a primacy of interpretation. We fully embrace his emphasis that signs only refer to signs. We agree with his attempt at discerning different kinds of signs. And we think that his firstness, secondness and thirdness could be related to the mechanisms of the choreostemic space. In some way, the CS could be conceived as a generalization of semiotics. Saying this, we may also point to the fact that Peirce’s philosophy is not regarded as epistemology either.
Rejecting the characterization of the choreostemic space as an epistemological subject, we can now even better understand the contours of the notion of mentality. The “mental” can’t be considered as a set of things like beliefs, wishes, experiences, expectations, thought experiments, etc. These are just practices, or likewise practices of speaking about the relation between private and public aspects of thinking. All of these items belong to the same mentality, to the same choreostemic figures.
In contrast to Wittgenstein, however, we propose to discard completely the distinction between internal and external aspects of the mental.
“And nothing is more wrong-headed than calling meaning a mental activity! Unless, that is, one is setting out to produce confusion.” [PI §693]
One of the transcendental aspects in the CS is concept, another is model. Both together provide the aspects of use, idea and reference; that is, there is nothing internal and external any more. It simply depends on the purpose of the description, or the kind of report we want to create about the mental, whether we talk about the mental in an internalist or an externalist way, whether we talk about acts, concepts, signs, or models. Regardless of what we do as humans, it will always be predominantly a mental act, irrespective of the accompanying material reconfigurations.
10. Conclusion
It is probably not an exaggeration to say that in the last two decades the diversity of mentality has been discovered. A whole range of developments and shifts in public life may have contributed to that, concerning several domains: politics, technology, social life, behavioural science and, last but not least, brain research. We saw the end of the Cold War, which signalled an unrooting of functionalism far beyond the domain of politics, and simultaneously the growth and discovery of the WWW and its accompanying “scopic44 media” [46, 47]. The “scopics” spurred the so-called globalization, which so far has worked much more in favour of the recognition of diversity than it has levelled that diversity. While we are still in the midst of the popularization and increasingly abundant usage of so-called machine learning, we already witness an intensified mutual penetration and amalgamation of technological and social issues. In the behavioural sciences, probably also supported by the deepening of mediatization, an unforeseen interest in the mental and social capabilities of animals has manifested, pushing back the merely positivist and dissecting description of behavior. One of the most salient examples is the confirmation of cultural traditions in dolphins and orcas, concerning communication as well as highly complex collaborative hunting. The unfolding of collaboration requires the mutual and temporal assignment of functional roles for a given task. This not only presupposes a true understanding of causality, but even its reflected use as a game in probabilistic spaces.
Let us distil three modes or forms here: (i) the animal cultures, (ii) the machine-becoming and of course (iii) the human life forms in the age of intensified mediatization. All three modes must be considered as “novel” ones, for one reason or another. We won’t go into any further detail here, yet it is pretty clear that the triad of these three modes renders any monolithic or anthropologically imprinted form of philosophy of mind impossible. In turn, any philosophy of mind that is limited to just the human brain’s relation to the world, or even worse, which imposes analytical, logical or functional perspectives onto it, must be considered as seriously defective. This still applies to large parts of the mainstream in the philosophy of mind (and even ethics).
In this essay we argued for a new Image of Thought that is independent from the experience of or by a particular form of life, form of informational45 organization or cultural setting, respectively. This new Image of Thought is represented through the choreostemic space. This space is dynamic and active and can be described formally only if it is “frozen” into an analytical reduction. Yet, its self-referentiality and self-directed generativity is a major ingredient. This self-referentiality takes a salient role in the space’s capability to leave its conditions behind.
One of the main points of the choreostemic space (CS) probably is that we cannot talk about “thought”—regardless of its quasi-material and informational foundations—without referring to the choreostemic space. It is a (very) strong argument against Rylean concepts about the mind that claim the irrelevance of the concept of the mental by proposing that looking at behavior is sufficient to talk about the “mind”. Of course, the CS does not support “the dogma of the ghost in the machine“ either. The choreostemic space defies (and helps to defy) any empirical, and so also anthropological, myopia through its triple feature of transcendental framing, differential operation and immanent rooting. Thus it is immune against naturalist fallacies such as Cartesian dualism, as well as against arbitrariness or relativism. Nor could it be infected by any kind of preoccupation such as idealism or universalism. Although one could regard it in some way as “pure Thought”, or consider it as the expressive situs of it, its purity is not an idealistic one. It dissolves either into the metaphysical transcendentality of the four conceptual aspects _a, that is, the _Model, _Mediality, _Concept, and _Virtuality. Or it takes the form of the Differential that could be considered as a kind of practical transcendentality46 [48]. There, as one of her starting points, Bühlmann writes:
Deleuze’s fundamental critique in Difference and Repetition is that throughout the history of philosophy, these conditions have always been considered as »already confined« in one way or another: Either within »a formless, entirely undifferentiated underground« or »abyss« even, or within the »highly personalized form« of an »autocratically individuated Being«
Our choreostemic space also provides the answer to the problematics of conditions.47 Like Deleuze, we suggest regarding conditions only as secondary, that is, as relevant entities only after any actualization. This avoids negativity as a metaphysical principle. Yet, in order to get completely rid of any condition while at the same time retaining conditionability as a transcendental entity, we have to resort to self-referentiality as a generic principle. Hence, our proposal goes beyond Deleuze’s framework as he developed it from “Difference and Repetition” to “What is Philosophy?”, since he never made this move.
Basically, the CS supports Wittgenstein’s rejection of materialism, which has experienced a completely unjustified revival in the various shades of neuro-isms. Malcolm cites him [50]:
It makes as little sense to ascribe experiences, wishes, thoughts, beliefs, to a brain as to a mushroom. (p.186)
This support should not surprise, since the CS was deliberately constructed to be compatible with the concept of the language game. Besides supporting his famous remark about meaning as use,
it is also clear that the CS may be taken as a means to overcome the debate about external or internal primacies or foundations of meaning. The duality of internal vs. external is neutralized in the CS. While modeling, and thus the abstract model, always requires some kind of material body, hence representing the route into some interiority, the CS is also spanned by the Concept and by Mediality. Both concepts are explicit ties between any kind of interiority and any kind of exteriority, without preferring a direction at all. The proposal that any mental activity inscribes attractors into that space just means that interiority and exteriority can’t be separated at all, regardless of the actual conceptualisation of mind or mentality. Yet, in accordance with PI 693 we also admit that the choreostemic space is not equal to the mental. Any particular mentality unfolds as an actual performance in the CS. Of course, the CS does not describe material reconfigurations, environmental contingency etc., nor the performance taking place “there”. In other words, it does not cover any aspect of use. On the other hand, material reconfigurations are simply not “there” as long as they do not get interpreted by applying some kind of model.
The CS clearly shows that we should regard questions like “Where is the mind?” as a kind of grammatical mistake, as Blair lucidly demonstrates [51]. Such a usage of the word “mind” not only irrevocably implies that it is a localizable entity. It also claims its conceptual separateness. Such a conceptualization of the mind is illusory. The consequences for any attempt to render “machines” “more intelligent” are obviously quite dramatic. As for the brain, it is likewise impossible to “localize” mental capacities in the case of epistemic machines. This fundamental de-territorialization is not a consequence of scale, as in quantum physics. It is a consequence of the verticality of the differential, the related necessity of forms of construction and the fact that a non-formal, open language, implying randolations to the community, is mandatory to deal with concepts.
One important question about a story like the “choreostemic space”, with its divergent but nevertheless intimately tied four-fold transcendentality, concerns the status of that space. What “is” it? How could it affect actual thought? Since we started even with mathematical concepts like space, mappings, topology, or differential, and since our arguments frequently invoke the concept of mechanism, one could suspect that it is a piece of analytical philosophy. This ascription we can clearly reject.
Peter Hacker convincingly argues that “analytical philosophy” can’t be specified by a set of properties of such an assumed philosophy. He proposes to consider it as a historical phase of philosophy, with several episodes, beginning around 1890 [53]. Nevertheless, during the 1970s a set of beliefs formed kind of a basic setup. Hacker writes:
But there was broad consensus on three points. First, no advance in philosophical understanding can be expected without the propaedeutic of investigating the use of the words relevant to the problem at hand. Second, metaphysics, understood as the philosophical investigation into the objective, language-independent, nature of the world, is an illusion. Third, philosophy, contrary to what Russell had thought, is not continuous with, but altogether distinct from science. Its task, contrary to what the Vienna Circle averred, is not the clarification or ‘improvement’ of the language of science.
Where we definitely disagree is the point about metaphysics. Not only do we reject the view that metaphysics is about the objective, language-independent nature of the world. Understood as such, we indeed would reject metaphysics; an example for this kind of thinking is provided by the writing of Whitehead. It should have become clear throughout our writing that we stick to the primacy of interpretation, and accordingly we regard the belief in an objective reality as deeply misconceived. Thereby we neither claim that our mental life is independent from the environment—as radical constructivism (Varela & Co) does—nor do we claim that there is no external world around us that is independent from our perception and constructions. Such would just be a belief in metaphysical independence, which plays an important role in modernism. The idea of objective reality is also infected by this belief, resulting in a self-contradiction. For “objective” makes sense only as an index to some kind of sociality, and hence to a group sharing a language, and further to the use of language. The claim of “objective reality” is thus childish.
More important, however, we have seen that the self-referentiality of terms like concept (we called those “strongly singular terms”) forces us to acknowledge that Concept, much like logic, is a transcendental category. Obviously we refer strongly to transcendental, that is metaphysical, categories. At the same time we also propose, however, that there are manifolds of instances of those transcendental categories.
The choreostemic space describes a mechanism. In that it resembles biology, where the concept of mechanism is an important epistemological tool. As such, we try to defend against mysticism, against the threat posed by any all too quick reference to the “Lebenswelt”, the form of life and the ways of living. But is it really an “analysis”?
Putnam called “analysis” an “inexplicable noise” [54]. His critique was precisely that semantics can’t be found by any kind of formalization, that is, outside of the use of language. In this sense we certainly are not doing analytic philosophy. As a final point we again want to emphasize that it is not possible to describe the choreostemic space completely, that is, all its conditions and effects, etc., due to its self-referentiality. It is a generative space that confirms its structure by itself. Nevertheless it is neither useless nor does it support solipsism. It can be used to describe the entirety of mental activity, but only in a fully conscious act, and this description is a fully non-representational one. In this way it overcomes not only the Cartesian dualism about consciousness. In fact, it is another way to criticise the distinction between interiority and exteriority.
For one part we agree with Wittgenstein’s critique (see also the work of PMS Hacker on this), which identifies the “mystery” of consciousness as an illusion. The concept of the language game, which is for one part certainly an empirical concept, is substantial for the choreostemic space. Yet, the CS provides several routes between the private and the communal, without actually representing one or the other. The CS does not distinguish between the interior and the exterior at all; just recall that mediality is one of the transcendental aspects. Along with Wittgenstein’s “solipsistic realism” we consequently also reject the idea that ontology can be about the external world, as this again would introduce such a separation. Quite the contrary, the CS dissolves the need for the naive conception of ontology. Ontology makes sense only within the choreostemic space.
Yet, we certainly embrace the idea that mental processes are ultimately “based” on physical matter, but unfolded into and by their immaterial external surrounds, yielding an inextricable compound. Referring to any “neuro” stuff regarding the mental neither “explains” anything nor is it helpful in any regard, whether one considers it as neuro-science or as neuro-phenomenology.
Summarizing the issue, we may say that the choreostemic space opens a completely new level for any philosophy of the mental, not just of what is being called the human “mind”. It also allows us to address scientific questions about the mental in a different way, and it clarifies the route to machines that could draw their own traces and figures into that space. It makes irrevocably clear that any kind of functionalism or materialism is once and for all falsified.
Let us now finally inspect the initial question that we put forward in the editorial essay. Is there a limit to the mental capacity of machines? If yes, which kind of limit, and where could we draw it? The question about the limit of machines directly triggers the question about the image of humanity („Bild des Menschen“), which is fuelled from the opposite direction. So, does this imply a kind of demarcation line between the domain of the machines and the realm of the human? Definitely not, of course. To opt for such a separation would not only follow the idealist-romanticist line of criticising technology, but also instantiate a primary negativity.
Based on the choreostemic space, our proposal is a fundamentally different one. It can be argued that this space contains any condition of any thought as a population of unfolding thoughts. These unfoldings inscribe different successions into the space, appearing as attractors and figures. The key point is that different figures, representing different Lebensformen (forms of life) that are probably even incommensurable to each other, can be related to each other without reducing any of them. The choreostemic space is a space of mental co-habitation.
Let us for instance start with the functionalist perspective that has been so abundant in modernism since the times of Descartes. A purely functionalist stance is just a particular figure in that space, as applies to any other style of thinking. Using the dictum of the choreosteme as a guideline, it is relatively easy to widen the perspective into a more appropriate one. Several developmental paths into a different choreostemic attractor are possible: for instance, mediatization through social embedding [52], opening through autonomous associative mechanisms as we have described it, or the ad hoc recombination of conceptual principles as demonstrated by Douglas Hofstadter. Letting a robot range around freely also provokes the first tiny steps away from functionalism, albeit the behavioral Bauplan of the insects (arthropods) demonstrates that this does not install a necessity for the evolutionary path towards advanced mental capabilities.
The choreostemic space can serve as such a guideline because it is not infected by anthropology in any regard. Nevertheless it allows us to speak clearly about concepts like belief and knowledge, of course without reducing these concepts to positive-definite or functionalist definitions. It also remains completely compatible with Wittgenstein’s concept of the language game. For instance, we reconstructed the language game “knowing” as a label for a pointer (say, reference) to a particular image of thought and its use. Of course, this figure should not be conceived as a fixed-point attractor, as the various shades of materialism, idealism and functionalism actually would do (if they argued along the choreosteme). It is somewhat interesting that here, by means of the choreostemic space, Wittgenstein and Deleuze approach each other quite closely, something they themselves probably would not have endorsed.
Where is the limit of machines, then?
I guess, any answer must refer to the capability to leave a well-formed trace in the choreostemic space. As such, the limits of machines are to be found in the same way as they are found for us humans: To feel and to act as an entity that is able to contribute to culture and to assimilate it in its mental activity.
We started the choreostemic space as a framework to talk about thinking, or more generally about mentality, in a non-anthropological and non-reductionist manner. In the course of our investigation, we found a tool that actualizes itself into real social and cognitive situations. We also found the infinite space of choreostemic galaxies as attractors for eternal returns without repetition of the identical. The choreosteme keeps the “any” alive without subjugating individuality; it provides a new and extended level of sayability without falling into representationalism. Taken together, as a new Image of Thought it allows us to develop thinking deliberately and as part of a multitudinous variety.
1. This piece is thought of as a close relative to Deleuze’s Difference & Repetition (D&R)[1]. Think of it as a satellite of it, whose point of nearest approach is at the end of part IV of D&R, and thus also as a kind of extension of D&R.
2. Deleuze of course belongs to them, but of course also Ludwig Wittgenstein (see §201 of PI [2], the “paradox” of rule following), and Wilhelm Vossenkuhl [3], who presented three mutually paradoxical maxims as a new kind of theory of morality (ethics), one that resists the reference to monolithically set first principles, as found for instance in John Rawls’ “Theory of Justice”. The work of those philosophers also provides examples of how to turn paradoxicality productive, without creating paradoxes at all, the main trick being to overcome their fixation by a process. Many others, including Derrida, just recognize paradoxes, but are neither able to conceive of paradoxicality nor to distinguish it from paradoxes; hence they take paradoxes just as unfortunate ontological knots. In such works, one can usually find one or the other way to prohibit interpretation (think about the trace, grm. “Spur”, in Derrida).
3. Paradoxes and antinomies like those described by Taylor, Banach-Tarski, Russell or of course Zeno are all defective, i.e. pseudo-paradoxes, because they violate their own “gaming pragmatics”. They are not paradoxical at all, but rather either simply false or arbitrarily fixed within the state of such violation. The same fault is committed by the Sorites paradox and its relatives. They all mix up—or collide—the language game of countability or counting with the language game of denoting non-countability, as represented by the infinite or the infinitesimal. Instead of saying that they violate the apriori self-declared “gaming pragmatics”, we could also say that they change the most basic reference system on the fly, without any indication of doing so. This may happen through an inadequate use of the concept of infiniteness.
4. DR 242 eternal return: it is not the same and the identical that returns, but the virtual structuredness (not even a “principle”), without which metamorphosis can’t be conceived.
5. In „Difference and Repetition“, Deleuze chose to spell “Idea” with a capital letter, in order to distinguish his concept from the ordinary word.
7. Here we find interesting possibilities for a transition to Alan Turing‘s formal foundation of creativity [5].
8. This includes the usage of concepts like virtuality, differential, problematic field, the rejection of the primacy of identity and closely related to that, the rejection of negativity, the rejection of the notion of representation, etc. Rejecting the negative opens an interesting parallel to Wittgenstein’s insisting on the transcendentality of logics and the subordination of any practical logic to performance. Since the negative is a purely symbolic entity, it is also purely aposteriori to any genesis, that is self-referential performance.
9. I would like to recommend to take a look to the second part of part IV in D&R, and maybe, also to the concluding chapter therein (download it here).
10. Saying „we“ here is not just due to some hyperbolic politeness. The targeted concept of this essay, the choreosteme, has been developed by Vera Bühlmann and the author of this essay (Klaus Wassermann) in close collaboration over a number of years. Finally the idea proved to be so strong that now there is some dissent about the role and the usage of the concept.
11. For belief revision as described by others, see the overview @ Stanford and a critique by Pollock, who clarified that belief revision as comprised and founded by the AGM theory (see below) is incompatible with standard epistemology.
12. By symbolism we mean the belief that symbols are the primary and apriori existent entities for any description of any problematic field. In machine-based epistemology, for instance, we cannot start with data organized in tables because this presupposes a completed process of “ensymbolization”. Yet, in the external world there are no symbols, because symbols only exist subsequent to interpretation. We can see that symbolism creates a chicken-and-egg problem.
13. Miriam Meckel, communication researcher at the University of Zürich, is quite active in drawing dark-grey pictures. Recently, she coined “Googlem” as a blend of Google and Golem. Meckel commits several faults in that: she does not understand the technology (accusing Google of using averages), and she forgets about the people (programmers) behind “the computer”, as well as the people using the software. She follows exactly the pseudo-romantic separation between nature and the artificial.
Miriam Meckel, Next. Erinnerungen an eine Zukunft ohne uns, Rowohlt 2011.
14. Here we find a resemblance to Wittgenstein’s refusal to attribute to philosophy the role of an enabler of understanding. According to Wittgenstein, philosophy does not and cannot describe; it can only show.
15. This also concerns the issue of cross-culturality.
16. Due to some kind of cultural imprinting, a frequently and solitarily exercised habit, people almost exclusively think of Cartesian spaces as soon as a “space” is needed. Yet, there is no necessary implication between the need for a space and the Cartesian type of space. Even Deleuze did not recognize the difficulties implied by the reference to the Cartesian space, not only in D&R, but throughout his work. Nevertheless, there are indeed passages (in What is Philosophy?, with its “planes of immanence”, or in the “Fold”) where it seems that he could have sensed a different conception of space.
17. For the role of „elements“ please see the article about „Elementarization“.
18. Vera Bühlmann [8]: „Insbesondere wird eine Neu-Bestimmung des aristotelischen Verhältnisses von Virtualität und Aktualität entwickelt, unter dem Gesichtspunkt, dass im Konzept des Virtuellen – in aller Kürze formuliert – das Problem struktureller Unendlichkeit auf das Problem der zeichentheoretischen Referenz trifft.“ (In English: “In particular, a re-determination of the Aristotelian relation between virtuality and actuality is developed, from the viewpoint that in the concept of the virtual—put very briefly—the problem of structural infinity meets the problem of sign-theoretic reference.”)
19. which is also a leading topic of our collection of essays here.
20. e.g. Gerhard Gamm, Sybille Krämer, Friedrich Kittler
21. cf. G.C. Tholen [7], V.Bühlmann [8].
22. see the chapter about machinic platonism.
23. Actually, Augustine instrumentalises the discovered difficulty to propose the impossibility to understand God’s creation.
24. It is an „ancestry“ only with respect to the course in time, as the result of a process, not however in terms of structure, morphology etc.
25. cf. C.S. Peirce [16], Umberto Eco [17], Helmut Pape [18];
26. Note that in terms of abstract evolutionary theory, rugged fitness landscapes enforce specialisation, but also bring along an increased risk of extinction of the whole species. Flat fitness landscapes, on the other hand, allow for great diversity. Of course the fitness landscape is not a stable parameter space, neither locally nor globally. In some sense, it is not even a determinable space. Much like the choreostemic space, it would be adequate to conceive of the fitness landscape as a space built from the 2-set of transformatory power and the power to remain stable. Both can be determined only in hindsight. This paradoxicality is not by chance, yet it has not been discovered as an issue in evolutionary theory.
27. Of course I know that there are important differences between verbs and substantives, which we may level out in our context without losing too much.
28. In many societies, believing has been thought to be tied to religion, the rituals around the belief in God(s). Since the Renaissance, with upcoming scientism and the profanisation of societies, religion and science established sort of a replacement competition. Michel Serres described how scientists took over the positions and the funds previously held by the clerics. The impression of a competition is well understandable, of course, if we consider the “opposite direction” of the respective vectors in the choreostemic space. Yet, it is also quite mistaken, maybe itself provoked by excessive idealisation, since neither the cleric can make his day without models nor the scientist his without beliefs.
29. The concept of “theory” referred to here is oriented towards a conceptualisation based on language game and orthoregulation. Theories need to be conceived as orthoregulative milieus of models in order to be able to distinguish between models and theories, something which can’t be accomplished by analytic concepts. See the essay about theory of theory.
30. Of course, we do not claim to thereby cover completely the relation between experiments, experience and observation on the one side and their theoretical account on the other. We just would like to emphasize the inextricable dynamic relation between modeling and concepts in scientific activities, whether in professional or “everyday” science. For instance, much could be said in this regard about the path of decoherence from information and causality. Both aspects, the decoherence and the flip from intensifying modeling over to a conceptual form, have not been conceptualized before. The reason is simple enough: there was no appropriate theory about concepts.
When, for instance, Radder [28] contends that the essential step from experiment to theory is to disconnect theoretical concepts from the particular experimental processes in which they have been realized [p.157], then he not only misconceives the status and role of theories, he also does not realize that experiments are essentially material actualisations of models. Abstracting regularities from observations into models and shaping the milieu for such a model in order to find similar ones, thereby achieving generalization, is anything but disconnecting them. It seems that he overshot a bit in his critique of scientific constructivism. Additionally, his perspective does not provide any possibility to speak about the relation between concepts and models. Though Radder obviously had the feeling of a strong change on the way from putting observations into scene towards concepts, he fails to provide a fruitful picture of it. He can’t surpass that feeling towards insight, as he muses about “… ‘unintended consequences’ that might arise from the potential use of theoretical concepts in novel situations.” Such descriptions are close to scientific mysticism.
Radder’s account is a quite recent one, but others are not really helpful about the relation between experiment, model and concept either. Kuhn’s much-praised concept of paradigmatic changes [24] can be rated at most as a phenomenological or historizing description. Sure, his approach brought a fresh perspective in times of overdone reductionism, but he never provided any kind of abstract mechanism. Other philosophers of science stuck to concepts like prediction (cf. Reichenbach [20], Salmon [21]) and causality (cf. Bunge [22], Pearl [23]), which of course can’t say anything about the relation to the category of concepts. Finally, Nancy Cartwright [25], Isabelle Stengers [26], Bruno Latour [9] and Karin Knorr Cetina [10] are representatives of the various shades of constructivism, whether individually shaped or as a phenomenon embedded in a community, which also can’t say anything about concepts as categories. A screening of the Journal of Applied Measurement did not reveal any significantly different items.
Thus, so far philosophy of science, sociology and history of science have been unable to understand the particular dynamics between models and concepts as abstract categories, i.e. as _Models_ or _Concepts_.
31. If the members of a community, or even the participants in random interactions within it, agree on the persistence of their relations, then they will tend to exhibit a stronger propensity towards collaboration. Robert Axelrod demonstrated this on the formal level by means of a computer experiment [33]. He was the first to propose game theory as a means to explain the choice of strategies between interactants.
32. Orig.: „Seit über 200 Jahren ist die Philosophie anthropologisch bestimmt. Was das genauer bedeutet, hat sie dagegen kaum erforscht.“ (In English: “For more than 200 years, philosophy has been anthropologically determined. What this means precisely, however, it has hardly investigated.”)
33. Orig.: „Nietzsches Idealismuskritik, die in vielen Schattierungen vorliegt und immer auf das philosophische Selbstmissverständnis eines reinen Geistes und reiner Begriffe zielt, richtet sich auch gegen ein bestimmtes Naturverständnis.“ (KAV439) (In English: “Nietzsche’s critique of idealism, which exists in many shades and always aims at the philosophical self-misunderstanding of a pure mind and pure concepts, is also directed against a particular understanding of nature.”)
34. More precisely, in evolutionary processes the capability for generalization is selected under conditions of scarcity. Scarcity, however, is inevitably induced under the condition of growth or consumption. It is important to understand that newly emerging levels of generalization do not replace former levels of integration. Those undergo a transformation with regard to their relations and their functional embedding, i.e. with regard to their factuality. In morphology of biological specimens this is well-known as “Überformung”. For more details about evolution and generalization please see this.
35. The notions of “philosophy of nature” or even “natural philosophy” are strictly inappropriate. Both “kinds” of philosophy are not possible at all. They have to be regarded as a strange mixture of contemporarily available concepts from science (physics, chemistry, biology), mysticism or theism and the mistaken attempt to transfer topics as such from there to philosophy. Usually, the result is simply a naturalist fallacy with serious gaps regarding the technique of reflection. Think about Kant’s physicalistic tendencies throughout his philosophy, the unholy adaptation of Darwinian theory, analytic philosophy, which is deeply influenced by cybernetics, or the comeback of determinism and functionalism due to almost ridiculous misunderstandings of the brain.
Nowadays it must be clear that philosophy before the reflection of the role of language, or more generally, before the role of languagability—which includes processes of symbolization and naming—can’t be regarded as serious philosophy. Results from the sciences can be imported into philosophy only as formalized structural constraints. Evolutionary theory, for instance, first has to be formalized appropriately (as we did here) before it could be of any relevance to philosophy. Yet, what is philosophy? Besides Deleuze’s answer [35], we may conceive of philosophy as a technique of asking about the conditionability of the possibility to reflect. Hence Wittgenstein said that philosophy should be regarded as a cure. Thus philosophy includes fields like ethics as a theory of morality, or epistemology, which we developed here into a “choreostemology”.
36. Orig.: „Der Punkt, um den es sich namentlich handelt, lässt sich ganz bestimmt angeben. Es ist gleichsam der Apfel in dem logischen Sündenfall der deutschen Philosophie nach Kant: das Verhältnis zwischen Subjekt und Objekt in der Erkenntnis.“ (In English: “The point particularly at issue can be stated quite precisely. It is, as it were, the apple in the logical Fall of German philosophy after Kant: the relation between subject and object in cognition.”)
37. Although Rölli usually esteems Deleuze’s philosophy of the differential, here he refers to difference. I think it should be read as “divergence and differential”.
38. Orig.: „Nach allem wird klarer geworden sein, dass es sich bei diesem Pragmatismus nicht um einen einfachen Pragmatismus handelt, sondern um einen mit aller philosophischen Raffinesse konstruierten Pragmatismus der Differenz.“ (In English: “After all this it will have become clearer that this pragmatism is not a simple pragmatism, but a pragmatism of difference, constructed with all philosophical refinement.”)
39. As scientific facts, quantum physics, the probabilistic structure of the brain and the non-representationalist working of the brain falsify determinism as well as the finiteness of natural processes, even if there should be something like “natural laws”.
40. See the article about the structure of comparison.
41. Even Putnam does so, not only in his early functionalist phase, but still in Representation and Reality [36].
42. Usually, philosophers are trained only in logics, which does not help much, since logic is not a process. Of course, being trained in mathematical structures does not imply that the resulting philosophy is reasonable at all. Take Alain Badiou as an example, who just blows up materialism.
43. A complete new theory of governmentality and sovereignty would be possible here.
44. The notion of “scopic” media as coined by Knorr Cetina means that modern media substantially change the point of view (“scopein”, looking, viewing). Today, we are not just immersed in them; we deliberately choose them and search for them. The change of perspective is thought to be a multiple one, contracting space and time. This, however, is not quite typical for the new media.
45. Here we refer to our extended view of “information” that goes far beyond the technically reduced perspective that forms the mainstream today. Information is a category that can’t be limited to the immaterial. See the chapter about “Information and Causality”.
46. Vera Bühlmann described certain aspects of Deleuze’s philosophy as an attempt to naturalize transcendentality in the context of emergence, as it occurs in complex systems. Deleuze described the respective setting in “Logic of Sense” [49] as the 14th series of paradoxes.
47. …which is not quite surprising, since we developed the choreostemic space together.
• [1] Gilles Deleuze, Difference and Repetition. Translated by Paul Patton, Athlon Press, 1994 [1968].
• [2] Ludwig Wittgenstein, Philosophical Investigations.
• [3] Wilhelm Vossenkuhl. Die Möglichkeit des Guten. Beck, München 2006.
• [4] Jürgen Habermas, Über Moralität und Sittlichkeit – was macht eine Lebensform »rational«? in: H. Schnädelbach (Hrsg.), Rationalität. Suhrkamp, Frankfurt 1984.
• [5] Alan Turing. Chemical Basis of Morphogenesis.
• [7] Georg Christoph Tholen. Die Zäsur der Medien. Kulturphilosophische Konturen. Suhrkamp, Frankfurt 2002.
• [8] Vera Bühlmann. Inhabiting media : Annäherungen an Herkünfte und Topoi medialer Architektonik. Thesis, University of Basel 2011. available online, summary (in German language) here.
• [9] Bruno Latour,
• [10] Karin Knorr Cetina (1991). Epistemic Cultures: Forms of Reason in Science. History of Political Economy, 23(1): 105-122.
• [11] Günther Ropohl, Die Unvermeidlichkeit der technologischen Aufklärung. In: Paul Hoyningen-Huene, & Gertrude Hirsch (eds.), Wozu Wissenschaftsphilosophie? De Gruyter, Berlin 1988.
• [12] Bas C. van Fraassen, Scientific Representation: Paradoxes of Perspective. Oxford University Press, New York 2008.
• [13] Ronald N. Giere, Explaining Science: A Cognitive Approach. Cambridge University Press, Cambridge 1988.
• [14] Aaron Ben-Ze’ev, Is There a Problem in Explaining Cognitive Progress? pp.41-56 in: Robert F. Goodman & Walter R. Fisher (eds.), Rethinking Knowledge: Reflections Across the Disciplines (Suny Series in the Philosophy of the Social Sciences) SUNY Press, New York 1995.
• [15] Robert Brandom, Making it Explicit.
• [16] C.S. Peirce, var.
• [17] Umberto Eco,
• [18] Helmut Pape, var.
• [19] Vera Bühlmann, “Primary Abundance, Urban Philosophy — Information and the Form of Actuality.” pp.114-154, in: Vera Bühlmann (ed.), Printed Physics. Springer, New York 2012, forthcoming.
• [20] Hans Reichenbach, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge, University of Chicago Press, Chicago, 1938.
• [21] Wesley C. Salmon, Causality and Explanation. Oxford University Press, New York 1998.
• [22] Mario Bunge, Causality and Modern Science. Dover Publ. 2009 [1979].
• [23] Judea Pearl , T.S. Verma (1991) A Theory of Inferred Causation.
• [24] Thomas S. Kuhn, The Structure of Scientific Revolutions.
• [25] Nancy Cartwright. var.
• [26] Isabelle Stengers, Spekulativer Konstruktivismus. Merve, Berlin 2008.
• [27] Peter M. Stephan Hacker, “Of the ontology of belief”, in: Mark Siebel, Mark Textor (eds.), Semantik und Ontologie. Ontos Verlag, Frankfurt 2004, pp. 185–222.
• [28] Hans Radder, “Technology and Theory in Experimental Science.” in: Hans Radder (ed.), The Philosophy Of Scientific Experimentation. Univ of Pittsburgh 2003, pp.152-173
• [29] C. Alchourron, P. Gärdenfors, D. Makinson (1985). On the logic of theory change: Partial meet contraction functions and their associated revision functions. Journal of Symbolic Logic, 50: 510–530.
• [30] Sven Ove Hansson (1998). Editorial to Thematic Issue on: “Belief Revision Theory Today”, Journal of Logic, Language, and Information 7(2), 123-126.
• [31] John L. Pollock, Anthony S. Gillies (2000). Belief Revision and Epistemology. Synthese 122: 69–92.
• [32] Michael Epperson (2009). Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse. Process Studies, 38:2, 339-366.
• [33] Robert Axelrod, Die Evolution der Kooperation. Oldenbourg, München 1987.
• [34] Marc Rölli, Kritik der anthropologischen Vernunft. Matthes & Seitz, Berlin 2011.
• [35] Deleuze, Guattari, What is Philosophy?
• [36] Hilary Putnam, Representation and Reality.
• [37] Giorgio Agamben, The State of Exception. University of Chicago Press, Chicago 2005.
• [38] Elena Bellina, “Introduction.” in: Elena Bellina and Paola Bonifazio (eds.), State of Exception. Cultural Responses to the Rhetoric of Fear. Cambridge Scholars Press, Newcastle 2006.
• [39] Friedrich Albert Lange, Geschichte des Materialismus und Kritik seiner Bedeutung in der Gegenwart. Frankfurt 1974. available online @
• [40] Michel Foucault, Archaeology of Knowledge.
• [41] Benjamin Morgan, Undoing Legal Violence: Walter Benjamin’s and Giorgio Agamben’s Aesthetics of Pure Means. Journal of Law and Society, Vol. 34, Issue 1, pp. 46-64, March 2007. Available at SSRN:
• [42] Michael Epperson, “Bridging Necessity and Contingency in Quantum Mechanics: The Scientific Rehabilitation of Process Metaphysics.” in: David R. Griffin, Timothy E. Eastman, Michael Epperson (eds.), Whiteheadian Physics: A Scientific and Philosophical Alternative to Conventional Theories. in process, available online; mirror
• [43] Gilles Deleuze, Empiricism and Subjectivity. An Essay on Hume’s Theory of Human Nature. Columbia University Press, New York 1989.
• [44] Gilles Deleuze, Pure Immanence – Essays on A Life. Zone Books, New York 2001.
• [45] Isabelle Peschard
• [46] Karin Knorr Cetina (2009). The Synthetic Situation: Interactionism for a Global World. Symbolic Interaction, 32(1), pp. 61-87.
• [47] Karin Knorr Cetina (2012). Skopische Medien: Am Beispiel der Architektur von Finanzmärkten. In: Andreas Hepp & Friedrich Krotz (eds.), Mediatisierte Welten: Beschreibungsansätze und Forschungsfelder. VS Verlag, Wiesbaden, pp. 167-195.
• [48] Vera Bühlmann, “Serialization, Linearization, Modelling.” First Deleuze Conference, Cardiff 2008; “Gilles Deleuze as a Materialist of Ideality”, lecture held at the Philosophy Visiting Speakers Series, University of Duquesne, Pittsburgh 2010.
• [49] Gilles Deleuze, Logic of Sense. Columbia University Press, New York 1991 [1990].
• [50] N. Malcolm, Nothing is Hidden: Wittgenstein’s Criticism of His Early Thought, Basil Blackwell, Oxford 1986.
• [51] David Blair, Wittgenstein, Language and Information: “Back to the Rough Ground!” Springer, New York 2006. mirror
• [52] Caroline Lyon, Chrystopher L Nehaniv, J Saunders (2012). Interactive Language Learning by Robots: The Transition from Babbling to Word Forms. PLoS ONE 7(6): e38236. Available online (doi:10.1371/journal.pone.0038236)
• [53] Peter M. Stephan Hacker, “Analytic Philosophy: Beyond the linguistic turn and back again”, in: M. Beaney (ed.), The Analytic Turn: Analysis in Early Analytic Philosophy and Phenomenology. Routledge, London 2006.
• [54] Hilary Putnam, The Meaning of “Meaning”, 1976.
Dealing with a Large World
June 10, 2012
The world as an imaginary totality of all actual and virtual relationships between assumed entities can be described in innumerable ways. Even what we call a “characteristic” forms only in a co-dependent manner, together with the formation processes of entities and relationships. This fact is particularly disturbing if we encounter something for the first time, without the guidance provided by more or less applicable models, traditions, beliefs or quasi-material constraints. Without those means any selection out of all possible or constructible properties is doomed to be fully contingent, subject to pure randomness.
Yet, this does not mean that the results are similarly random. Given the available tools and methods for a task or situation at hand, modeling is for the most part the task of reducing the infiniteness of possible selections in such a way that the resulting representation can be expected to be helpful. Of course, this “utility” is not a hard measure in itself. It is not only dependent on the subjective attitude to risk, mainly the model risk and the prediction risk; utility is also relative to the scale of the scope, in other words, whether one is interested in motor or other purely physical aspects, tactical aspects or strategic aspects, whether one is interested in more local or global aspects, both in time and space, or whether one is interested in some balanced mixture of those aspects. Establishing such a mixture is a modeling task in itself, of course, albeit one that is often accomplished only implicitly.
The randomness mentioned above is a direct corollary of empirical underdetermination1. From a slightly different perspective, we may also say that it is an inevitable consequence of the primacy of interpretation. And we should not forget that language, and particularly metaphors in language—and any kind of analogical thinking as well—are means to deal constructively with that randomness, turning physical randomness into contingency. Even under the guidance of predictivity—it is only a soft guidance though—large parts of what we reasonably could conceive as facts (as temporarily fixed arrangements of relations) are mere collaborative construction, an ever undulating play between the individual and the general.
Even if analogical thinking indeed is the cornerstone, if not the Acropolis, of human mindedness, it is always preceded by and always rests upon modeling. Only a model allows to pick some aspect out of the otherwise unsorted impressions taken up from the “world”. In previous chapters we already discussed quite extensively the various general as well as some technical aspects of modeling, from an abstract as well as from a practical perspective.2 Here we focus on a particular challenge, the selection task regarding the basic descriptors used to set up a particular model.
Well, given a particular modeling task, we have the practical challenge to reduce a large set of pre-specific properties into a small set of “assignates” that together represent in some useful way the structure of the dynamics of the system that we observed. How do we reduce a set of properties, created by observation, that comprises several hundred entries?
The particular challenge arises even in the case of linear systems if we try to avoid subjective “cut-off” points that are buried deep inside the method we use. Such heuristic means are widespread in statistically based methods. The bad thing about them is that you can’t control their influence on the results. Since the task comprises the selection of properties for the description of the entities (prototypes) to be formed, such arbitrary thresholds, often justified or even enforced just by the method itself, will exert a profound influence on the semantic level. In other words, the method corroborates its own assumption of neutrality.
Yet, we should never assume linearity of a system, because most of the interesting real systems are non-linear, even in the case of trivial machines. Brute force approaches are not possible, because the number of possible models is 2^n, with n the number of properties or variables. Non-linear models can’t be extrapolated from known ones, of course. The Laplacean demon3 became completely wrapped by Thomean folds4, being even quite worried by things like Turing’s formal creativity5.
When dealing with observations from “non-linear entities”, we are faced with the necessity to calculate and evaluate any selection of variables explicitly. Assuming a somewhat phantastic figure of 0.0000001 seconds (10^-7) needed to calculate a single model, we still would need about 10^15 years to visit all models if we had to deal with just 100 variables. To make it more palpable: it would take roughly a million times longer than the age of the earth, which is about 4.5 billion years…
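To make the arithmetic reproducible, here is a back-of-envelope check of the combinatorial explosion; the per-model time of 10^-7 seconds is the assumption from the paragraph above.

```python
# Back-of-envelope check of the combinatorial explosion sketched above:
# 100 variables, each either included or excluded, at an assumed 1e-7
# seconds per model evaluation.
n_models = 2 ** 100                      # all subsets of 100 variables
seconds = n_models * 1e-7                # assumed cost per model
years = seconds / (3600 * 24 * 365.25)
print(f"{years:.2e} years")              # ~4e15 years, about a million earth-ages
```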
Obviously, we have to drop the idea that we can “prove” the optimality of a particular model. The only thing we can do is to minimize the probability that within a given time T a better model could be found. On the other hand, the data are not of unbounded complexity, since real systems are not either. There are regularities, islands of stability, so to speak. There is always some structure; otherwise the system would not persist as an observable entity. As a consequence, we can organize the optimization of this “failure time probability”; we may even consider this as a second-order optimization. We may briefly note that the actual task thus is not only to select a proper set of variables, we also should identify the relations between the observed and constructed variables. Of course, there are always several if not many sets of variables that we could consider as “proper”, precisely for the reason that they form a network of relations, even if this network is probabilistic in nature and itself kind of a model.
So, how to organize this optimization? Basically, everything has to be organized as nested, recurrent processes. The overall game we could call learning. Yet, it should be clear that every “move” and every fixation of some parameter and its value is nothing else than a hypothesis. There is no “one-shot-approach”, and no linear progression either.
If we want to avoid naive assumptions—and any assumption that remains untested is de facto a naive assumption—we have to test them. Everything is trial and error, or expressed in a more educated manner, everything has to be conceived as a hypothesis. Consequently we can reduce the number of variables only by a recurrent mechanism. As a lemma we conclude that any approach that does not reduce the number of variables in a recurrent fashion can’t be conceived as a sound approach.
Contingent Collinearities
It is the structuredness of the observed entity that causes the similarity of any two observations across all available or apriori chosen properties. We may also expect that any two variables could be quite “similar”6 across all available observations. This provides the first two opportunities for reducing the size of the problem. Note that such reduction by “black-listing” applies only to the first steps in a recurrent process. Once we have evidence that certain variables do not contribute to the predictivity of our model, we may loosen the intensity of any of the reductions! Instead of removing a variable from the space of expressibility, we may prefer to maintain a weighted preference list in later stages of modeling.
So, if we find n observations or variables being sufficiently collinear, we could remove a portion p(n) from this set, or we could compress them by averaging.
R1: reduction by removing or compressing collinear records.
R2: reduction by removing or compressing collinear variables.
A feasible criterion for assessing the collinearity is the monotonicity in the relationship between two variables, as it is reflected by Spearman’s correlation. We could also apply K-means clustering using all variables, then average all observations that are “sufficiently close” to the center of the clusters.
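A minimal sketch of R2 along these lines might look as follows; the data layout (observations in rows, variables in columns) and the threshold of 0.9 are illustrative assumptions, not fixed parts of the procedure.

```python
# Sketch of R2: greedily drop variables that are monotonically related
# (high absolute Spearman correlation) to an already retained variable.
import numpy as np
from scipy.stats import spearmanr

def drop_collinear_variables(X, threshold=0.9):
    """X: observations in rows, variables in columns (at least 3 columns,
    so that spearmanr returns a full correlation matrix)."""
    rho, _ = spearmanr(X)                      # (n_vars x n_vars) matrix
    kept = []
    for j in range(X.shape[1]):
        if all(abs(rho[j, k]) < threshold for k in kept):
            kept.append(j)
    return kept

# Usage: X_reduced = X[:, drop_collinear_variables(X)]
```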
Albeit the respective thresholding is only a preliminary tactical move, we should be aware of the problematics introduced by such a reduction. Firstly, it is the size of the problem that brings in a notion of irreversibility, even if we are fully aware of the preliminarity. Secondly, R1 is indeed critical because it is in some quite obvious way a petitio principii. Even tiny differences in some variables could be masked by larger differences in variables that are ultimately recognized as irrelevant. Hence, very tight constraints should be applied when performing R1.
When removing collinear records we also have to take care of the outcome indicator. Often, the focused outcome is much less frequent than its “opposite”. Preferably, we should remove records that are marked as negative outcome, up to a ratio of 1:1 between positive and negative outcomes in the reduced data. Such “adaptive” sampling is similar to so-called “biased sampling”, as sketched below.
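A minimal sketch of such adaptive downsampling, assuming a binary outcome coded as 0/1; the random seed is for reproducibility only.

```python
# Sketch of R1 with respect to the outcome indicator: downsample
# negative records (y == 0) towards a ratio of at most 1:1.
import numpy as np

def balance_by_downsampling(X, y, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    keep_neg = rng.choice(neg, size=min(len(neg), len(pos)), replace=False)
    idx = np.sort(np.concatenate([pos, keep_neg]))
    return X[idx], y[idx]
```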
Directed Collinearities
In addition to those two collinearities there is a third one, which is related to the purpose of the model. Variables that do not contribute to the predictive reconstruction of the outcome we could call “empirically empty”.
R3: reduction by removing empirically empty variables
Modeling without a purpose can’t be considered to be modeling at all7, so we always have a target variable available that reflects the operationalization of the focused outcome. We could argue that only those variables are interesting for a detailed inspection that are collinear to the target variable.
Yet, that’s a problematic argument, since we need some kind of model to make the decision whether to exclude a variable or not, based on some collinearity measure. Essentially, that model claims to predict the predictivity of the final model, which of course is not possible. Any such apriori “determination” of the contribution of a variable to the final predictivity of a model is nothing else than a very preliminary guess. Thus, we indeed should treat it just as a guess, i.e. we should consider it as a propensity weight for selecting the variable. In the first explorative steps, however, we could choose an aggressive threshold, causing the removal of many variables from the vector.
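The following sketch turns target collinearity into selection propensities rather than a hard filter; the floor constant that keeps every variable selectable is an illustrative assumption.

```python
# Sketch of R3 as a propensity weighting: rank-correlate each variable
# with the target and normalize into selection probabilities.
import numpy as np
from scipy.stats import spearmanr

def selection_propensities(X, y, floor=0.05):
    scores = np.array([abs(spearmanr(X[:, j], y)[0]) for j in range(X.shape[1])])
    scores = np.nan_to_num(scores) + floor     # never exclude a variable for good
    return scores / scores.sum()

# Sample a candidate subset of 20 variables according to the weights:
# cols = np.random.default_rng(1).choice(
#     X.shape[1], size=20, replace=False, p=selection_propensities(X, y))
```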
R1 removes redundancy across observations. The same effect can be achieved by a technique called “bagging”, or similarly “foresting”. In both cases a comparatively small portion of the observations is taken to build a “small” model; the “bag” or “forest” of all small models is then taken to build the final, compound model. Bagging as a technique of “split & reduce” can also be applied in the variable domain.
R4: reduction of complexity by splitting
Once an acceptable model or set of models has been built, we can check the postponed variables one after another. In the case of splitting, the confirmation is implicitly performed by weighting the individual small models.
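A minimal sketch of R4 in the variable domain; the base learner (logistic regression) and the cross-validated score as the weighting criterion are illustrative choices, not prescribed by the text.

```python
# Sketch of R4, "split & reduce": build many small models on random
# variable subsets and weight each variable by the average performance
# of the small models in which it appeared.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def variable_bagging(X, y, n_bags=50, bag_size=10, seed=2):
    rng = np.random.default_rng(seed)
    weights = np.zeros(X.shape[1])
    counts = np.zeros(X.shape[1])
    for _ in range(n_bags):
        cols = rng.choice(X.shape[1], size=bag_size, replace=False)
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, cols], y, cv=3).mean()
        weights[cols] += score
        counts[cols] += 1
    return weights / np.maximum(counts, 1)     # average score per variable
```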
Compression and Redirection
Elsewhere we already discussed the necessity and the benefits of separating the transformation of data from the association of observations. If we separate them, we can see that everything we need is an improvement or a preservation of the potential distinguishability of observations. The associative mechanism need not “see” anything that even comes close to the raw data, as long as the resulting association of observations results in a proper derivation of prototypes.8
This opens the possibility for a compression of the observations, e.g. by the technique of random projection. Random projection maps vector spaces onto each other. If the dimensionality of the resulting vector of reduced size remains large enough (100+), then the separability of the vectors is kept intact. The reason is that in a high-dimensional vector space almost all vectors are “orthogonal” to each other. In other words, random projection does not change the structure of the relations between vectors.
R5: reduction by compression
During the first explorative steps one could construct a vector space of d=50, which allows a rather efficient exploration without introducing too much noise. Noise in a normalized vector space essentially means changing the “direction” of the vectors; the effect of changing the length of vectors due to random projection is much less profound. Note also that introducing noise is not a bad thing at all: it helps to avoid overfitting, resulting in more robust models.
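A minimal sketch of R5 by Gaussian random projection; d=50 follows the explorative setting mentioned above, and storing the projection matrix keeps the transformation reusable.

```python
# Sketch of R5: project observations (rows of X) into a d-dimensional
# space with a stored random matrix R, reusable for new observations.
import numpy as np

def random_projection(X, d=50, seed=3):
    rng = np.random.default_rng(seed)
    R = rng.normal(0.0, 1.0 / np.sqrt(d), size=(X.shape[1], d))
    return X @ R, R

# X_small, R = random_projection(X)    # build the model on X_small
# X_new_small = X_new @ R              # apply the same transformation later
```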
If we conceive of this compression by means of random projection as a transformation, we can store the matrix of random numbers as the parameters of that transformation. We can then apply it in any subsequent classification task, i.e. when we apply the model to new observations. Yet, the transformation by random projection destroys the semantic link between the observed variables and the predictivity of the model. Any of the columns after such a compression contains information from more than one of the input variables. In order to support understanding, we have to reconstruct the semantic link.
That’s fortunately not a difficult task, albeit it is only possible if we use an index that allows us to identify the observations even after the transformation. The result of building the model is a collection of groups of records, or indices, respectively. Based on these indices we simply identify those variables which minimize the ratio of the variance within the groups to the variance of the means per variable across the groups. This provides us the weights for the list of all variables, which can be used to drastically reduce the list of input variables for the final steps of modeling.
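A minimal sketch of that reconstruction, assuming the associative mechanism has returned a group index per (indexed) observation:

```python
# Sketch of restoring the semantic link: weight each original variable
# by the ratio of within-group variance to the variance of the group
# means; variables with a small ratio separate the groups strongly.
import numpy as np

def variable_weights(X_raw, group_ids):
    groups = np.unique(group_ids)
    means = np.array([X_raw[group_ids == g].mean(axis=0) for g in groups])
    within = np.array([X_raw[group_ids == g].var(axis=0) for g in groups]).mean(axis=0)
    between = means.var(axis=0)
    ratio = within / np.maximum(between, 1e-12)
    return 1.0 / (1.0 + ratio)                  # higher weight = more relevant
```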
The whole approach could be described as sort of a redirection procedure. We first neglect the linkage between semantics of individual variables and prediction in order to reduce the size of the task, then after having determined the predictivity we restore the neglected link.
This opens the road for an even more radical redirection path. We already mentioned that all we need to preserve through the transformation is the distinguishability of the observations, without distorting the vectors too much. This can be accomplished not only by random projection, though. If we interpret large vectors as a coherent “event”, we can represent them by the coefficients of wavelets, built from the individual observations. The only requirement is that the observations consist of a sufficiently large number of variables, typically n>500.
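A minimal sketch of this wavelet variant; PyWavelets is an assumed dependency, and the wavelet family, decomposition level and number of retained coefficients are illustrative choices.

```python
# Sketch: treat each long observation vector (n > 500) as a signal and
# keep a fixed prefix of its multilevel wavelet coefficients, so that
# coefficient positions stay comparable across observations.
import numpy as np
import pywt

def wavelet_compress(X, wavelet="db4", level=4, keep=64):
    rows = [np.concatenate(pywt.wavedec(row, wavelet, level=level))[:keep]
            for row in X]
    return np.array(rows)
```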
Compression is particularly useful if the properties, i.e. the observed variables, do not bear much semantic value in themselves, as is the case in image analysis, the analysis of raw sensory data, or even the modeling of textual information.
In this small essay we described five ways to reduce large sets of variables, or “assignates” (link), as they are called more appropriately. Since for pragmatic reasons a petitio principii can’t be avoided in attempting such a reduction, mainly due to the inevitable fact that we need a method for it, the reduction should be organized as a process that decreases the uncertainty in assigning a selection probability to the variables.
Regardless of the kind of mechanism used to associate observations into groups and thereby form the prototypes, a separation of transformation and association is mandatory for such a recurrent organization to be possible.
1. Quine [1]
2. see: the abstract model, modeling and category theory, technical aspects of modeling, transforming data;
3. The “Laplacean Demon” refers to Laplace’s belief that if all parts of the universe could be measured, the future development of the universe could be calculated. As such it is the paradigmatic label for determinism. Today we know that even if we could measure everything in the universe with arbitrary precision (which we could not, of course), we still could not pre-calculate the further development of the universe. The universe does not develop, it performs an open evolution.
4. René Thom [2] was the first to explicate the mathematical theory of folds in parameter space, which was dubbed “catastrophe theory” in order to reflect the subject’s experience of moving around in folded parameter spaces.
5. Alan Turing not only laid the foundations of deterministic machines for performing calculations; he was also the first to derive the formal structure of self-organization [3]. Based on these formal insights we can design the degree of creativity of a system. The impossibility to know for sure is the first and basic reason for culture.
6. Note that determining similarity also requires apriori decisions about methods and scales, which need to be confirmed. In other words, we always have to start with a belief.
7. Modeling without a purpose can’t be considered to be modeling at all. Performing a clusterization by means of some algorithm does not create a model until we use it, e.g. in order to get some impression. Yet, as soon as we indeed take a look following some goal, we imply a purpose. Unfortunately, in this case we would be enslaved by the hidden parameters built into the method. Things like unsupervised modeling, or “just clustering”, always imply hidden targets and implicit optimization criteria, determined by the method itself. Hence, such things can’t be regarded as a reasonable move in data analysis.
8. This sheds an interesting light to the issue of “representation”, which we could not follow here.
• [1] W.v.O. Quine, Two Dogmas of Empiricism.
• [2] René Thom, Catastrophe Theory.
• [3] Alan Turing (1952). The Chemical Basis of Morphogenesis.
May 17, 2012
In the late 1980s there was a funny, or strange, if you like, discussion in the German public about a particular influence of the English language on the German language. That discussion not only got teachers in higher education going; even „Der Spiegel“, Germany’s (still) leading weekly news magazine, damned the respective „anglicism“. What I am talking about here concerns the attitude towards „sense“. At that time, a good 20 years ago, it was deemed impossible to say „dies macht Sinn“, engl. „this makes sense“. Speakers of German at that time understood the “make” as “to produce”. Instead, one was told, the correct phrase had to be „dies ergibt Sinn“, in a literal, but impossible translation something like „this yields sense“, or even „dies hat Sinn“, in a literal, but again wrong and impossible translation, „this has sense“. These former ways of building a reference to the notion of „sense“ feel awkward even for many (most?) speakers of the German language today. Nowadays, the English version of the meaning of the phrase has replaced the old German one, and one can even find in the „Spiegel“ the analogue of “making” sense.
Well, the issue here is not just one of historical linguistics or one of style. The differences that we can observe here are deeply buried in the structure of the respective languages. It is hard to say whether such idioms in the German language are due to the history of German Idealism, or whether this particular philosophical stance developed on the basis of the structures in the language. Perhaps a bit of both, one could say from a Wittgensteinian point of view. Anyway, we may and can relate such differences in “contemporary” language to philosophical positions.
It is certainly by no means an exaggeration to conclude that cultures differ significantly in what their languages allow to be expressed. Such a thing as an “exact” translation is not possible beyond trivial texts or a use of language that is very close to physical action. Philosophically, we may assign a scale, or a measure, to describe the differences mentioned above in probabilistic terms, and this measure spans between pragmatism and idealism. This contrast also deeply influences philosophy itself. Any kind of philosophy comes in those two shades (at least), often expressed or denoted by the attributes „continental“ and „anglo-american“. I think these labels just hide the relevant properties. This contrast of course applies to the reading of idealistic or pragmatic philosophers itself. It really makes a difference (1980s German: „it is a difference“) whether a native English-speaking philosopher reads Hegel, or a German native; whether a German native is reading Peirce, or an American; whether Quine conducts research in logic, or Carnap. The story quickly complicates if we take into consideration French philosophy and its relation to Heidegger, or the reading of modern French philosophers in contemporary German-speaking philosophy (which is almost completely absent).1
And it becomes even more complicated, if not complex and chaotic, if we consider the various scientific sub-cultures as particular forms of life, formed by and forming their own languages. In this light it may well seem rather impossible—at least, one feels tempted to think so—to understand Descartes, Leibniz, Aristotle, or even the pre-Socratics, not to speak of the Cro-Magnon culture2, albeit it is probably more appropriate to reframe the concept of understanding. After all, it may itself be infected by idealism.
In the chapters to come you may expect the following sections. As we did before, we will try to go beyond the mere technical description, providing the historical trace and the wider conceptual frame:
A Shift of Perspective
Here, I need this reference to the relativity as it is introduced in—or by—language for highlighting a particular issue. The issue concerns a shift in preference, away from the atom, the point, from matter, substance, essence and metaphysical independence, towards the relation and its dynamic form, the transformation. This shift concerns some basic relationships of the weave that we call “Lebensform” (form of life), including the attitude towards those empiric issues that we will deal with in a technical manner later in this essay, namely the transformation of “data”. There are, of course, almost countless aspects of the topos of transformation, such as evolutionary theory, the issue of development, or, in the more abstract domains, mathematical category theory. In some way or another we have dealt with these earlier (for category theory, for evolutionary theory). These aspects of the concept of transformation will not play a role here.
In philosophical terms, the described difference between the German and the English language, and the change of the respective German idiom, marks the transition from idealism to pragmatism. This corresponds to the transition from a philosophy of primal identity to one in which difference is transcendental. In the same vein, we could also set up the contrast between logical atomism and the event as philosophical topoi, or between favoring existential approaches and ontology on the one side and epistemology on the other. Even more remarkably, we also find an opposing orientation regarding time. While idealism, materialism, positivism or existentialism (and all similar attitudes) are heading backwards in time, and only backwards, pragmatism and, more generally, a philosophy of events and transformation is heading forward, and only forward. It marks the difference between settlement (in Heideggerian terms „Fest-Stellen“, in English something like „fixing at a location“, putting something into the „Gestell“3) and anticipation. Settlements are reflected by laws of nature in which time does not—and shall not—play a significant role. All physical laws, and almost all theories in contemporary physics, are symmetric with respect to time. The “law perspective” blinds us to the concept of context, quite obviously so. Yet, being blinded against context also disables any adequate reference to information.
In contrast, within a framework that is truly based on the primacy of interpretation, and thus follows the anticipatory paradigm, it does not make sense to talk about “laws”. Notably, issues like the “problem” of induction exist only within the static perspective of idealism and positivism.
It is important to understand that these attitudes are far from being just “academic” distinctions. There are profound effects on the level of empiric activity, concerning how data are handled and by which kinds of methods. Furthermore, the two attitudes can’t be “mixed” once one of them has been chosen. While we may switch between them sequentially, across time or across domains, we can’t practice them synchronously, as the whole setup of the form of life is affected. Of course, we do not want to rate one of them as the “best”; we just want to make clear that this basic choice has particular consequences.
Towards the Relational Perspective
As late as 1991, Robert Rosen’s work on „Relational Biology“ was anything but obvious [1]. As a mathematician, Rosen was interested in the problem of finding a proper way to represent living systems by formal means. As a result of this research, he strongly advocated the “relational” perspective. He identifies Nicolas Rashevsky as its originator, who first wrote about it around 1935. It really sounds strange that relational biology had to be (re-)invented. What else than relations could be important in biology? Yet, still today atomistic thinking is quite abundant; think alone of the reductionist approaches in genetics (which fortunately got seriously attacked meanwhile4). Or think about the still prevailing helplessness of various domains to conceive appropriately of complexity (see our discussion of this here). Being aware of relations means that the world is not conceived as made from items that are described by inputs and outputs with some analytics, or say deterministics, in between. Only of such items could one say that they “function”. The relational perspective abolishes the possibility of reducing real “systems” to “functions”.
As is already indicated by the appearance of Rashevsky, there is, of course, a historical trace for this shift, a kind of soil emerging from intellectual sediments.5 While the 19th century could be considered as characterized by the topos of the population (of atoms)—cf. the line from Laplace and Carnot to Darwin and Boltzmann—we can observe a spawning awareness of the relation in the 20th century. Wittgenstein’s Tractatus started to oppose Frege and has always been in stark contrast to logical positivism; it was then accompanied by Zermelo (“axiom” of choice6), Rashevsky (relational biology), Turing (morphogenesis in complex systems), McLuhan (media theory), String Theory in physics, Foucault (field of propositions), and Deleuze (transcendental difference). Comparing Habermas and Luhmann on the one side—we may label their position idealistic functionalism—with Sellars and Brandom on the other—who have been digging into the pragmatics of the relation as it is present in humans and their culture—we find the same kind of difference. We could also include Gestalt psychology as a kind of precursor to the party of “relationalists,” mathematical category theory (as opposed to set theory), and some strains of the behavioral sciences. Researchers like Ekman & Scherer (FACS), Kummer (sociality expressed as dynamics in relative positions), or Colmenares (play) focused on the relation itself, going far beyond the implicit reference to the relation as a secondary quality. We may add David Shane7 for architecture and Clarke or Latour8 for sociology. Of course, there are many, many other proponents who helped to grow the topos of the relation; yet, even without a detailed study we may guess that, compared to the main streams, they still remain comparatively few.
These differences can hardly be overestimated in the field of the information sciences, computer sciences, data analysis, or machine-based learning and episteme. It makes a great difference whether one bases the design of an architecture, or the design of use, on the concept of interfaces, most often defined as a location of full control, notably in both directions, or on the concept of behavioral surfaces.9 In the field of empiric activities, that is, modeling in its wide sense, it yields very different setups and consequences whether we start with the assumption of independence between our observables, or between our observations, or whether we start with no assumptions about the dependency between observables or observations, respectively. The latter is clearly the preferable choice in terms of intellectual soundness. Even if we stick to the first of the two alternatives, we should NOT use methods that work only if that assumption is satisfied. (It is some kind of a mystery that people believe that doing so could be called science.) The reason is pretty simple. We do not know anything about the dependency structures in the data before we have finished modeling. It would inevitably result in a petitio principii if we put “independence” into the analysis, wrapped into the properties of the methods. We would just find… guess what. After destroying facts—in the Wittgensteinian sense understood as relationalities—into empiristic dust, we will not be able to find any meaningful relation at all.
Positioning Transformation (again)
Similarly, if we treat data as a “true” mapping of an outside “reality”, as “givens” that eventually are distorted a bit by more or less noise, we will never find multiplicity in the representations that we could derive from modeling, simply because it would contradict the prejudice. We also would not recognize all the possible roles of transformation in modeling. A measurement device acts as a filter10, and as such it does not differ from any analytic transformation of the data. From the perspective of the associative part of modeling, where the data are mapped to desired outcomes or decisions, “raw” data are simply not distinguishable from “transformed” data, unless the treatment itself were encoded as data as well. Correspondingly, we may consider any data transformation by algorithmic means as an additional measurement device, responding on its own to particular qualities in the observations. It is this equivalence that allows for the change from the linear to a circular and even a self-referential arrangement of empiric activities. Long-term adaptation, I would say any adaptation at all, is based on such a circular arrangement. The only thing we had to change in order to gain these new possibilities was to drop the “passivist” representationalist realism11.
Usually, the transformation of data is considered as an issue that is a function of discernibility as an abstract property of data (yet, people don’t talk like that; it’s our way of speaking here). Today, the respective aphorism as coined by Bateson has already become proverbial, despite its simplistic shape: information is the difference that makes the difference. Depending on the context in which data are handled, this potential discernibility is addressed in different ways. Let us distinguish three such contexts: (i) Data Warehousing, (ii) statistics, and (iii) learning as an epistemic activity.
In Data Warehousing one is usually faced with a large range of different data sources and data sinks, or consumers, where the difference between these sources and sinks simply relates to the different technologies and formats of databases. The warehousing tool should “transform” the data such that they can be used in the intended manner on the side of the sinks. The storage of the raw data as measured from the business processes, and the efforts to provide any view onto these data, have to satisfy two conditions (in the current paradigm). It has to be neutral—data should not be altered beyond the correction of obvious errors—and its performance, simply in terms of speed, has to be scalable, if not independent from the data load. The activities in Data Warehousing are often circumscribed as “Extract, Transform, Load”, abbreviated ETL. There are many large software solutions for this task, commercial ones and open source (e.g. Talend). The effect of a DWH is to disclose the potential for an arbitrary and quickly served perspective onto the data, where “perspective” means just re-arranged columns and records from the database. Except for cleaning and simple arithmetic operations, the individual bits of data themselves remain largely unchanged.
In statistics, transformations are applied in order to satisfy the conditions for particular methods. In other words, the data are changed in order to enhance discernibility. Most popular is the log-transformation, which shifts the mode of a distribution towards the larger values. Two different small values that consequently are located close to each other are separated better after a log-transformation; hence it is feasible to apply the log-transformation to skewed data whose mode sits at small values. Other transformations aim at a particular distribution, such as the z-score, or Fisher’s z-transformation. Interestingly, there is a further class of powerful transformations that is usually not conceived as such: residuals. Residuals are defined as the deviation of the data from a particular model. In linear regression it is the vertical distance to the regression line (its square being what is summed and minimized).
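To make the effect tangible, here is a minimal sketch (Python with numpy; the data and the variable names are ours) of the two transformations just mentioned:

```python
import numpy as np

# two small values close together on the raw scale, plus a long tail
x = np.array([0.01, 0.02, 5.0, 10.0, 120.0])

# log transform: equal ratios become equal steps, so the step from
# 0.01 to 0.02 is now as large as the step from 5 to 10
x_log = np.log10(x)

# z-score: rescale to zero mean and unit variance
x_z = (x - x.mean()) / x.std()
```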
The concept, however, can be extended to those data which do not “follow” the investigated model. The analysis of residuals has two aspects, a formal one and an informal one. Formally, it is used as a complex test of whether the investigated model fits or not. The residuals should not show any evident “structure”. That’s it. There is no institutional way back to the level of the investigated model; there are no rules about that which could be negotiated in a yet-to-be-established community. The statistical framework is a linear one, which could be seen as a heritage of positivism. It is explicitly forbidden to “optimize” a correlation by multiple actualization. Yet, informally the residuals may give hints on how to change the basic idea as represented by the model. Here we find a circular setup, where the strategy is to remove any rule-based regularity, i.e. discernibility, from the data.
The effect of this circular arrangement takes place completely within the practicing human, as a kind of refinement. It can’t be found anywhere in the methodological procedure itself in a rule-based form. This brings us to the third area, epistemic learning.
In epistemic learning, any of the potentially significant signals should be rendered in such a way as to allow for an optimized mapping towards a registered outcome. Such outcomes often come as binary values, or as a small group of ordinal values in the case of multi-constraint, multi-target optimization. In epistemic learning we thus find the separation of transformation and association in its most prominent form, despite the fact that data warehousing and statistics are also intended to be used for enhancing decisions. Yet, their linearity simply does not allow for any kind of institutionalized learning.
This arbitrary restriction to the linear methodological approach in formal epistemic activities results in two related, quite unfavorable effects: first, the shamanism of “data exploration”, and second, the infamous hell of methods. One can indeed find thousands, if not tens of thousands, of research or engineering articles trying to justify a particular new method as the most appropriate one for a particular purpose. These methods themselves, however, are never identified as a „transformation“. Authors are all struggling for the “best” method, the whole community neglecting the possibility—and the potential—of combining different methods after shaping them as transformations.
The laborious and never-ending training necessary to choose from the huge number of possible methods is then called methodology… The situation is almost paradoxical. First, the methods are claimed to tell something about the world, although this is not possible at all, not least because those methods are analytic. It is an idealistic hope, one which had already been abolished by Hume. Above all, only analytic methods are considered to be scientific. Then, given the large population of methods, the choice of a particular one becomes aleatory, which renders the whole activity into a deeply non-scientific one. Additionally, it is governed by the features of some software, or the skills of the user of such software, not by a conceptual stance.
Now remember that any method is also a specific filter. Obviously, nothing can be known about the beneficiality of a particular method before the prediction that is based on the respective model has been validated. This simple insight renders “data exploration” meaningless. It can only play its role within linear empirical frameworks, which are inappropriate anyway. Data exploration is suggested to be done “intuitively”, often using methods of visualization. Yet, those methods are severely restricted with regard to the graspable dimensionality. More than 6 to 8 dimensions can’t be “visualized” at once. Compare this to the 2^n (n: number of variables) possible models and you immediately see the problem. Otherwise, the only effect of visualization is just a primitive form of clustering. Additionally, visual inputs are images, above all, and as images they can’t play a well-defined epistemological role.12
Complementary to the non-concept of “exploring” data13, and equally misconceived, is the notion of “preparing” data. At least, it must be rated as misconceived as far as it comprises transformations beyond error correction and arranging data into tables. The reason is the same: we can’t know whether a particular “cleansing” will enhance the predictive power of the model, in other words, whether it comprises potential information that supports the intended discernibility, before the model has been built. There is no possibility to decide which variables to include before having finished the modeling. In some contexts the information accessible through a particular variable could be relevant or even important. Yet, if we conceive of transformations as preliminary hypotheses, we can’t call them “preparation” any more. “Preparation” for what? For proving the petitio principii? Certainly the peak of all preparatory nonsense is the “imputation” of missing values.
Dorian Pyle [11] calls such introduced variables “pseudo variables”; others call them “latent” or even “hidden” variables.14 Any of these labels is inappropriate, since the transformation is nothing else than a measurement device. Introduced variables are just variables, nothing else.
Indeed, these labels are reliable markers: whenever you meet a book or article dealing with data exploration, data preparation, the “problem” of selecting a method, or, likewise, selecting an architecture within a meta-method like Artificial Neural Networks, you can know for sure that the author is not really interested in learning and reliable predictions. (Or that he or she is not able to distinguish analysis from construction.)
In epistemic learning the handling of residuals is somewhat inverse to their treatment in statistics, again as a result of the conceptual difference between the linear and the circular approach. In statistics one tries to prove that the model, say: the transformation, removes all the structure from the data, such that the remaining variation is pure white noise. Unfortunately, there are two drawbacks to this. First, one has to define the model before removing the noise and before checking the predictive power. Secondly, the test for any possibly remaining structure again takes place within the atomistic framework.
In learning we are interested in the opposite. We are looking for such transformations as remove the noise in a multi-variate manner, such that the signal-to-noise ratio is strongly enhanced, perhaps even up to the proto-symbolic level. Only after the de-noising achieved by the learning process, that is, after a successful validation of the predictive model, is the structure then described for the (almost) noise-free data segment15, as an expression that is complementary to the predictive model.
In our opinion, an appropriate approach would actualize as an instance of epistemic learning that is characterized by the following points (a minimal sketch in code follows the list):
• – conceiving any method as transformation;
• – conceiving measurement as an instance of transformation;
• – conceiving any kind of transformation as a hypothesis about the “space of expressibility” (see next section), or, similarly, the finally selected model;
• – the separation of transformation and association;
• – the circular arrangement of transformation and association.
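The following sketch condenses these points; it assumes nothing about the concrete methods. Here, fit and validate stand for any associative mechanism and its validation on held-out data, and every name is ours, not an existing API:

```python
def circular_modeling(data, target, candidate_transforms, fit, validate):
    """Circular arrangement of transformation and association (a sketch).

    Each transformation is treated as a hypothesis: it adds derived
    variables ("additional measurement devices") and survives only if the
    validated predictive power of the resulting model improves.
    """
    model = fit(data, target)
    best = validate(model, data, target)
    accepted = []
    for t in candidate_transforms:       # could also be sampled or evolved
        extended = t(data)               # transformation = added measurement
        candidate = fit(extended, target)
        score = validate(candidate, extended, target)
        if score > best:                 # hypothesis corroborated: keep it
            data, model, best = extended, candidate, score
            accepted.append(t)
    return model, accepted
```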
The Abstract Perspective
We now have to take a brief look at the mechanics of transformations in the domain of epistemic activities.16 For doing this, we need a proper perspective. As such we choose the notion of space. Yet, we would like to emphasize that this space is not necessarily Euclidean, i.e. flat, or open like the Cartesian space, i.e. with quantities running to infinity. Also, dimensions need not be thought of as being “independent”, i.e. orthogonal to each other. Distance measures need to be defined only locally, yet without implying ideal continuity. There might be a certain kind of “graininess”, defined by a distance D, below which the space is not defined. The space may even contain “bubbles” of lower dimensionality. So, it is indeed a very general notion of “space”.
Observations shall be represented as “points” in this space. Since these “points” are not independent from the efforts of the observer, they are not dimensionless. To put it more precisely, they are like small “clouds”, best described as probability densities for “finding” a particular observation. Of course, this “finding” is kind of an inextricable mixture of “finding” and “constructing”. It does not make much sense to distinguish both on the level of such cloudy points. Note that the cloudiness is not a problem of accuracy in measurement! A posteriori, that is, subsequent to introducing an irreversible move17, such a cloud could also be interpreted as an open set of the provoked observation and virtual observations. It should be clear by now that such a concept of space is very different from the Euclidean space that nowadays serves as the base concept for any statistics or data mining. If you think that conceiving of such a space is unnecessary or even nonsense, then think about quantum physics. In quantum physics we are also faced with the breakdown of the separation of observer and observable, and there one ended up quite precisely with spaces as we described them above. These spaces are then handled by various means of renormalization methods.18 In contrast to the abstract yet still physical space of quantum theory, our space need not even contain an “origin”. Elsewhere we called such a space an aspectional space.
Now let us take the important step of becoming interested in only a subset of these observations. Assume we want to select a very particular set of observations—they are still clouds of probabilities, made from virtual observations—by means of prediction. This selection can be conceived in two different ways. The first way is the one that is commonly applied; it consists of the reconstruction of a “path”. Since in the contemporary epistemic form of life of “data analysts” Cartesian spaces are used almost exclusively, all these selection paths start from the origin of the coordinate system. The endpoint of the path is the point of interest, the “outcome” that should be predicted. As a result, one first gets a mapping function from predictor variables to the outcome variable. All possible mappings form the space of mappings, which is a category in the mathematical sense.
The alternative view does not construct such a path within a fixed coordinate system, i.e. within a space with fixed properties. Quite to the contrary, the space itself gets warped and transformed until very simple figures appear, which represent the various subsets of observations according to the focused quality.
Imagine an ordinary, small, blown-up balloon. Next, imagine a grid in the space enclosed by the balloon’s hull, made by very thin threads. These threads shall represent the space itself. Of course, in our example the space is 3d, but it is not limited to this case. Now think of two kinds of small pearls attached to the threads all over the grid inside the balloon, blue ones and red ones. It shall be the red ones in which we are interested. The question now is what can we do to separate the blue ones from the red ones?
The way to proceed is pretty obvious, though the solution itself may be difficult to achieve. What we can try is to warp and to twist, to stretch, to wring and to fold the balloon in such a way that the blue pearls and the red pearls separate as nicely as possible. In order to purify the groups we may even consider compressing some regions of the space inside the balloon such that they turn into singularities. After all this work—and beware, it is hard work!—we introduce a new grid of threads into the distorted space and dissolve the old ones. All pearls automatically attach to the threads closest nearby, stabilizing the new space. Again, conceiving of such a space may seem weird, but again we can find a close relative in physics, the Einsteinian space-time. Gravitation effectively warps that space, though in a continuous manner. There are famous empirical proofs of that warping of physical space-time.19
Analytically, these two perspectives, path reconstruction on the one hand and space warping on the other, are (almost) equivalent. The perspective of space warping, however, offers a benefit that should not be underestimated. We arrive at a new space for which we can define its own properties, and in which we again can define measures that are different from those possible in the original space. Path reconstruction does not offer such a “derived space”. Hence, once the path is reconstructed, the story stops. It is a linear story. Our proposal thus is to change perspective.
Warping the space of measurability and expressibility is an operation that inverts the generation of cusp catastrophes20 (see Figure 1 below). Thus it transcends the cusp catastrophes. In the perspective of path reconstruction one has to avoid the phenomena of hysteresis and cusps altogether, hence losing a lot of information about the observed source of data.
In the Cartesian space, and the path reconstruction methodology related to it, all operations are analytic, that is, organized as symbolic rewriting. The reason for this is the necessity for the paths to remain continuous and closed. In contrast, space warping can be applied locally. Warping spaces when dealing with data is not an exotic or rare activity at all. It happens all the time. We know it even from (simple) mathematics, when we define different functions, including the empty function, for different sets of input parameter values.
The main consequence of changing the perspective from path reconstruction to space warping is an enlargement of the set of possible expressions. We can do more without the need to call it “heuristics”. Our guess is that any serious theory of data and measurement must follow the opened route of space warping, if this theory of data is to avoid positivistic reductionism. Most likely, such a theory will be a kind of renormalization theory in a connected, relativistic data space.
Revitalizing Punch Cards and Stacks
In this section we will introduce the outline of a tool that allows one to follow the circular approach in epistemic activities. Basically, this tool is about organizing arbitrary transformations. While for analytic (mathematical) expressions there are expression interpreters, it is also clear that analytic expressions form only a subset of the set of all possible transformations, even if we consider the fact that many expression interpreters have grown into some kind of programming language, or script language. Indeed, Java contains an interpreting engine for JavaScript by default, and there are several quite popular ones for mathematical purposes. One could also conceive of mathematical packages like Octave (open source), MatLab or Mathematica (both commercial) as such expression interpreters, even as their most recent versions can do much, much more. Yet, MatLab & Co. are not quite suitable as a platform for general-purpose data transformation.
The structural metaphor that proved to be as powerful as it was sustainable, for more than 10 years now, is the combination of the workbench with the punch card stack.
Image 1: A Punched Card for feeding data into a computer
Any particular method, mathematical expression or arbitrary computational procedure resulting in a transformation of the original data is conceived as a “punch card”. This provides a proper modularization, and hence standardization. Actually, the role of these “functional compartments” is extremely standardized, at least enough to define an interface for plugins. Like the ancient punch cards made from paper, each card represents a more or less fixed functionality. Of course, this functionality may be defined by a plugin that itself connects to MatLab…
Also, again like the ancient punch cards, the virtualized versions can be stacked. For instance, we first put the treatment for missing values onto the stack, simply to ensure that all NULLs are written as -1. The next card then determines minimum and maximum in order to provide the data for linear normalization, i.e. the mapping of all values into the interval [0..1]. Then we add a card for compressing the “fat tail” of the distribution of values in a particular variable. Alternatively, we may use a card to split the “fat tail” off into a new variable! Finally, we apply the card=plugin for normalizing the data to the original and the new data column.
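A minimal sketch of such a card stack, in Python and with invented class names (the plugin interface of the real tool will look different), might read like this:

```python
class Card:
    """One 'punch card': a parameterized, standardized transformation plugin."""
    def apply(self, column):
        raise NotImplementedError

class ReplaceMissing(Card):
    """Treatment for missing values: write all NULLs as a fixed fill value."""
    def __init__(self, fill=-1.0):
        self.fill = fill
    def apply(self, column):
        return [self.fill if v is None else v for v in column]

class LinearNormalize(Card):
    """Determine min/max, then map all values into the interval [0..1]."""
    def apply(self, column):
        lo, hi = min(column), max(column)
        span = (hi - lo) or 1.0
        return [(v - lo) / span for v in column]

class Stack:
    """A stack of cards, executed top to bottom on one variable."""
    def __init__(self, cards):
        self.cards = cards
    def run(self, column):
        for card in self.cards:
            column = card.apply(column)
        return column

stack = Stack([ReplaceMissing(fill=-1.0), LinearNormalize()])
values = stack.run([0.5, None, 3.2, 7.9])
```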
I think you got the idea. Such a stack is not only maintained for each of the variables; it is created on the fly according to the needs, as these get detected by simple rules. You may think of the cards also as the set of rules that describe the capabilities of agents, which constantly check the data as to whether they could apply their rules. You may also think of these stacks as a device that works like a tailored distillation column, as it is used for fractional distillation in petro-chemistry.
Image 2: Some industrial fractional distillation columns for processing mineral oil. Depending on the number of distillation steps, different products result.
These stacks of parameterized procedures and expressions represent a generally programmable computer, or more precisely, an operating system, quite similar to a spreadsheet, albeit the purpose of the latter, and hence its functionality, actualizes in a different form. The whole thing may even be realized as a language! In that case, one would not need a graphical user interface anymore.
The effect of organizing the transformation of data in this way, by means of plugins that follow the metaphor of the “punch card stack”, is dramatic. Introducing transformations and testing them can be automated. At this point we should mention the natural ally of the transformation workbench, the maximum-likelihood estimation of the most promising transformations that combine just two or three variables into a new one. All three parts, the transformation stack engine, the dependency explorer, and the evolutionarily optimized associative engine (which is able to create a preference weighting for the variables), can be put together in such a way that finding the “optimal” model can be run in a fully automated manner. (Meanwhile the SomFluid package has grown into a stage where it can accomplish this… download it here, but you still need some technical expertise to get it running.)
The approach of the “transformation stack engine” is not just applicable to tabular data, of course. Given a set of proper plugins, it can be used as a digester for large sets of images or time series as well (see below).
Transforming Data
In this section we now will take a more practical and pragmatic perspective. Actually, we will describe some of the most useful transformations, including their parameters. We do so, because even prominent books about “data mining” have been handling the issue of transforming data in a mistaken or at least seriously misleading manner.21,22
If we consider the goal of the transformation of numerical data—increasing the discernibility of assignated observations—we will recognize that we may identify a rather limited number of types of such transformations, even if we consider the space of possible analytic functions which combine two (or three) variables.
We will organize the discussion of the transformations into three sub-sections, whose subjects are of increasing complexity. Hence, we will start with the (ordinary) table of data.
Tabular Data
Tables may comprise numerical data or strings of characters. In its general form, a table may even contain whole texts, a complete book in any of the cells of a column (but see the section about unstructured data below!). If we want to access the information carried by the string data, we have, sooner rather than later, to translate them into numbers. Unlike numbers, string data, and the relations between data points made from string data, must be interpreted. As a consequence, there are always several, if not many, different possibilities for that representation. Besides referring to the actual semantics of the strings, which could be expressed by means of the indices of some preference orders, there are also two important techniques of automatic scaling available, which we will describe below.
Besides string data, dates are a further multi-dimensional category of data. A date encodes not only a serial number relative to some (almost) arbitrarily chosen base date, which we can use to express the age of the item represented by the observation. We have, of course, day of week, day of month, number of week, number of month, and not to forget season as an approximate class. It depends a bit on the domain whether these aspects play any role at all. Yet, think about the rhythms in the city or on the stock markets across the week, or the “black Monday/Tuesday/Friday effect” in production plants or hospitals, and it is clear that we usually have to represent the single date value by several “informational extracts”.
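A small sketch of such an unfolding (Python’s standard datetime module; the chosen base date and the crude season encoding are just assumptions):

```python
from datetime import date

def date_features(d: date, base: date = date(2000, 1, 1)) -> dict:
    """Unfold a single date value into several 'informational extracts'."""
    return {
        "serial":       (d - base).days,     # age relative to a chosen base date
        "day_of_week":  d.isoweekday(),      # 1 = Monday ... 7 = Sunday
        "day_of_month": d.day,
        "week":         d.isocalendar()[1],
        "month":        d.month,
        "season":       (d.month % 12) // 3, # 0 winter ... 3 autumn (north)
    }

date_features(date(2012, 5, 17))
```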
A last class of data types that we have to distinguish are time values. We already mentioned the periodicity in other aspects of the calendar. In which pair of time values do we find a closer similarity, T1(23:41, 0:05) or T2(8:58, 15:17)? With any naive distance measure, the values of T2 are evaluated as much more similar than those in T1, although T1 spans only 24 minutes across midnight. What we have to do is to set a flag for “circularity” in order to calculate the time distances correctly.
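Such a circular distance could be computed, for instance, like this (times given as fractional hours; the function name is ours):

```python
def time_distance(t1: float, t2: float, circular: bool = True) -> float:
    """Distance between two times of day, given as fractional hours in [0, 24)."""
    d = abs(t1 - t2)
    return min(d, 24.0 - d) if circular else d

time_distance(23 + 41/60, 0 + 5/60)   # T1: 0.4 h across midnight, not 23.6 h
time_distance(8 + 58/60, 15 + 17/60)  # T2: about 6.3 h
```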
Numerical Data: Numbers, just Numbers?
Numerical data are data for which, in principle, any value from within a particular interval could be observed. If such data are symmetrically (normally) distributed, then we have little reason to suspect that there is something interesting within this sample of values. As soon as the distribution becomes asymmetrical, it starts to become interesting. We may observe “fat tails” (large values are over-represented), or multi-modal distributions. In both cases we could suspect that there are at least two different processes, one dominating the other differentially across the peaks. So we should split the variable into two (we call this “deciling”) and, ceteris paribus, check out the effect on the predictive power of the model. Typically one splits the values at the minimum between the peaks, but it is also possible to implement an overlap, where some records are present in both of the new variables.
Long tails indicate some aberrant behavior of the items represented by the respective records or, as in medicine, even pathological contexts. Strongly left-skewed distributions often indicate organizational or institutional influences. Here we could compress the long tail, log-shift, and then split the variable, that is, decile it into two.21
In some domains, like finance, we find special values at which symmetry breaks. For ordinary money values, 0 is such a value. We know in advance that we have to split the variable into two, because the semantic and structural difference between +50$ and -75$ is much bigger than that between 150$ and 2500$… probably. As always, we transform the variable such that we create additional variables as a kind of hypothesis, for which we have to evaluate the (positive) contribution to the predictive power of the model.
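A minimal sketch of such a split (the breakpoint of 0 for money values follows the example above; everything else, including the function name, is our choice):

```python
def split_at(values, breakpoint=0.0):
    """Split one variable into two at a suspected symmetry break (a sketch).

    Each derived column is a hypothesis; whether it is kept is decided by
    its contribution to the predictive power of the validated model.
    """
    below = [v if v < breakpoint else None for v in values]
    above = [v if v >= breakpoint else None for v in values]
    return below, above

debit, credit = split_at([150.0, -75.0, 2500.0, 50.0], breakpoint=0.0)
```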
In finance, but also in medicine, and more generally in any system that is able to develop meta-stable regions, we have to expect such points (or regions) with an increased probability of breaking symmetry, and hence a strong semantic or structural difference. René Thom first described similar phenomena in the theory that he labeled “catastrophe theory”. In 3d you can easily think of a cusp catastrophe as a hysteresis in the x-z direction that is, however, gradually smoothed out in the y-direction.
Figure 1: Visualization of folds in parameters space, leading to catastrophes and hystereses.
In finance we are faced with a whole culture of rule-following. The majority of market analysts use the same tools, for instance “stochastics,” or a particularly parameterized MACD, for deriving “signals”, that is, indicators of points of action. The financial industries have been hiring a lot of physicists, and this population sticks to largely the same mathematics, such as GARCH, combined with Monte-Carlo simulations. Approaches like fractal geometry are still regarded as exotic.23
Or think about option prices, where we find several symmetry breaks determined by the contract. These points have to be represented adequately in dedicated, that means derived, variables. Again, we can’t emphasize it enough, we HAVE to do so as a kind of performative hypothesizing. The transformation of data by creating new variables is, so to speak, the low-level operationalization of what later may grow into a scientific hypothesis. Creating new variables poses serious problems for most methods, which may count as a reason why many people don’t follow this approach. Yet, for our approach it is not a problem, definitely not.
In medicine we often find “norm values”. Potassium in blood serum may take any value within a particular range without reflecting any physiological problem… if the person is healthy. If there are other risk factors, the story may be a different one. The ratio of potassium and glucose in serum provides an example of a significant marker… if the person already has heart problems. By means of such risk markers we can introduce domain-specific knowledge. And that’s actually a good message, since we can identify our own “markers” and represent them as transformations. The consequence is pretty clear: a system that is supposed to “learn” needs a suitable repository for storing and handling such markers, represented as a relational system (graph).
Let us return to the norm ranges briefly. A small difference outside the norm range should be rated much more strongly than one within the norm range. This leads to weight functions like those shown in the next figure, or more or less similar ones. For a certain range of input values, the norm range, we leave the values unchanged: the output weight equals 1. Outside of this range we transform them in a way that emphasizes the difference to the respective boundary value of the norm range. This could be done in different ways.
Figure 2: Examples for output weight configurations in norm-range transformation
Actually, this rationale of the norm range can be applied to any numerical data. As an estimate of the norm range one could use the 80% quantile range, centered around the median, i.e. realized as the +/-40% quantiles. On the level of model selection, this will result in a particular sensitivity for multi-dimensional outliers, notably before any apriori criterion of what an outlier should be has been defined.
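One possible realization of such a norm-range transformation (Python with numpy; the linear emphasis factor is an assumption, just one of the “different ways” mentioned above):

```python
import numpy as np

def norm_range_transform(values, lo_q=0.10, hi_q=0.90, emphasis=2.0):
    """Leave values inside the estimated norm range unchanged (weight 1),
    and emphasize the distance to the nearest boundary outside of it."""
    v = np.asarray(values, dtype=float)
    lo, hi = np.quantile(v, [lo_q, hi_q])   # ~80% range centered on the median
    out = v.copy()
    out[v < lo] = lo - emphasis * (lo - v[v < lo])
    out[v > hi] = hi + emphasis * (v[v > hi] - hi)
    return out
```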
From Strings to Orders to Numbers
Many data come as some kind of description or label. Such data are called nominal data. Think for instance of the prescribed drugs in a group of patients included in an investigation of risk factors for a disease, or think of the name or the type of restaurants in an urbanological/urbanistic investigation. Nominal data are quite frequent in behavioral, organizational or social data, that is, in contexts that are established mainly on a symbolic level.
Measuring only on the nominal scale should be avoided, yet sometimes it is not possible to circumvent it. It can be avoided at least partially by including further properties that can be represented by numerical values. For instance, instead of using only the names of cities in a data set, one can use the geographical location or the number of inhabitants, or, when referring to places within a city, one can use descriptors that cover some properties of the respective area, such as density of traffic, distance to similar locations, price level of consumer goods, economic structure, etc. If a direct measurement is not possible, estimates can do the job as well, provided the certainty of the estimate is expressed. The certainty can then be used to generate surrogate data. If the fine-grained measurement creates further nominal variables, they could be combined to form a scale. Such enrichment is almost always possible, irrespective of the domain. One should keep in mind, however, that any such enrichment is nothing else than a hypothesis.
Sometimes, data on the nominal level, technically a string of alphanumerical characters, already contain valuable information. For instance, they may contain numerical values, as in the names of cars. If we deal with things like the names of molecules, where these names often come as compounds, reflecting the fact that molecules themselves are compounds, we can calculate the distance of each name to a virtual “average name” by applying a technique called “random graph”. Of course, in the case of molecules we would have a lot of properties available that can be expressed as numerical values.
Ordinal data are closely related to nominal data. Essentially, there are two flavors of them. In the least valuable case the numbers do not express a numerical value; the cipher is just used as a kind of letter, indicating that there is a set of sortable items. Sometimes, the values of an ordinal scale represent some kind of similarity. While this variant is more valuable, it still can be misleading, because the similarity may not scale isodistantly with the numerical values of the ciphers. Undeniably, there is still a rest of a “name” in it.
We are now going to describe some transformations to deal with data from low-level scales.
The least action we have to apply to nominal data is a basic form of encoding: we use integer values instead of the names. The next, though only slightly better, level would be to reflect the frequency of the encoded item in the ordinal value. One would, for instance, not encode the name into an arbitrary integer value, but into the log of its frequency. A much better alternative, however, is provided by the descendants of correspondence analysis. These are called Optimal Scaling and the Relative Risk Weight. The drawback of these methods is that some information about the predicted variable is necessary. In the context of modeling, by which we always understand target-oriented modeling—as opposed to associative storage24—we usually find such information, so the drawback is not too severe.
First to optimal scaling (OSC). Imagine a variable, or “assignate” as we prefer to call it25, which is scaled on the nominal or on a low ordinal scale. Let us assume that there are just three different names or values. As already mentioned, we assume that a purpose has been selected, and that hence a target variable, as its operationalization, is available. We could then set up the following table (the figures denote frequencies).
Table 1: Summary table derived from a hypothetical example data set. av(i) denote three nominally scaled assignates; the table cross-tabulates their frequencies for the focused target tf and its complement, with marginal sums for rows and columns.
From these figures we can calculate the new scale values; for the assignate av1 this yields the optimal scaling value that is contrasted with other encodings in Table 2.
Table 2: Various encodings contrasted per assignate: the literal encoding, the normalized log(freq), the optimal scaling (OSC) value, and the normalized OSC.
Using these values we could replace any occurrence of the original nominal (ordinal) values by the scaled values. Alternatively—or better, additionally—we could sum up all values for each observation (record), thereby collapsing the nominally scaled assignates into a single numerically scaled one.
Now to the RRW. Imagine a set of observations {o(i)}, where each observation is described by a set of assignates a(i). Also, let us assume that some of these assignates are on the binary level, that is, the presence of this quality in the observation is encoded by “1”, its absence by “0”. This usually results in sparsely filled (regions of) the data table. Depending on the size of the “alphabet”, even more than 99.9% of all values may simply be equal to 0. Such data cannot be grouped in a reasonable manner. Additionally, if there are further assignates in the table that are not binary encoded, the information in the binary variables would be neglected almost completely without a rescaling like the RRW.
Assuming a table made from binary assignates av(i), which could be summarized into Table 1 above, the formula yields the RRW factors for the three binary scaled assignates shown in Table 3. As you can see, the RRW uses the marginal from the rows, while optimal scaling uses the marginal from the columns; thus, the RRW uses slightly more information.
Table 3: Relative Risk Weights (RRW) for the frequency data shown in Table 1, listing the raw RRW(i) and the normalized RRW per assignate.
The ranking of the av(i) based on RRW is equal to that returned by OSC; even the normalized score values are quite similar. Yet, while in the case of nominal variables assignates are usually not collapsed, this is always done in the case of binary variables.
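Since the original formulas are given as figures, the following sketch shows only one plausible reading of the two encodings, consistent with the remark that OSC uses the column marginal and the RRW the row marginal; the counts and all names are hypothetical:

```python
def osc_scores(freq):
    """Optimal scaling, one plausible reading: frequency of the focused
    outcome per assignate, relative to the column marginal of that outcome."""
    col_sum = sum(f for f, _ in freq.values())
    return {av: f / col_sum for av, (f, _) in freq.items()}

def rrw_scores(freq):
    """Relative risk weight, one plausible reading: the rate of the focused
    outcome within each assignate (row marginal), relative to the base rate."""
    n_focused = sum(f for f, _ in freq.values())
    n_total = sum(f + g for f, g in freq.values())
    base_rate = n_focused / n_total
    return {av: (f / (f + g)) / base_rate for av, (f, g) in freq.items()}

# hypothetical counts per assignate av(i): (focused target tf, complement)
freq = {"av1": (30, 70), "av2": (10, 90), "av3": (60, 40)}
osc = osc_scores(freq)   # uses the column marginal
rrw = rrw_scores(freq)   # uses the row marginal
```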
So, let us summarize these simple methods in the following table.
Table 4: Overview of some of the most important transformations for tabular data (method; mechanism; effect, new value; properties, conditions).

• – analytic function (analytic combination): explicit analytic function (a,b)→f(a,b); enhances the signal-to-noise ratio for the relationship between predictors and predicted; 1 new variable; targeted modeling.

• – empiric combinational recoding: using simple clustering methods like KNN or K-means for a small number of assignates; distance from cluster centers and/or cluster center as new variables; targeted modeling.

• – splitting (deciling): upon evaluation of properties of the distribution; 2 new variables.

• – compressing long tails: based on extreme-value quantiles; 1 new variable, better distinction for data in frequent bins.

• – optimal scaling: numerical encoding and/or rescaling using marginal sums; enhances the scaling of the assignate from nominal to numerical; targeted modeling.

• – relative risk weight: collapsing sets of sparsely filled variables; targeted modeling.
Obviously, the transformation of data is not an analytical act, on neither side. On the left hand it refers to structural and hence semantic assumptions, while on the right hand it introduces hypotheses about those assumptions. Numbers are never just values, much as sentences and words do not consist just of letters. After all, the difference between the two is probably smaller than one might initially presume. Later we will address this aspect from the opposite direction, when it comes to the translation of textual entities into numbers.
Time Series and Contexts
Time series data are the most valuable data. They allow the reconstruction of the flow of information in the observed system, either between variables intrinsic to the measurement setup (reflecting the “system”) or between treatment and effects. In recent years, the so-called “causal FFT” has gained some popularity.
Yet, modeling time series data poses the same problematics as tabular data. We do not know apriori which variables to include, or how to transform variables in order to reflect particular parts of the information in the most suitable way. Simply pressing an FFT onto the data is nothing but naive. The FFT assumes a harmonic oscillation, or a combination thereof, which certainly is not an appropriate assumption. Even if we interpret a long series of FFT terms as an approximation to an unknown function, it is by no means clear whether the then assumed stationarity26 is indeed present in the data.
Instead, it is more appropriate to represent the aspects of a time series in multiple ways. Often, there are many time series available, one for each assignate. This brings the additional problem of a careful evaluation of cross-correlations and auto-correlations, all of this under the condition that it is not known apriori whether the evolution of the system is stationary.
Fortunately, the analysis of multiple time series, even from non-stationary processes, is quite simple if we follow the approach outlined so far. Let us assume a set of assignates {a(i)} for which we have time series measurements available, given by equidistant measurement points. A transformation is then constructed by a method m that is applied to a moving window of size md(k). All moving windows of any size are adjusted such that their endpoints meet at the measurement point at time t(m(k)). Let us call this point the prediction base point, T(p). The transformed values consist either of the residuals between the method’s values and the measurement data, or of the parameters of the method fitted to the moving window. An example of the latter case is given by wavelet coefficients, which provide a quite suitable, multi-frequency perspective onto the development up to T(p). Of course, the time series data of different assignates could be related to each other by any arbitrary functional mapping.
The target value for the model could be any set of future points relative to t(m(k)). The model may predict a singular point, an average some time in the future, the volatility of the future development of the time series, or even the parameters of a particular mapping function relating several assignates. In the latter case the model would predict several criteria at once.
Such transformations yield a table that contains a lot more variables than were originally available. The ratio may grow up to 1:100 in complex cases like the global financial markets. Just to be clear: if you measure, say, the index values of 5 stock markets, some commodities like gold, copper, precious metals and “electronics metals”, the money market, bonds and some fundamentals alike, that is approx. 30 basic input variables, even a superficial analysis would have to inspect 3000 variables… Yes, learning and gaining experience can take quite a bit! Learning and experience do not become cheaper merely because we use machines to achieve them. Just exploring is easier nowadays, not requiring lifetimes any more. The reward consists of stable models about complex issues.
Each point in time is reflected by the original observational values and by a lot of variables that express the most recent history relative to the point in time represented by the respective record. Any of the synthetic records may thus be interpreted as a set of hypotheses about the future development, where each hypothesis comes as a multidimensional description of the context up to T(p). It is then the task of the evolutionarily optimized variable selection based on the SOM to select the most appropriate hypothesis. Any subgroup contained in the SOM then represents comparable sets of relations between the past relative to T(p) and the respective future, as operationalized into the target variable.
Typical transformations in such associative time series modeling are the following (a small sketch in code follows the list):
• – moving average and exponentially decaying moving average for de-seasoning or de-trending;
• – various correlational methods: cross- and auto-correlation, including the result parameters of the Bartlett test;
• – Wavelet-, FFT-, or Walsh-transforms of different order, residuals to the denoised reconstruction;
• – fractal coefficients like the Lyapunov exponent or the Hausdorff dimension;
• – ratios of simple regressions calculated over moving windows of different size;
• – domain-specific markers (think of technical stock market analysis, or the ECG).
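A small sketch of how such moving-window extracts, all ending at the prediction base point T(p), could be computed (Python with numpy; the chosen window sizes and extracts are arbitrary examples, and the toy series is invented):

```python
import numpy as np

def context_features(series, t_p, window_sizes=(5, 20, 60)):
    """Describe the history up to the prediction base point T(p) by simple
    moving-window extracts; every window ends exactly at t_p."""
    feats = {}
    for w in window_sizes:
        win = np.asarray(series[max(0, t_p - w):t_p], dtype=float)
        feats[f"ma_{w}"] = win.mean()        # moving average (de-trending basis)
        feats[f"sd_{w}"] = win.std()         # local volatility
        if len(win) > 1:                     # simple regression over the window
            feats[f"slope_{w}"] = np.polyfit(np.arange(len(win)), win, 1)[0]
    return feats

prices = np.cumsum(np.random.randn(300)) + 100.0   # a toy non-stationary series
row = context_features(prices, t_p=250)            # one synthetic record
```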
Once we have expressed a collection of time series as series of contexts preceding the prediction point T(p), the further modeling procedure does not differ from the modeling of ordinary tabular data, where the observations are independent from each other. From the perspective of our transformation tool, these time series transformations are nothing else than “methods”; they do not differ from other plugin methods with respect to the procedure calls in their programming interface.
„Unstructurable“ „Data“: Images and Texts
The last type of data for which we would briefly like to discuss the issue of transformation is “unstructurable” data. Images and texts are the main representatives of this class of entities. Why are these data “unstructurable”?
Let us answer this question from the perspective of textual analysis. Here, the reason is obvious; actually, there are several obvious reasons. Patrizia Violi [17], for instance, emphasizes that words create their own context, upon which they are then interpreted. Douglas Hofstadter extended the problematics to thinking at large, arguing that for any instance of analogical thinking—and he claimed all thinking to be analogical—it is impossible to define criteria that would allow one to set up a table. Here on this site we have argued repeatedly that it is not possible to define apriori any criteria that would capture the “meaning” of a text.
Also, understanding language, as well as understanding texts, can’t be mapped to the problematics of predicting a time series. In language, there is no such thing as a prediction point T(p), and there is no positively definable “target” which could be predicted. The main reason for this is the special dynamics between context (background) and proposition (figure). It is a multi-level, multi-scale thing. It is ridiculous to apply n-grams to text, then hoping to catch anything “meaningful”. The same is true for any statistical measure.
Nevertheless, using language, that is, producing and understanding it, is based on processes that select and compose. In some way there must be some kind of modeling. We have already proposed a structure, or rather an architecture, for this in a previous essay.
The basic trick consists of two moves: firstly, texts are represented probabilistically as random contexts in an associative storage like the SOM. No variable selection takes place here, no modeling, and no operationalization of a purpose is present. Secondly, this representation is then used as the basis for targeted modeling. Yet, the “content” of this representation does not consist of “language” data anymore. Strikingly different, it contains data about the relative location of language concepts and their sequence, as they occur as random contexts in a text.
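One way such random contexts could be sampled (a sketch only; window width, count, the whitespace tokenization and the function name are our assumptions):

```python
import random

def random_contexts(tokens, width=7, n=100, seed=42):
    """Represent a text as a collection of 'random contexts': windows of
    neighboring tokens sampled at random positions. Relative location and
    sequence are preserved; no variables are selected, no target is set."""
    rng = random.Random(seed)
    contexts = []
    for _ in range(n):
        i = rng.randrange(len(tokens))
        contexts.append(tuple(tokens[max(0, i - width):i + width + 1]))
    return contexts

ctxs = random_contexts("the quick brown fox jumps over the lazy dog".split())
```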
The basic task in understanding language is to accomplish the progression from a probabilistic representation to a symbolic, tabular representation. Note that any tabular representation of an observation is already on the symbolic level. In the case of language understanding, precisely this is not possible: we can’t define meaning, and above all not apriori. Meaning appears as a consequence of performance, of the execution of certain rules to a certain degree. Hence we can’t provide apriori the symbols that would be necessary to set up a table for modeling, assessing “similarity”, etc.
Now, instead of a probabilistic non-structured representation, we could also say an arbitrary unstable structure. From this we should derive a structured, (proto-)symbolic, and hence tabular and almost stable structure. The trick to accomplish this consists of using the modeling system itself as a measurement device, and thus also as a “root” for further reference in the models that then become possible. Kohonen and colleagues demonstrated this crucial step in their WebSom project. Unfortunately (for them), they then actualized several misunderstandings regarding modeling. For instance, they misinterpreted associative storage as a kind of model.
The nice thing with this architecture is that once the symbolic level has been achieved, any of the steps of our modeling approach can be applied without any change, including the automated transformation of “data” as described above.
Understanding the meaning of images follows the same scheme. The fact that there are no words renders the task more complicated and more simple at the same time. Note that so far there is no system that has learned to “see”, to recognize and to understand images, despite many titles claiming that the proposed “system” can do so. All computer vision approaches are analytic by nature, hence they are all deeply inadequate. The community is running straight into the method hell, as the statisticians and the data miners did before, mistaking transformations for methods, conflating transformation and modeling, etc. We discussed these issues at length above. Any of the approaches might be intelligently designed, but all are victimized by the representationalist fallacy, and probably even by naive realism. Due to the fact that the analytic approach is first, second and third mainstream, the probabilistic and contextual bottom-up approach is missing so far. In the same way as a word is not equal to the grapheme, a line is not defined on the symbolic level in the brain. Here again we meet the problem of analogical thinking, even on the most primitive graphical level. When is a line still a line, when is a triangle still a triangle?
In order to start in the right way we first have to represent the physical properties of the image along different dimensions, such as textures, edges, or salient points, and all of those across different scales. Probably one can even detect salient objects by some analytic procedure. From any of the derived representations the random contexts are derived and arranged as vectors. A single image is then represented as a table that contains random contexts derived from the image as a physical entity. From here on, the further processing scheme is the same as for texts. Note that there is no such property as “line” in this basic mapping.
In the case of texts and images the basic transformation steps thus consist in creating the representation as random contexts. Fortunately, this is “only” a question of suitable plugins for our transformation tool. In both cases, for texts as well as for images, the resulting vectors could grow considerably. Several thousands of implied variables must be expected. Again, there is already a solution, known as random projection, which allows to compress even very large vectors (say 20’000+) into one of, say, maximally 150 variables, without losing much of the information that is needed to retain the distinctive potential. Random projection works by multiplying a vector of size N with a matrix of uniformly distributed random values of size NxM, which results in a vector of size M. Of course, M is chosen suitably (100+). The reason why this works is that with that many dimensions almost all vectors are approximately orthogonal to each other! Of course, the resulting fields in such a vector do not “represent” anything that could be conceived as a reference to an “object”. Internally, however, that is, from the perspective of a (population of) SOMs, it may well be used as an (almost) fixed “attribute”. Yet neither the missing direct reference nor the subjectivity poses a problem, as meaning is not a mental entity anyway. Q.E.D.
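A minimal sketch of this compression step, with sizes as mentioned above; the uniform distribution follows the text, while the scaling factor is a standard choice, not a specific of our tool:

```python
# Sketch of random projection: an N-vector is compressed to M dimensions by
# multiplying with a random N x M matrix. The scaling keeps expected lengths
# comparable; distances are approximately preserved.
import numpy as np

rng = np.random.default_rng(0)
N, M = 20_000, 150
R = rng.uniform(-1.0, 1.0, size=(N, M)) * np.sqrt(3.0 / M)  # entry variance 1/M

x = rng.random(N)
y = rng.random(N)
x_p, y_p = x @ R, y @ R                     # compressed vectors of size M

# distances (and thus the distinctive potential) are approximately preserved
print(np.linalg.norm(x - y), np.linalg.norm(x_p - y_p))
```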
Here in this essay we discussed several aspects related to the transformation of data as an epistemic activity. We emphasized that an appropriate attitude towards the transformation of data requires a shift in perspective and the choice of another vantage point. One of the more significant changes in attitude concerns, perhaps, the dropping of any positivist approach as one of the main pillars of traditional modeling. Remember that statistics is such a positivist approach. In our perspective, statistical methods are just transformations, nothing less, but above all also nothing more, characterized by a specific set of rather strong assumptions and conditions for their applicability.
We also provided some important practical examples for the transformation of data, whether tabular data derived from independent observations, time series data, or “unstructurable” “data” like texts and images. Following the proposed approach we also described a prototypical architecture for a transformation tool that could be used universally. In particular, it allows a complete automation of the modeling task, as it could be used, for instance, in the field of so-called data mining. The possibility of automated modeling is, of course, a fundamental requirement for any machine-based episteme.
1. The only reason why we do not refer to cultures and philosophies outside Europe is that we do not know sufficient details about them. Yet, I am pretty sure that taking into account Chinese or Indian philosophy would aggravate the situation.
2. It was Friedrich Schleiermacher who first observed that even the text becomes alien and at least partially autonomous to its author due to the necessity and inevitability of interpretation. Thereby he founded hermeneutics.
3. In German language these words all exhibit a multiple meaning.
4. In the last 10 years (roughly) it became clear that the gene-centered paradigms are not only insufficient [2], they are even seriously defective. Evelyn Fox Keller traces this weird paradigm in detail [3].
5. Michel Foucault [4]
6. The „axiom of choice“ is one of the founding axioms in mathematics. Its importance can hardly be overestimated. Basically, it assumes that “something is choosable”. The notion of “something choosable” is then used to construct countability as a derived domain. This implies three consequences. First, it avoids assuming countability, that is, the effect of a preceding symbolification, as a basis for set theory. Secondly, it puts performance in the first place. These two implications render the “Axiom of Choice” into a doubly-articulated rule, offering two docking sites, one for mathematics and one for philosophy. In some way, it thus cannot count as an “axiom”. Those implications are, for instance, fully compatible with Wittgenstein’s philosophy. For these reasons, Zermelo’s “axiom” may even serve as a shared point (of departure) for a theory of machine-based episteme. Finally, the third implication is that through the performance of the selection a relation, notably a somewhat empty relation, is conceived as a predecessor of countability and the symbolic level. Interestingly, this also relates to Quantum Darwinism and String Theory.
7. David Grahame Shane’s theory on cities and urban entities [5] is probably the only theory in urbanism that is truly a relational theory. Additionally, his work is full of relational techniques and concepts, such as the “heterotopy” (a term coined by Foucault).
8. Bruno Latour developed the Actor-Network-Theory [6,7], while Clarke evolved “Grounded Theory” into the concept of “Situational Analysis” [8]. Latour, as well as Clarke, emphasize and focus the relation as a significant entity.
9. behavioral coating, and behavioral surfaces ;
10. See Information & Causality about the relation between measurement, information and causality.
11. „Passivist“ refers to the inadequate form of realism according to which things exist as such, independently of interpretation. Of course, interpretation does not affect the material dimension of a thing. Yet it changes its relations, insofar as the relations of a thing, the Wittgensteinian “facts”, are visible and effective only if we actively assign significance to them. The “passivist” stance conceives itself as a re-construction instead of a construction (cf. Searle [9]).
12. In [10] we developed an image theory in the context of the discussion about the mediality of facades of buildings.
13. nonsense of „non-supervised clustering“
14. In his otherwise quite readable book [11], though it may serve only as an introduction.
15. This can be accomplished by using a data segment for which the implied risk equals 0 (positive predictive value = 1). We described this issue in the preceding chapter.
16. hint to particle physics…
17. See our previous essay about the complementarity of the concepts of causality and information.
18. For an introduction of renormalization (in physics) see [12], and a bit more technical [13]
19. see the Wiki entry about so-called gravitational lenses.
20. Catastrophe theory is a concept invented and developed by French mathematician Rene Thom as a field of Differential Topology. cf. [14]
21. In their book, Witten & Frank [15] recognized the importance of transformation and included a dedicated chapter about it. They also explicitly mention the creation of synthetic variables. Yet they also explicitly retreat from it as a practical means, for reasons of computational complexity (here: the time needed to perform a calculation in relation to the amount of data). After all, their attitude towards transformation is somehow that towards an unavoidable evil; they do not recognize its full potential. As a cure for the selection problem they propose SVMs and their hyperplanes, which is definitely a poor recommendation.
22. Dorian Pyle [11]
23. see Benoit Mandelbrot [16].
24. By using almost meaningless labels target-oriented modeling is often called supervised modeling as opposed to “non-supervised modeling”, where no target variable is being used. Yet, such a modeling is not a model, since the pragmatics of the concept of “model” invariably requires a purpose.
25. About assignates: often called property, or feature… see about modeling
26. Stationarity is a concept in empirical systems analysis or description which denotes the expectation that the internal setup of the observed process will not change across time within the observed period. If a process is rated as “stationary” upon a dedicated test, one could select one particular method or model, and only that one, to reflect the data. Of course, we again meet the chicken-and-egg problem: we can decide about stationarity only by means of a completed model, that is, after the analysis. As a consequence, we should not use linear methods, or methods that depend on independence, to check stationarity before applying the “actual” method. Such a procedure cannot count as a methodology at all. The modeling approach should instead be stable against non-stationarity. Yet the problem of the reliability of the available data sample remains, of course. As a means to “robustify” the resulting model against the unknown future one can apply surrogating. Ultimately, however, the only cure is a circular, or recurrent, methodology that incorporates learning and adaptation as a structure, not as a result.
• [1] Robert Rosen, Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life. Columbia University Press, New York 1991.
• [2] Nature Insight: Epigenetics, Supplement Vol. 447 (2007), No. 7143 pp 396-440.
• [3] Evelyn Fox Keller, The Century of the Gene. Harvard University Press, Boston 2002. See also: E. Fox Keller, “Is There an Organism in This Text?”, in P. R. Sloan (ed.), Controlling Our Destinies. Historical, Philosophical, Ethical, and Theological Perspectives on the Human Genome Project, Notre Dame (Indiana), University of Notre Dame Press, 2000, pp. 288-289.
• [4] Michel Foucault, Archeology of Knowledge. 1969.
• [5] David Grahame Shane. Recombinant Urbanism: Conceptual Modeling in Architecture, Urban Design and City Theory
• [6] Bruno Latour. Reassembling The Social. Oxford University Press, Oxford 2005.
• [7] Bruno Latour (1996). On Actor-network Theory. A few Clarifications. in: Soziale Welt 47, Heft 4, p.369-382.
• [8] Adele E. Clarke, Situational Analysis: Grounded Theory after the Postmodern Turn. Sage, Thousand Oaks, CA 2005.
• [9] John R. Searle, The Construction of Social Reality. Free Press, New York 1995.
• [11] Dorian Pyle, Data Preparation for Data Mining. Morgan Kaufmann, San Francisco 1999.
• [12] John Baez (2009). Renormalization Made Easy. Webpage
• [13] Bertrand Delamotte (2004). A hint of renormalization. Am.J.Phys. 72: 170-184. available online.
• [14] Tim Poston & Ian Stewart, Catastrophe Theory and Its Applications. Dover Publ. 1997.
• [15] Ian H. Witten & Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques (2nd ed.). Elsevier, Oxford 2005.
• [16] Benoit Mandelbrot & Richard L. Hudson, The (Mis)behavior of Markets. Basic Books, New York 2004.
• [17] Patrizia Violi (2000). Prototypicality, typicality, and context. in: Liliana Albertazzi (ed.), Meaning and Cognition – A multidisciplinary approach. Benjamins Publ., Amsterdam 2000. p.103-122.
Prolegomena to a Morphology of Experience
May 2, 2012 § Leave a comment
Experience is a fundamental experience.
Epistemic Modeling
The Bridge
The world is everything that is the case.
The following list provides an overview about the following chapters:
The Modeling Statement
How to conclude and what to conclude from measured data?
Predictability and Predictivity
The Independence Assumption
The Model Selection Problem
Methods, Models, Variables
The Perils of Universalism
Genetics, revisited
Noise, and Noise
It is said that Einstein once said
Make things as simple as possible, but not simpler.
Describing Classifiers
Utilization of Information
Table 1a: A confusion matrix for a quite performant classifier.

                 condition Pos    condition Neg
    test Pos     100 (TP)         3 (FP)
    test Neg     28 (FN)          1120 (TN)
                 condition Pos    condition Neg
    test Pos     0 (50)           0 (39)           0.0 (0.56)
    test Neg     50 (0)           39 (0)           0.44 (1.0)
                 0.0 (1.0)        1.0 (0.0)
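The marginal values in the table above are consistent with the usual classifier rates; here is a minimal sketch, interpreting the rightmost column as PPV/NPV and the bottom row as sensitivity/specificity (an assumption about the lost table labels):

```python
# Deriving the marginal rates from the raw counts (TP, FP, FN, TN);
# values match the non-parenthesized classifier in the table.
TP, FP, FN, TN = 0, 0, 50, 39

PPV = TP / (TP + FP) if TP + FP else 0.0   # row "test Pos" -> 0.0
NPV = TN / (TN + FN)                       # row "test Neg" -> 0.44
sensitivity = TP / (TP + FN)               # column "condition Pos" -> 0.0
specificity = TN / (TN + FP)               # column "condition Neg" -> 1.0
print(PPV, round(NPV, 2), sensitivity, specificity)
```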
Observations and Probabilities
The Result of Modeling
Analogical Thinking, revisited. (II)
March 20, 2012 § Leave a comment
In this second part of the essay about a fresh perspective on analogical thinking—more precisely: on models of it—we will try to bring two concepts together that at first sight represent quite different approaches: Copycat and SOM.
Why engage in such an endeavor? Firstly, we are quite convinced that FARG’s Copycat demonstrates an important and outstanding architecture. It provides a well-founded proposal about the way we humans apply ideas and abstract concepts to real situations. Secondly, however, it is also clear that Copycat suffers from a few serious flaws in its architecture, particularly its built-in idealism. This renders any adaptation to more realistic domains, or even to completely domain-independent conditions, very, very difficult, if not impossible, since this drawback also prohibits structural learning. So far, Copycat is just able to adapt some predefined internal parameters. In other words, the Copycat mechanism merely adapts a predefined structure, though a quite abstract one, to a given empiric situation.
Well, basically there seem to be two different, “opposite” strategies for merging these approaches. Either we integrate the SOM into Copycat, or we try to transfer the relevant (yet to be identified) parts from Copycat to a SOM-based environment. Yet, at the end of the day we will see that and how the two alternatives converge.
In order to accomplish our goal of establishing a fruitful combination between SOM and Copycat we have to take mainly three steps. First, we briefly recapitulate the basic elements of Copycat and the proper instance of a SOM-based system. We also will describe the extended SOM system in some detail, albeit there will be a dedicated chapter on it. Finally, we have to transfer and presumably adapt those elements of the Copycat approach that are missing in the SOM paradigm.
Crossing over
The particular power of (natural) evolutionary processes derives from the fact that they are based on symbols. “Adaptation” or “optimization” are not processes that merely change the numerical values of parameters in formulas. Quite the opposite: in adaptational processes that span across generations, parts of the DNA-based story are being rewritten, with potential consequences for the whole of the story. This effect of recombination in the symbolic space is particularly present in the so-called “crossing over” during the production of gamete cells in the context of sexual reproduction in eukaryotes. Crossing over is a “technique” to dramatically speed up the exploration of the space of potential changes. (In some way, this space is also greatly enlarged by symbolic recombination.)
What we will try here in our attempt to merge the two concepts of Copycat and SOM is exactly this: a symbolic recombination. The difference to its natural template is that in our case we do not transfer DNA snippets between homologous locations in chromosomes; we transfer whole “genes”, which are represented by elements.
Elementarizations I: C.o.p.y.c.a.t.
In part 1 we identified two top-level (non-atomic) elements of Copycat.
Since the first element, covering evolutionary aspects such as randomness, population and a particular memory dynamics, is pretty clear, and a whole range of possible ways to implement it is available, any attempt at improving the Copycat approach has to target the static, strongly idealistic characteristics of the structure that FARG calls the “Slipnet”. The Slipnet has to be enabled for structural changes and autonomous adaptation of its parameters. This could be accomplished in many ways, e.g. by representing the items in the Slipnet as primitive artificial genes. Yet, we will take a different road here, since the SOM paradigm already provides the means to achieve idealizations.
At that point we have to elementarize Copycat’s Slipnet in a way that renders it compatible with the SOM principles. Hofstadter emphasizes the following properties of the Slipnet and the items contained therein (pp.212).
• (1) Conceptual depth allows for a dynamic and continuous scaling of “abstractness” and resistance against “slipping” to another concept;
• (2) Nodes and links between nodes both represent active abstract properties;
• (3) Nodes acquire, spread and lose activation, which knows a switch-on threshold < 1;
• (4) The length of links represents conceptual proximity or degree of association between the nodes.
As a whole, and viewed from the network perspective, the Slipnet behaves much like a spring system, or a network built from rubber bands, where the springs or the rubber bands are regulated in their strength. Note that our concept of SomFluid also exhibits the feature of local regulation of the bonds between nodes, a property that is not present in the idealized standard SOM paradigm.
Yet, the most interesting properties in the list above are (1) and (2), while (3) and (4) are known in the classic SOM paradigm as well. The first item is great because it represents an elegant instance of creating the possibility for measurability that goes far beyond the nominal scale. As a consequence, “abstractness” ceases to be a nominal all-or-none property, as it is in hierarchies of abstraction. Such hierarchies now can be recognized as mere projections or selections, both introducing a severe limitation of expressibility. The conceptual depth opens a new space.
The second item is also very interesting since it blurs the distinction between items and their relations to some extent. That distinction is also a consequence of relying too readily on the nominal scale of description. It introduces a certain moment of self-reference, though this is not fully developed in the Slipnet. Nevertheless, a result of this move is that concepts can’t be thought without their embedding into a neighborhood of other concepts. Hofstadter clearly introduces a non-positivistic and non-idealistic notion here, as it establishes a non-totalizing meta-concept of wholeness.
Yet, the blurring between “concepts” and “relations” could and must be driven far beyond the level Hofstadter achieved, if the Slipnet is to become extensible. Namely, all the parts and processes of the Slipnet need to follow the paradigm of probabilization, since this offers the only way to evade the demons of cybernetic idealism and apriori control. Hofstadter himself relies much on probabilization concerning the other two architectural parts of Copycat. It’s beyond me why he didn’t apply it to the Slipnet too.
Taken together, we may derive (or: impose) the following important elements for an abstract description of the Slipnet.
• (1) Smooth scaling of abstractness (“conceptual depth”);
• (2) Items and links of a network of sub-conceptual abstract properties are instances of the same category of “abstract property”;
• (3) Activation of abstract properties represents a non-linear flow of energy;
• (4) The distance between abstract properties represents their conceptual proximity.
A note should be added regarding the last (fourth) point. In Copycat, this proximity is a static number. In Hofstadter’s framework, it does not express something like similarity, since the abstract properties are not conceived as compounds. That is, the abstract properties are themselves on the nominal level. And indeed, it might appear rather difficult to conceive of concepts such as “right of”, “left of”, or “group” as compounds. Yet, I think that it is well possible by referring to mathematical group theory, the theory of algebra and the framework of mathematical categories. All of those may be subsumed under the same operationalization: symmetry operations. Of course, there are different ways to conceive of symmetries and to implement the respective operationalizations. We will discuss this issue in a forthcoming essay that is part of the series “The Formal and the Creative”.
The next step is now to distill the elements of the SOM paradigm in a way that enables a common differential for the SOM and for Copycat.
Elementarizations II: S.O.M.
The self-organizing map is a structure that associates comparable items—usually records of values that represent observations—according to their similarity. Hence, it makes two strong and important assumptions.
• (1) The basic assumption of the SOM paradigm is that items can be rendered comparable;
• (2) The items are conceived as tokens that are created by repeated measurement;
The first assumption means that the structure of the items can be described (i) apriori to their comparison and (ii) independently from the final result of the SOM process. Of course, this assumption is not unique to SOMs; any algorithmic approach to the treatment of data is committed to it. The particular status of the SOM is given by the fact—and in stark contrast to almost any other method for the treatment of data—that this is the only strong assumption. All other parameters can be handled in a dynamic manner. In other words, there is no particular zone of the internal parametrization of a SOM that would be inaccessible apriori. Compare this with ANN or statistical methods, and you feel the difference… Usually, methods are rather opaque with respect to their internal parameters. For instance, the similarity functional is usually not accessible, which renders all these nice-looking, so-called analytic methods into some kind of subjective gambling. In PCA and its relatives, for instance, the similarity is buried in the covariance matrix, which in turn is defined only within the assumption of normality of correlations. If not a rank correlation is used, this assumption is extended even to the data itself. In both cases it is impossible to introduce a different notion of similarity. Also as a consequence of that, it is impossible to investigate the particular dependency of the results proposed by the method on the structural properties and (opaque) assumptions. In contrast to such unfavorable epistemo-mythical practices, the particular transparency of the SOM paradigm allows for critical structural learning of the SOM instances. “Critical” here means that the influence of internal parameters of the method onto the results or conclusions can be investigated, changed, and accordingly adapted.
The second assumption is implied by the SOM’s purpose of being a learning mechanism. It simply needs some observations as results of the same type of measurement. The number of observations (the number of repeats) has to exceed a certain lower threshold, which, depending on the data and the purpose, is at least 8; typically, however, (much) more than 100 observations of the same kind are needed. Any result will be within the space delimited by the assignates (properties), and thus any result is a possibility (if we take just the SOM itself).
The particular accomplishment of a SOM process is the transition from the extensional to the intensional description, i.e. the SOM may be used as a tool to perform the step from tokens to types.
From this we may derive the following elements of the SOM:1
• (1) a multitude of items that can be described within a common structure, though not necessarily an identical one;
• (2) a dense network where the links between nodes are probabilistic relations;
• (3) a bottom-up mechanism which results in the transition from an extensional to an intensional level of description;
As a consequence of this structure the SOM process avoids the necessity to compare all items (N) to all other items (N-1). This property, together with the probabilistic neighborhoods establishes the main difference to other clustering procedures.
It is quite important to understand that the SOM mechanism as such is not a modeling procedure. Several extensions have to be added and properly integrated, such as
• – operationalization of the target into a target variable;
• – validation by separate samples;
• – feature selection, preferably by an instance of a generalized evolutionary process (though not by a genetic algorithm);
• – detecting strong functional and/or non-linear coupling between variables;
• – description of the dependency of the results from internal parameters by means of data experiments.
We already described the generalized architecture of modeling as well as the elements of the generalized model in previous chapters.
Yet, as we explained in part 1 of this essay, analogy making is conceptually incompatible with any kind of modeling, as long as the target of the model points to some external entity. Thus, we have to choose a non-modeling instance of a SOM as the starting point. However, clustering is also an instance of those processes that provide the transition from extensions to intensions, whether this clustering is embedded into full modeling or not. In other words, neither the classic SOM nor the modeling SOM is suitable as a candidate for a merger with Copycat.
SOM-based Abstraction
Fortunately, there is already a proposal, and even a well-known one, that indeed may be taken as such a candidate: the two-layer SOM (TL-SOM), as it has been demonstrated as an essential part of the so-called WebSom [1,2].

Actually, the description as being “two-layered” is a very minimalistic, if not inappropriate, description of what is going on in the WebSom. We already discussed many aspects of its architecture here and here.
Concerning our interests here, the multi-layered arrangement itself is not a significant feature. Any system doing complicated things needs a functional compartmentalization; we have met a multi-part, multi-compartment and multi-layered structure in the case of Copycat too. Besides, the SOM mechanism itself remains perfectly identical across the layers.
The real interesting features of the approach realized in the TL-SOM are
• – the preparation of the observations into probabilistic contexts;
• – the utilization of the primary SOM as a measurement device (the actual trick).
The domain of application of the TL-SOM is the comparison and classification of texts. Texts belong to unstructured data, and the comparison of texts is exposed to the same problematics as the making of analogies: there is no apriori structure that could serve as a basis for modeling. Also, like the analogies investigated by FARG, the text is a locational phenomenon, i.e. it takes place in a space.
Let us briefly recapitulate the dynamics in a TL-SOM. In order to create a TL-SOM, the text is first dissolved into overlapping, probabilistic contexts. Note that the locational arrangement is captured by these random contexts. No explicit apriori rules are necessary to separate patterns. The resulting collection of contexts then gets “somified”. Each node then contains similar random contexts that have been derived from various positions in different texts. Now the decisive step is taken, which consists in turning the perspective by “90 degrees”: we can use the SOM as the basis for creating a histogram for each of the texts. The nodes are interpreted as properties of the texts, i.e. each node represents a bin of the histogram. The values of the individual bins measure how frequently the text is represented by the respective random context. The secondary SOM then creates a clustering across these histograms, which represent the texts in an abstract manner.
This way the primary lattice of the TL-SOM is used to impose a structure on the unstructured entity “text.”
Figure 1: A schematic representation of a two-layered SOM with built-in self-referential abstraction. The input for the secondary SOM (foreground) is derived as a collection of histograms that are defined as a density across the nodes of the primary SOM (background). The input for the primary SOM are random contexts.
To put it clearly: the secondary SOM builds an intensional description of entities, which results from the interaction of a SOM with a probabilistic description of the empirical observations. Quite obviously, intensions built this way about intensions are not only quite abstract, the mechanism could even be stacked. It could be described as “high-level perception” with the same justification with which Hofstadter uses the term for Copycat. The TL-SOM turns representational intensions into abstract, structural ones.
The two aspects from above thus interact; they are elements of the TL-SOM. Despite the fact that there are still transitions from extensions to intensions, we also can see that the targeted units of the analysis, the texts, get probabilistically distributed across an area, the lattice of the primary SOM. Since the SOM maps the high-dimensional input data onto its map in a way that preserves their topological properties, it is easy to recognize that the TL-SOM creates conceptual halos as an intermediate.
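To make the two moves concrete, here is a deliberately toy-sized sketch; the SOM implementation is rudimentary and all sizes are illustrative, a real system would use the full preprocessing described above:

```python
# A rudimentary SOM plus the TL-SOM measurement step. The SOM is a toy
# (flat learning schedule, brute-force BMU search); sizes are assumptions.
import numpy as np

def train_som(data, rows=8, cols=8, epochs=10, lr=0.5, sigma=2.0):
    rng = np.random.default_rng(1)
    w = rng.random((rows * cols, data.shape[1]))          # node weight vectors
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # squared grid distance
            w += (lr * np.exp(-d2 / (2 * sigma ** 2)))[:, None] * (x - w)
        lr *= 0.9
        sigma *= 0.9                                      # simple annealing
    return w

def text_histogram(contexts, w):
    # "turning the perspective by 90 degrees": one bin per node, counting
    # how often this text's contexts hit that node
    bins = np.zeros(len(w))
    for x in contexts:
        bins[np.argmin(((w - x) ** 2).sum(axis=1))] += 1
    return bins / bins.sum()

# contexts pooled from all texts train the primary SOM; each single text's
# contexts then yield its histogram, the input vector for the secondary SOM
```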
So let us summarize the possibilities provided by the SOM.
• (1) SOMs are able to create non-empiric, or better: de-empirified idealizations of intensions that are based on “quasi-empiric” input data;
• (2) TL-SOMs can be used to create conceptual halos.
In the next section we will focus on this spatial, better: primarily spatial effect.
The Extended SOM
Kohonen and co-workers [1,2] proposed to build histograms that reflect the probability density of a text across the SOM. Those histograms represent the original units (e.g. texts) in a quite static manner, using a kind of summary statistics.
Yet, texts are definitely not a static phenomenon. At first sight there is at least a series, while more appropriately texts may even be described as dynamic networks with an associative power of their own [3]. Returning to the SOM, we see that in addition to the densities scattered across the nodes of the SOM we can also observe a sequence of invoked nodes, according to the sequence of random contexts in the text (or the serial observations).
The not so difficult question then is: how to deal with that sequence? Obviously, it is best conceived, again, as a random process (though one with a strong structure), and random processes are best described using Markov models, either as hidden Markov models (HMM) or as transitional models. Note that the Markov model is not a model of the raw observational data; it describes the sequence of activation events of SOM nodes.
The Markov model can be used as a further means to produce conceptual halos in the sequence domain. The differential properties of a particular sequence as compared to the Markov model then could be used as further properties to describe the observational sequence.
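A sketch of such a transitional model over the sequence of activated nodes; the node sequence is assumed to come from applying the trained SOM to the ordered random contexts:

```python
# First-order transition model over the series of winning nodes.
import numpy as np

def transition_matrix(node_seq, n_nodes):
    T = np.zeros((n_nodes, n_nodes))
    for a, b in zip(node_seq[:-1], node_seq[1:]):
        T[a, b] += 1.0                                     # count transitions
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1.0)     # row-normalize
    return T

seq = [3, 3, 7, 2, 7, 3, 2, 2, 7]      # illustrative activation sequence
T = transition_matrix(seq, n_nodes=9)
print(T[3])    # where does the process tend to go from node 3?
```

The differential properties of an observed sequence against this model can then serve as the additional descriptive properties mentioned above.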
(The full version of the extended SOM comprises targeted modeling as a further level. Yet, this targeted modeling does not refer to raw data. Instead, its input is provided completely by the primary SOM, which is based on probabilistic contexts, while the target of such modeling is just internal consistency, to a context-dependent degree.)
The Transfer
Just to avoid misunderstanding: it does not make sense to try to represent Copycat completely by a SOM-based system. The particular dynamics and phenomenological behavior depend a lot on Copycat’s tripartite morphology as represented by the Coderack (agents), the Workspace and the Slipnet. We are “just” in search of a possibility to remove the deep idealism from the Slipnet in order to enable it for structural learning.
Basically, there are two possible routes. Either we re-interpret the extended SOM in a way that allows us to represent the elements of the Slipnet as properties of the SOM, or we try to replace all the items in the Slipnet by SOM lattices.
So, let us take a look at which structures we have (Copycat) or could have (SOM) on both sides.
Table 1: Comparing elements from Copycat’s Slipnet to the (possible) mechanisms in a SOM-based system.
1. Smoothly scaled abstraction. Copycat: conceptual depth (dynamic parameter). Extended SOM: distance of abstract intensions in an integrated lattice of an n-layered SOM.
2. Links as concepts. Copycat: structure, by implementation. Extended SOM: reflecting conceptual proximity as an assignate property for a higher level.
3. Activation featuring non-linear switching behavior. Copycat: structure, by implementation. Extended SOM: x.
4. Conceptual proximity. Copycat: link length (dynamic parameter). Extended SOM: distance in map (dynamic parameter).
5. Kind of concepts. Copycat: locational, positional. Extended SOM: symmetries, any.
From this comparison it is clear that the single most challenging part of this route is the possibility for the emergence of abstract intensions in the SOM, based on empirical data. From the perspective of the SOM, relations between observational items such as “left-most”, “group” or “right of”, and even such as “sameness group” or “predecessor group”, are just probabilities of a pattern. Such patterns are identified by functions or dynamic combinations thereof. Combinations of topological primitives remain mappable by analytic functions. Such concepts we could call “primitive concepts”, and we can map these to the process of data transformation and the set of assignates as potential properties.2 It is then the job of the SOM to assign a relevancy to the assignates.
Yet, Copycat’s Slipnet also comprises rather abstract concepts such as “opposite”. Furthermore, the most abstract concepts often act as links between more primitive concepts, or, in Hofstadter’s terms, conceptual items of lower “conceptual depth”.
My feeling here is that it is a fundamental mistake to implement concepts like “opposite” directly. What is opposite of something else is a deeply semantic concept in itself, thus strongly dependent on the domain. I think that most of the interesting concepts, i.e. the most abstract ones, are domain-specific. Concepts like “opposite” could be considered as something “simple” only in the case of geometric or spatial domains.
Yet, that’s not a weakness. We should use this as a design feature. Take the rather simple case shown in the next figure as an example. Here we simply mapped triplets of uniformly distributed random values onto a SOM. The three values can readily be interpreted as the parts of an RGB value, which renders the interpretation more intuitive. The special thing here is that the map has been a really large one: we defined approximately 700’000 nodes and fed approx. 6 million observations into it.
Figure 2: A SOM-based color map showing emergence of abstract features. Note that the topology of the map is a borderless toroid: Left and right borders touch each other (distance=0), and the same applies to the upper and lower borders.
We can observe several interesting things. The SOM didn’t come up with just any arbitrary sorting of the colors. Instead, a very particular one emerged.
First, the map is not perfectly homogeneous anymore. Very large maps tend to develop “anisotropies”, symmetry breaks if you like, simply due to the fact that the signal horizon becomes an important issue. This should not be regarded as a deficiency, though. Symmetry breaks are essential for the possibility of the emergence of symbols. Second, we can see that two “color models” emerged: the RGB model around the dark spot in the lower left, and the YMC model around the bright spot in the upper right. Third, the distance between the bright, almost white spot and the dark, almost black one is maximized.
In other words, and not quite surprisingly, the conceptual distance is reflected as a geometrical distance in the SOM. As in the case of the TL-SOM, we now could use the SOM as a measurement device that transforms an unknown structure into an internal property, simply by using the locational property in the SOM as an assignate for a secondary SOM. In this way we not only can represent “opposite”, we even have a model procedure for “generalized oppositeness” at our disposal.
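A minimal sketch of this measurement, assuming the toroidal lattice described in the figure caption; grid size and positions are illustrative:

```python
# "Observing the SOM": the positions of two winner nodes on the toroidal
# lattice are turned into a single assignate, their wrapped distance,
# which a secondary SOM can consume.
import numpy as np

def toroidal_distance(p, q, rows, cols):
    dr = min(abs(p[0] - q[0]), rows - abs(p[0] - q[0]))   # wrap vertically
    dc = min(abs(p[1] - q[1]), cols - abs(p[1] - q[1]))   # wrap horizontally
    return np.hypot(dr, dc)

# maximal distance on the torus = maximal "oppositeness", e.g. black vs. white
print(toroidal_distance((10, 10), (410, 510), rows=800, cols=1000))
```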
It is crucial to understand this step of “observing the SOM”, thereby conceiving of the SOM as a filter, or more precisely as a measurement device. Of course, at this point it becomes clear that a large variety of such transposing and internal-virtual measurement devices may be thought of. Methodologically, this opens a dimension orthogonal to the representation of data, strongly resembling the concept of orthoregulation.
The map shown above even allows one to create completely different color models, for instance one around yellow and another one around magenta. Our color psychology is strongly determined by the sun’s radiated spectrum, and hence it reflects a particular Lebenswelt; yet, there is no necessity about it. Some insects like bees are able to perceive ultraviolet radiation, i.e. their colors may have 4 components, yielding a completely different color psychology, while the capability to distinguish colors remains perfectly intact.3
“Oppositeness” is just a “simple” example of an abstract concept and its operationalization using a SOM. We already mentioned the “serial” coherence of texts (and thus of general arguments), which can be operationalized as a sort of virtual movement across a SOM of a particular level of integration.
It is crucial to understand that there is no other model besides the SOM that combines the ability to learn from empirical data and the possibility for emergent abstraction.
There is yet another lesson that we can take home from the simple example above. Well, the example doesn’t remain that simple. High-level abstraction, items of considerable conceptual depth so to speak, requires rather short assignate vectors. In the process of learning qua abstraction it appears to be essential that the masses of possible assignates derived from, or imposed by, the measurement of raw data be reduced. On the one hand, empiric contexts from very different domains should be abstracted, i.e. quite literally “reduced”, into the same perspective. On the other hand, any given empiric context should be abstracted into (much) more than just one abstract perspective. The consequence is that we need a lot of SOMs, all “sufficiently” separated from each other. In other words, we need a dynamic population of self-organizing maps in order to represent the capability of abstraction in real life. “Dynamic population” here means that there are developmental mechanisms that result in a proliferation, almost a breeding, of new SOM instances in a seamless manner. Of course, the SOM instances themselves have to be able to grow and to differentiate, as we have described it here and here.
In a population of SOMs the conceptual depth of a concept may be represented by the effort needed to arrive at a particular abstract “intension”. This not only comprises the ordinary SOM lattices, but also processes like Markov models, simulations, idealizations qua SOMs, targeted modeling, the transition into symbolic space, synchronous or potential activations of other SOM compartments etc. This effort may finally be represented as a “number”.
The structure of a multi-layered system of self-organizing maps, as proposed by Kohonen and co-workers, is a powerful model to represent emerging abstraction in response to empiric impressions. The Copycat model demonstrates how abstraction could be brought back to the level of application in order to become able to make analogies and to deal with “first-time exposures”.
Here we tried to outline a potential path to bring these models together. We regard this combination in the way we proposed it (or a quite similar one) as crucial for any advance in the field of machine-based episteme at large, but also for the rather confined area of machine learning. Attempts like that of Blank [4] appear to suffer seriously from categorical mis-attributions. Analogical thinking does not take place on the level of single neurons.
We didn’t discuss alternative models here (so far; a small extension is planned). The main reasons are, first, that it would be an almost endless job, and second, that Hofstadter already did it, and as a result of his investigation he dismissed all the alternative approaches (from authors like Gentner, Holyoak, Thagard). For an overview, Runco’s survey [5] of recent models of creativity, analogical thinking and problem solving provides a good starting point. Of course, many authors point in roughly the same direction as we did here, but mostly the proposals are circular, not helpful because the problematic is just replaced by another one (e.g. the infamous and completely unusable “divergent thinking”), or can’t be implemented for other reasons. Holyoak and Thagard [6], for instance, claim that a “parallel satisfaction of the constraints of similarity, structure and purpose” is key in analogical thinking. Given our analysis, such statements are nothing but a great mess, mixing modeling, theory, vagueness and fluidity.
For instance, in cognitive psychology, and in the field of artificial intelligence as well, the hypothesis of Structural Mapping (STM) finds a lot of supporters [7]. Hofstadter discusses similar approaches in his book. The STM hypothesis is highly implausible and obviously a leftover of the symbolic approach to Artificial Intelligence, just transposed into more structural regions. The STM hypothesis not only has to be implemented as a whole, it also has to be implemented specifically for each domain. There is no emergence of that capability.
The combination of the extended SOM—interpreted as a dynamic population of growing SOM instances—with the Copycat mechanism indeed appears as a self-sustaining approach into proliferating abstraction and, quite significantly, back from it into application. It will be able to make analogies in any field already at its first encounter with it, even regarding itself, since both the extended SOM and Copycat comprise several mechanisms that may count as precursors of high-level reflexivity.
After this proposal little remains to be said on the technical level. One of the issues that remain to be discussed is the conditions for the possibility of binding internal processes to external references. Here our favorite candidate principle is multi-modality, that is, the joint and inextricable “processing” (in the sense of “getting affected”) of words, images and physical signals alike. In other words, I feel that we have come close to the fulfillment of the ariadnic question of this blog: “Where is the Limit?” …even in its multi-faceted aspects.
A lot of implementation work now has to be performed, eventually accompanied by some philosophical musings about “cognition”, or, more appropriately, the “epistemic condition”. I just would like to invite you to stay tuned for the software publications to come (hopefully in the near future).
1. see also the other chapters about the SOM, SOM-based modeling, and generalized modeling.
2. It is somehow interesting that in the brain of many animals we can find very small groups of neurons, if not even single neurons, that respond to primitive features such as verticality of lines, or the direction of the movement of objects in the visual field.
3. Ludwig Wittgenstein insisted all the time that we can’t know anything about the “inner” representation of “concepts”. It is thus free of any sense and meaning to claim knowledge about the inner state of oneself as well as that of others. Wilhelm Vossenkuhl introduces and explains the Wittgensteinian “grammatical” solipsism carefully and in a very nice way [8]. The only thing we can know about inner states is that we use certain labels for them, and the only meaning of emotions is that we report them in certain ways. In other terms, the only thing that is important is the ability to distinguish one’s feelings. This, however, is easy to accomplish for SOM-based systems, as we have demonstrated here and elsewhere in this collection of essays.
4. Don’t miss Timo Honkela’s webpage, where one can find a lot of gems related to SOMs! The only puzzling issue about all the work done in Helsinki is that the people there constantly and pervasively misunderstand the SOM per se as a modeling tool. Despite their ingenuity, they completely neglect the issues of data transformation, feature selection, validation and data experimentation, which all have to be integrated to achieve a model (see our discussion here); for a recent example see here, or the cited papers about the WebSom project.
• [1] Timo Honkela, Samuel Kaski, Krista Lagus, Teuvo Kohonen (1997). WEBSOM – Self-Organizing Maps of Document Collections. Neurocomputing, 21: 101-117.
• [2] Krista Lagus, Samuel Kaski, Teuvo Kohonen (2004). Mining massive document collections by the WEBSOM method. Information Sciences, 163(1-3): 135-156. DOI: 10.1016/j.ins.2003.03.017
• [3] Klaus Wassermann (2010). Nodes, Streams and Symbionts: Working with the Associativity of Virtual Textures. The 6th European Meeting of the Society for Literature, Science, and the Arts, Riga, 15-19 June, 2010. available online.
• [4] Douglas S. Blank, Implicit Analogy-Making: A Connectionist Exploration. Indiana University Computer Science Department. available online.
• [5] Mark A. Runco, Creativity: Research, Development, and Practice. Elsevier 2007.
• [6] Keith J. Holyoak, Paul Thagard, Mental Leaps: Analogy in Creative Thought. MIT Press, Cambridge 1995.
• [7] John F. Sowa, Arun K. Majumdar (2003), Analogical Reasoning. in: A. Aldo, W. Lex, & B. Ganter (eds.), “Conceptual Structures for Knowledge Creation and Communication,” Proc.Intl.Conf.Conceptual Structures, Dresden, Germany, July 2003. LNAI 2746, Springer New York 2003. pp. 16-36. available online.
• [8] Wilhelm Vossenkuhl. Solipsismus und Sprachkritik. Beiträge zu Wittgenstein. Parerga, Berlin 2009.
|
465e372c52c5f24b | Quantum Mechanics: Hydrogen Atom
By Dragica Vasileska1, Gerhard Klimeck2
1. Arizona State University 2. Purdue University
The solution of the Schrödinger equation (wave equations) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). Although the resulting energy eigenfunctions (the "orbitals") are not necessarily isotropic themselves, their dependence on the angular coordinates follows completely generally from this isotropy of the underlying potential: The eigenstates of the Hamiltonian (= energy eigenstates) can be chosen as simultaneous eigenstates of the angular momentum operator. This corresponds to the fact that angular momentum is conserved in the orbital motion of the electron around the nucleus. Therefore, the energy eigenstates may be classified by two angular momentum quantum numbers, l and m (integer numbers). The "angular momentum" quantum number l = 0, 1, 2, ... determines the magnitude of the angular momentum. The "magnetic" quantum number m = −l, .., +l determines the projection of the angular momentum on the (arbitrarily chosen) z-axis.
In addition to mathematical expressions for the total angular momentum and the angular momentum projection of wavefunctions, an expression for the radial dependence of the wave functions must be found. It is only here that the details of the 1/r Coulomb potential enter (leading to Laguerre polynomials in r). This leads to a third quantum number, the principal quantum number n = 1, 2, 3, ... The principal quantum number in hydrogen is related to the atom's total energy.
Note that the maximum value of the angular momentum quantum number is limited by the principal quantum number: it can run only up to n − 1, i.e. l = 0, 1, ..., n − 1.
Due to angular momentum conservation, states of the same l but different m have the same energy (this holds for all problems with rotational symmetry). In addition, for the hydrogen atom, states of the same n but different l are also degenerate (i.e. they have the same energy). However, this is a specific property of hydrogen and is no longer true for more complicated atoms which have a (effective) potential differing from the form 1/r (due to the presence of the inner electrons shielding the nucleus potential).
Taking into account the spin of the electron adds a last quantum number, the projection of the electron's spin angular momentum along the z axis, which can take on two values. Therefore, any eigenstate of the electron in the hydrogen atom is described fully by four quantum numbers. According to the usual rules of quantum mechanics, the actual state of the electron may be any superposition of these states. This explains also why the choice of z-axis for the directional quantization of the angular momentum vector is immaterial: An orbital of given l and m' obtained for another preferred axis z' can always be represented as a suitable superposition of the various states of different m (but same l) that have been obtained for z.
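A small sketch that enumerates the four quantum numbers as described above and counts the resulting states per shell:

```python
# Enumerating (n, l, m, s) per the rules above: l = 0..n-1, m = -l..+l,
# two spin projections. The counts reproduce the shell capacities 2n^2.
for n in range(1, 5):
    states = [(n, l, m, s)
              for l in range(n)                 # l = 0 .. n-1
              for m in range(-l, l + 1)         # m = -l .. +l
              for s in (-0.5, 0.5)]             # two spin projections
    print(n, len(states))                       # 1 2, 2 8, 3 18, 4 32
```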
Cite this work
Researchers should cite this work as follows:
• www.eas.asu.edu/~vasilesk
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Hydrogen Atom," http://nanohub.org/resources/4993.
|
4972c7c28067a77f | Critical Point Theory and Its Applications - download pdf or read online
By Wenming Zou
ISBN-10: 038732965X
ISBN-13: 9780387329659
ISBN-10: 0387329684
ISBN-13: 9780387329680
This book presents some of the latest research in critical point theory, describing methods and presenting the newest applications. Coverage includes extrema, even-valued functionals, weak and double linking, sign-changing solutions, Morse inequalities, and cohomology groups. Applications described include Hamiltonian systems, Schrödinger equations and systems, jumping nonlinearities, elliptic equations and systems, superlinear problems, and beam equations.
Read or Download Critical Point Theory and Its Applications PDF
Similar philosophy: critical thinking books
Time-Critical Targeting: Predictive versus Reactionary, by Lieutenant Colonel Gregory S. Marzolf, USAF – PDF

Targeting has long been a prime concern for our air forces; it took thousand-plane raids in World War II to destroy a factory. The revolutionary gains in precision weapons of the last dozen years have eliminated the requirement for the air-power armada and highlighted new areas of development, particularly a desire to destroy harder, fleeting targets of opportunity.
Amending the Abject Body: Aesthetic Makeovers in Medicine - download pdf or read online
Examines the effects and meanings of the makeover and aesthetic surgery in American popular culture. Feminist theorists have often argued that aesthetic surgeries and body makeovers dehumanize and disempower women patients, whose efforts at self-improvement result in their objectification.
Francisco Javier Lopez Frias's Ética y Deporte en el siglo XXI: Una introducción PDF

This book is intended as a short introduction to the main authors and currents to be found within the field of the ethics of sport. At one point in the text I state that it is "the introduction I never had in Spanish" when I first entered this field. I should clarify that it is not a text focused on the concrete content of the ethics of sport (doping, commercialization, genetic enhancement, questions of sex and identity) but on the various methodologies that have tried to develop an ethical view of this phenomenon.
Extra resources for Critical Point Theory and Its Applications
Sample text
(OCR-garbled variational estimates from the chapter on high-energy solutions omitted; the legible portion follows.)

…Zou [326]. Possibly, it can be proved by other methods such as the degree theory or the contraction mapping principle. …4 has far more extended applications. We would like to leave them to the readers.

Chapter 3: Even Functionals. In this chapter we present some abstract theorems which concern the existence of infinitely many critical points for even functionals. The Palais-Smale type compactness condition is not necessary for the new results. By taking advantage of the abstract theorems, we study the existence of infinitely many large energy solutions for nonlinear Schrödinger equations, and of infinitely many small energy solutions for semilinear elliptic equations with concave and convex nonlinearities.
|
a5e096b78d76b424 | In the Eyring equation (EE),
$$k = \frac{k_\mathrm B T}{h} \exp\left(\frac{-\Delta G_{\mathrm f}}{RT}\right),$$
the units of $k$ are $\mathrm{s^{-1}}$. However, in general rate constants are usually expressed in $\frac{\mathrm{rad}}{\mathrm{s}}$. For instance, in expressions for damped oscillations of the form $\exp[(\mathrm{i} \omega - k)t]$, where $\omega$ is by definition in $\frac{\mathrm{rad}}{\mathrm{s}}$ and $k$ has to bear units providing consistent dimensions.
By expressing in the EE the energy $k_\mathrm B T$ as $\nu h$, where the frequency $\nu$ is in cycles per second $\left(\mathrm{Hz}=\frac{\mathrm{cyc}}{\mathrm{s}} = \mathrm{cps}\right)$, it appears that $k$ is also given in $\mathrm{cps}$ in this equation. The desired units of $\frac{\mathrm{rad}}{\mathrm{s}}$ would be obtained if $h$ were replaced by $\hbar$. However, I have never seen the EE formulated that way. Does anybody have an idea on this matter?
• Related: Rate Constant Units and Eyring Equation – user7951 Dec 30 '15 at 18:37
• I don't believe this question is a duplicate. This question relates to the unit of the rate constant. The other SO question pertains to the order of the reaction and how the dimension of $k$ is accommodated for that. – Ivo Filot Dec 30 '15 at 21:30
• Concur, this is not a duplicate. The problem of $\mathrm{Hz}$ units carrying an implicit "cycles" in the numerator $\left(\mathrm{Hz}=\frac{\mathrm{cyc}}{\mathrm{s}}\neq\mathrm{s}^{-1} \equiv \frac{ \mathrm{rad} }{\mathrm{s}}\right)$ is rife throughout chemistry, spectroscopy in particular. It explains why we use both $h$ and $\hbar = \frac{h}{2 \pi}$, for example. – hBy2Py Jan 3 '16 at 23:08
• slaw, radians are dimensionless units: $\theta$ in radians is the arclength in meters traversed by a point per meter of distance away from the center of rotation about which $\theta$ is applied. So $\mathrm{rad} = \frac{ \mathrm{m}}{\mathrm{m}} = 1$, and thus $\mathrm{rad \over s} = {1 \over \mathrm{s}}$. That said, the units collision in your question stands unaffected. – hBy2Py Jan 3 '16 at 23:23
• See my answer to chemistry.stackexchange.com/questions/10115/… which explains the units among other things. – porphyrin Sep 9 '16 at 12:36
I have looked at the original article by Henry Eyring (J. Chem. Phys. 1935, 3, 107). In that article, Eyring explains the derivation of his famous equation. It basically boils down to the following:
You assume an equilibrium between the initial and transition state and consider that once a species crosses the barrier, it goes to the product state:
$$R \leftrightarrows R^{\dagger} \rightarrow P$$
The rate expression is then simply:
$$k = \nu K$$
where $k$ is the reaction rate constant, $\nu$ the crossing frequency and $K$ the equilibrium constant between the IS and TS.
I will not go into detail about how to calculate $K$, but just remember that $K$ can be obtained by calculating the quotient of the partition functions of the system in the transition and initial state, respectively. How $\nu$ is calculated, will hopefully answer your question.
Let us assume that in the transition state, the bonds to be formed or broken are weak. In that case, we can model the reaction coordinate (the direction of the reaction) as a loose vibration. The partition function of a vibration is
$$ f = \frac{1}{1 - \exp \left(\frac{-h \nu}{k_{b}T} \right)} $$
Herein, $\nu$ is the frequency of the loose vibration. We assumed that this vibration was very weak (loose). In that case we can Taylor-expand the exponential and truncate the series after the first-order term. This gives us:
$$ f = \frac{1}{1 - 1 + \frac{h \nu}{k_{b}T}} = \frac{k_{b}T}{h \nu} $$
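As a quick numerical sanity check (my own sketch, not part of the original answer), the truncated form $\frac{k_{b}T}{h \nu}$ indeed approaches the exact vibrational partition function as $\frac{h\nu}{k_{b}T} \to 0$:

```python
# Sketch: exact vibrational partition function versus the truncated form,
# with x = h*nu/(k_B*T) treated as a dimensionless parameter.
from math import exp

def f_exact(x):
    return 1.0 / (1.0 - exp(-x))

def f_truncated(x):
    return 1.0 / x

for x in (1.0, 0.1, 0.01, 0.001):
    print(f"x = {x:7.3f}   exact = {f_exact(x):10.3f}   truncated = {f_truncated(x):10.3f}")
```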
Plugging this result back into our original rate expression gives
$$ k = \nu \frac{k_{b}T}{h \nu} K^{\dagger} = \frac{k_{b}T}{h} K^{\dagger} $$
Here, I have put a $\dagger$ after the equilibrium constant to make clear that I have extracted one partition function (that of the loose vibration) out of the equilibrium constant.
From the dimensional analysis of $\frac{k_{b}T}{h}$, you can see that $k$ should have units of $\mathrm{s^{-1}}$ and not something else.
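For a feel of the magnitudes, here is a small sketch evaluating the full expression (the activation free energy of 80 kJ/mol and the temperature are assumed values, purely for illustration):

```python
# Illustrative evaluation of k = (k_B*T/h) * exp(-dG/(R*T)).
# dG and T are assumed values, not taken from the answer above.
from math import exp
from scipy.constants import k as k_B, h, R

T = 298.15   # K
dG = 80.0e3  # J/mol, assumed activation free energy

print(f"prefactor k_B*T/h = {k_B * T / h:.3e} s^-1")
print(f"k = {(k_B * T / h) * exp(-dG / (R * T)):.3e} s^-1")
```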
• $\begingroup$ Except your last sentence is not the case: the units of $h$ are $\mathrm{\frac{J}{Hz}} \equiv \mathrm{\frac{J~s}{cyc}}$, which should provide units of $\mathrm{\frac{cyc}{s}}$ to $k$. $\endgroup$ – hBy2Py Jan 4 '16 at 12:42
• 1
$\begingroup$ In the original paper of Planck (Ann. d. Phys., 1901, 4, 553), the units for $h$ are given as $erg \cdot s$, which in S.I. would be $J \cdot s$. Since then, I believe we agree upon the dimensionality of this constant. For instance, IUPAC also defines it as $J \cdot s$. Where do these cycles originate from? $\endgroup$ – Ivo Filot Jan 4 '16 at 14:24
$\begingroup$ It originates in the difference between angular and cyclic frequency. A wave with cyclic frequency $f = 1~\mathrm{\frac{cyc}{s}} = 1~\mathrm{Hz}$ has an angular frequency of $\omega = 2\pi~\mathrm{\frac{rad}{s}}$. 'Cycles' are generally taken as dimensionless, and radians are strictly dimensionless; thus we end up with two sets of units that, propagated incautiously, are both written as "$\mathrm{s}^{-1}$" but that differ numerically by a factor of $2\pi$. $\endgroup$ – hBy2Py Jan 4 '16 at 14:37
• 1
$\begingroup$ Sure. That I understand. But neither Eyring nor Planck actually mention these in their publications. To be honest, I also never have heard of this during my own education and this (the cycles part) is also not mentioned in the course books that I used. I also looked in the books of Pauling (Introduction to Quantum Mechanics) and the one of Glasstone & Eyring (The Theory of Rate Processes); it is also not mentioned in these. What am I missing? $\endgroup$ – Ivo Filot Jan 4 '16 at 14:43
• 1
$\begingroup$ For going to the effort of tracking down Eyring's original paper and working through a bunch of the math: BOUNTY! $\endgroup$ – hBy2Py Jan 11 '16 at 16:28
The Eyring equation is numerically correct, despite the apparent units problem.
To understand the origin of the problem, one must go all the way back to the underlying statistical and quantum mechanics, since Eyring treated the motion across the transition state as being effectively a translation (J Chem Phys 3: 107, 1935, p109, emphasis added):
The activated state is because of its definition always a saddle point with positive curvature in all degrees of freedom except the one which corresponds to crossing the barrier for which it is of course negative. ... A configuration of atoms corresponding to the activated state thus has all the properties of a stable compound except in the normal mode corresponding to decomposition and this mode because of the small curvature can be treated statistically as a translational degree of freedom.
The starting point is the Hamiltonian for a particle in a box:
$$ H = - {\hbar^2 \over 2m}{d^2 \over dx^2} \tag{1} $$
Additional to Eq. $\left(1\right)$, of course, is that the particle is confined by an infinite potential to the domain $x=\left(0,L\right)$. After a standard undergraduate physical chemistry derivation, the wavefunctions $\Psi_n\!\left(x\right)$ of the time-independent Schrödinger equation $H\Psi_n=E_n\Psi_n$ are:
$$ \Psi_n\!\left(x\right) = \sqrt{2\over L}\sin{\left(n\pi {x\over L}\right)}\tag{2} $$
Differentiating Eq. $\left(2\right)$ twice, substituting into the Schrödinger equation, and comparing the result to $E_n\Psi_n$ yields the following energy levels for the particle:
$$ E_n = {\hbar^2 n^2 \pi^2 \over 2mL^2} \tag{3} $$
Eq. $\left(3\right)$ can be converted to the more commonly used form, that of Eq. $\left(53\right)$ of Salzman, by a simple substitution of $\hbar = {h \over 2\pi}$:
$$ E_n = {\hbar^2 n^2 \pi^2 \over 2mL^2} = {h^2 \over 4\pi^2}{n^2\pi^2 \over 2mL^2} = {h^2n^2 \over 8mL^2}\tag{4} $$
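One can confirm symbolically (a sketch assuming sympy is available, not part of the original answer) that Eqs. $\left(3\right)$ and $\left(4\right)$ are the same quantity once $\hbar = h/2\pi$ is substituted:

```python
# Symbolic check that Eq. (3) and Eq. (4) agree under hbar = h/(2*pi).
import sympy as sp

h, n, m, L = sp.symbols('h n m L', positive=True)
hbar = h / (2 * sp.pi)

E_eq3 = hbar**2 * n**2 * sp.pi**2 / (2 * m * L**2)  # Eq. (3)
E_eq4 = h**2 * n**2 / (8 * m * L**2)                # Eq. (4)

print(sp.simplify(E_eq3 - E_eq4))  # prints 0: the forms are identical
```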
This is where the units problem in the Eyring equation originates. The factor of $\pi^2$ in the numerator derives from the form of $\Psi_n$ and $H$, where the Hamiltonian requires twice-differentiating a sine function with the factor $n\pi$ in the argument. This $n\pi$ is a fully unitless scaling factor for the non-dimensional position $x/L$ that is needed for $\Psi_n\!\left(0\right) = \Psi_n\!\left(L\right) = 0$ to hold, as required by the particle-in-a-box problem definition and the mathematical properties of the sine and cosine functions. I assume a key motivation for performing the transformation of Eq. $\left(4\right)$ is cosmetic, as it removes an apparently superfluous factor of $\pi^2$. But, it admixes into the overall expression the $4\pi^2 \rightarrow \left({2\pi\ \mathrm{rad} / \mathrm{cyc}}\right)^2$ factor in the denominator that is required to maintain the correct units downstream, obfuscating the dimensionality.
The next step is to obtain the translational partition function $q_\mathrm{t}$ which, per Eqs. $\left(54\right)$ and $\left(55\right)$ of Salzman, is:
$$ q_\mathrm{t} = \sum_{n\,=\,1}^\infty{\exp\!\left[{-{1\over k_\mathrm{B}T}{\hbar^2n^2\pi^2 \over 2mL^2}}\right]} \left\{= \sum_{n\,=\,1}^\infty{\exp\!\left[{-{1\over k_\mathrm{B}T}{h^2n^2\over 8mL^2}}\right]}\right\} \tag{5} $$
$$ q_\mathrm{t} \approx \int_0^\infty{\exp\!\left[-{1\over k_\mathrm{B}T}{\hbar^2n^2\pi^2 \over 2mL^2}\right] dn} \left\{= \int_0^\infty{\exp\!\left[-{1\over k_\mathrm{B}T}{h^2n^2\over 8mL^2}\right] dn}\right\} \tag{6} $$
In the above equations, I have provided the results for $E_n$ of Eq. $\left(3\right)$ first, with the final results using the $E_n$ of Eq. $\left(4\right)$ following it in curly brackets. I will continue with this convention below as needed.
Per, e.g., Wolfram Alpha, the general form of the Gaussian integrals of Eq. $\left(6\right)$ is:
$$ \int_0^\infty{e^{-ax^2}dx} = \frac{1}{2}\sqrt{\pi \over a} $$
$$ q_\mathrm{t} \approx {L\over \hbar} \sqrt{m k_\mathrm{B}T \over 2\pi} \left\{= {L\over h}\sqrt{2\pi m k_\mathrm{B}T}\right\} \tag{7} $$
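The same kind of symbolic check (again my own sketch, assuming sympy) confirms that the unbracketed and bracketed forms of Eq. $\left(7\right)$ are equal:

```python
# Check (L/hbar)*sqrt(m*k_B*T/(2*pi)) == (L/h)*sqrt(2*pi*m*k_B*T)
# under hbar = h/(2*pi).
import sympy as sp

L, m, kB, T, h = sp.symbols('L m k_B T h', positive=True)
hbar = h / (2 * sp.pi)

q_unbracketed = (L / hbar) * sp.sqrt(m * kB * T / (2 * sp.pi))
q_bracketed = (L / h) * sp.sqrt(2 * sp.pi * m * kB * T)

print(sp.simplify(q_unbracketed - q_bracketed))  # prints 0
```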
The bracketed expression in Eq. $\left(7\right)$ matches exactly the expression given by Eyring on p108. Using manipulations I have not taken the time to retrace in detail, Eyring on p110 asserts the following expression for the prefactor of his now-eponymous equation:
$$ \left({\sqrt{2\pi m^* k_\mathrm{B}T} \over h}\right)\cdot {\overline{p}\over m^*} = {k_\mathrm{B}T \over h} \tag{8} $$
This is made possible by derivation of the following expression (p110 and preceding):
$$ {\overline{p} \over m^*} = {k_\mathrm{B}T \over \sqrt{2\pi m^* k_\mathrm{B}T}} \tag{9} $$
The expression in parentheses on the LHS of Eq. $\left(8\right)$ was apparently obtained by substituting $m^*$ for $m$ and setting $L=1$ (p108) in the bracketed expression of Eq. $\left(7\right)$:
If we set the length $l_i\!=\!1$ we have the number of unit cells per cm of length, a quantity frequently used in what follows.
Using instead the unbracketed expression of Eq. $\left(7\right)$, transformed as in Eyring, with Eqs. $\left(8\right)$ and $\left(9\right)$ gives:
$$ {1\over\hbar}\sqrt{m^*k_\mathrm{B}T\over 2\pi}\cdot{\overline{p}\over m^*} = {1\over\hbar}\sqrt{m^*k_\mathrm{B}T\over 2\pi}\cdot{k_\mathrm{B}T \over \sqrt{2\pi m^* k_\mathrm{B}T}} = {k_\mathrm{B}T \over 2\pi\hbar} \tag{10} $$
Thus, the result of Eq. $\left(10\right)$ is numerically equal to the prefactor reported by Eyring. So, happily (and unsurprisingly!), none of the work performed with it in the last eight decades needs to be revisited. However, the cycles units discrepancy in Eyring's version arises because, as noted above, the factor of $2\pi$ appearing in the denominator of Eq. $\left(10\right)$ is a consequence of differentiation of the trigonometric translational wavefunction $\Psi_n$ underpinning the prefactor derivation, and not present as a conversion factor of $2\pi\ \mathrm{rad}\over\mathrm{cyc}$.
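Numerically the agreement is exact, since $h = 2\pi\hbar$ by definition; a trivial check with CODATA values (scipy assumed, temperature arbitrary):

```python
# Confirm k_B*T/(2*pi*hbar) equals k_B*T/h to machine precision.
from math import pi, isclose
from scipy.constants import k as k_B, h, hbar

T = 298.15  # K, arbitrary
print(isclose(k_B * T / (2 * pi * hbar), k_B * T / h, rel_tol=1e-12))  # True
```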
In the derivation of the Eyring equation (EE) quoted above by Ivo Filot, consistently substitute $\nu$ with $\omega = 2\pi\nu$ and $h$ with $\hbar$ wherever these two quantities appear. This will do no harm to the entire reasoning, but in the final equation the prefactor would be $$\frac{k_{b}T}{\hbar}$$ and $k$ would be expressed in $\mathrm{rad/s}$, as it should be. I dare to claim that this would be the correct answer to the problem. I do not have enough courage to submit such a note to a peer-reviewed chemical journal indexed by ISI.
• 1
$\begingroup$ This unfortunately doesn't work numerically: $h\nu = \hbar\omega$; but $h\nu \neq \hbar\left(2\pi \omega\right)$. If one were to substitute $h=2\pi \hbar$, the final prefactor would be $k_\mathrm{B}T \over 2\pi \hbar$. My hunch is that there is a factor of $2\pi$ in the denominator that arises somewhere in the manipulation of the translational partition function, that was numerically (but not dimensionally) sensible to combine with $\hbar$ to yield the $h$ of the EE. My stat mech isn't strong enough to retrace the derivation, though. $\endgroup$ – hBy2Py Jan 9 '16 at 14:27
• $\begingroup$ Thanks for joining Chem.SE to post an answer, though! :-) $\endgroup$ – hBy2Py Jan 9 '16 at 14:29
$\begingroup$ It's OK, @Brian. If in the standard EE $h$ is substituted by $2\pi\hbar$ then the rate constant in the l.h.s. will keep its units, i.e., Hz or cps, unchanged. Then, multiplying both sides of the EE by $2\pi$ will remove the offending term $2\pi$ in the denominator of the prefactor while $k$ will be converted to $k' = 2\pi k$, where $k'$ will be in rad/s. $\endgroup$ – user24239 Jan 9 '16 at 19:03
• $\begingroup$ I have converted your other answer to a comment here, please have a look at the help center for more information. $\endgroup$ – Martin - マーチン Jan 9 '16 at 19:11
• $\begingroup$ Thanks for giving it a shot, user24239, but this answer just notes what one might do to alter the units of Eyring's expression. It doesn't reach into Eyring's derivation, such as calculation of the partition function &c., to explain the presence of $h$ and how one might resolve the apparent units conflict. $\endgroup$ – hBy2Py Jan 10 '16 at 18:21
|
3b1ad172a01c2006 | Talk:Philosophy of the physical sciences
From Hindupedia, the Hindu Encyclopedia
Philosophy of the Physical Sciences
Dr N Mukunda
The various philosophical traditions of the world form an important part of the intellectual and cultural achievements of the civilizations which produced them. Typically, their roots go back thousands of years—as in the cases of India and Greece. There is in them much poetic imagery and logical and deep thinking, as well as a sizeable speculative component. In contrast, modern science as we know it developed barely four hundred years ago, in the seventeenth century, arising in the main out of the combined efforts of Copernicus, Kepler, Galileo, and Newton. It was only then that the importance of controlled experiments and careful and systematic quantitative study of natural phenomena was clearly recognized. However, in spite of these great differences in age, at least in the Western tradition the interactions between modern physical science and philosophy have been deep and profound.
I am not a professional philosopher. I have only been attracted to some philosophical questions, and been impressed by certain philosophical systems, as a result of a study of physics. Thus the content of this article may sometimes reveal a sense of naivety as regards formal philosophical matters, schools of thought, traditions, and the like. Nevertheless, I hope that what follows will be of interest to the readers of this journal, most of whom may not be professional scientists but would still have a lively interest in these matters.
It may not be out of place to mention here some contrasting attitudes to the possible roles and value of philosophical thinking that are evident in the developments in physics over the past century. As a consequence of the European continental tradition, the general writings of the two discoverers of quantum mechanics—Werner Heisenberg from Germany and Erwin Schrödinger from Austria—show great familiarity with and interest in various philosophical systems of thought, from the Greeks onwards. While the writings of Niels Bohr and Albert Einstein also often have a philosophical bent, their references to formal systems of philosophy tend to be fewer, but nevertheless important. In contrast, when the focus of work in the new physics shifted from Europe to the US around the middle of the twentieth century, this regard for general philosophical thinking among the leading professional physicists does seem to have weakened. Typical statements of Richard Feynman and Steven Weinberg, for instance, display a certain degree of disdain, or certainly a lack of sympathy, for the value of philosophical thinking in the physical sciences. In any case, in the present account I assume that there is value in looking at the growth of modern physical science from a ‘philosophical point of view’, though it may require some degree of maturity as well as sympathy to adopt this attitude.
We may say for our present purposes that philosophy of science is generally concerned with the nature of knowledge, the way we acquire it, the meaning of understanding, and the evolution of concepts, all in the context of the physical sciences. It may in addition be ultimately concerned with an appreciation of our place in nature. Philosophy of science deals with the understanding of natural phenomena and how this understanding is achieved, with the general features common to the various branches of science, and with the interdependence of these branches. It is more interested in the overall pattern of natural laws than in the details of any particular area of science.
Our aim will be to come up to the modern era in physics, and to see what it has taught us with regard to questions of a philosophical nature. Along the way we shall briefly review some historical developments and ways of thinking or schools of thought, both in philosophy and in physical science. We will consider how concepts are created, how they grow, and how they have sometimes to be greatly modified or even abandoned. Naturally, developments in physics will be covered in slightly greater detail than those in formal philosophy.
Rationalism and Empiricism
In our account of the beginnings of science and philosophical thinking we go back to Greek times. The major creative period, lasting about four hundred years, began with Thales of Miletus (c. 624–c. 546 BCE) and included, among many renowned thinkers, Pythagoras (c. 580–c. 500 BCE), Anaximander (610–c. 545 BCE), Democritus (c. 460–c. 370 BCE), Leucippus (fl. 5th cent. BCE), Plato (427–347 BCE), Aristotle (384–322 BCE), and Euclid (fl. c. 300 BCE). In the early period, with Thales, there was a strong impulse towards, as Benjamin Farrington puts it, ‘a new commonsense way of looking at the world of things … the whole point of which is that it gathers together into a coherent picture a number of observed facts without letting Marduk [the Babylonian Creator] in.’ The attempt was to deal with nature on its own, not bringing in mystical or mythical leanings. To quote from Heisenberg: ‘The strongest impulse had come from the immediate reality of the world in which we live and which we perceive by our senses. This reality was full of life, and there was no good reason to stress the distinction between matter and mind or between body and soul.’
Thales was familiar with the knowledge of geometry developed by the Egyptians, the basic facts of static electricity, and the magnetic properties of lodestone. Later, Democritus and Leucippus propounded the atomic concept of matter, not in a casual manner but based on careful reasoning. However, it goes without saying that philosophical thinking in these early times had a considerable speculative content, and there were others such as Plato and Aristotle who later strongly opposed the atomic hypothesis. This should come as no surprise at all, since as late as the end of the nineteenth century there were influential figures—Ernst Mach and Wilhelm Ostwald—who were still opposed to the idea of atoms. This idea finally triumphed only thanks to the heroic efforts of Ludwig Boltzmann, and Einstein’s work on Brownian movement.
The knowledge of geometry brought by the Greeks from Egypt was perfected and presented in an axiomatic form by Euclid of Alexandria around 300 BCE. The fact that this subject could be presented as a deductive system—a large number of consequences or theorems following logically from a very few ‘self-evident’ axioms or ‘obvious’ truths—must have made a deep impression on the Greek mind. It led in course of time to the idea that the behaviour and laws of nature could be derived from pure reason, without the help of direct inputs from experience. This was the so-called rationalist philosophy of science, which lay in stark contrast to the initial empiricist approach of Thales and Democritus. Plato held that ‘knowledge of Nature does not require observation and is attainable through reason alone’. Before Plato, Pythagoras too espoused this point of view, other illustrious followers being Aristotle and, in much later times, René Descartes, Wilhelm Leibniz, and Benedict de Spinoza. One may say that this rationalist philosophy accords a privileged position to human beings in the scheme of things.
The opposite—empiricist—point of view holds that knowledge comes ultimately from experience of phenomena and not from reason. As we saw, this was the attitude of both Thales and Democritus; and in later centuries it was revived by Francis Bacon and carried forward by John Locke, George Berkeley, and David Hume as a reaction to the rationalist view on the European continent. We shall return to some of these contrasting philosophies later, only noting now that empiricism goes with a more modest attitude towards our place in nature.
From Galileo and Newton to Kantian Philosophy
Modern science emerged in Europe during the Renaissance—the reawakening of classical ideals in arts, literature, and philosophy during the fourteenth to seventeenth centuries, brought about by a combination of social, political, and religious factors. This is not the place to go into this crucial advance in any detail, but we note that it occurred against the background of a liberating intellectual and philosophical atmosphere to which many—including Descartes, Leibniz, and Spinoza—contributed.
Empirical Advances • Nicolaus Copernicus initiated the movement away from a human-centred view of nature with his heliocentric model of the solar system, and Francis Bacon showed the way to freedom from reason alone as the source of all knowledge. Indeed, Bacon said of Aristotle: ‘He did not consult experience as he should have done … but having first determined the question according to his will, he then resorts to experience, and … leads her about like a captive in a procession.’ Copernicus’s work, as well as Kepler’s discovery of the three laws of planetary motion during the years 1609–19, was but preparation for what was to come in the work of Galileo and Newton.
Galileo, rightly regarded as the founder of modern science, not only discovered the law of inertia in mechanics, the kinematic description of motion, and the law of free fall, but also stressed the importance of performing controlled experiments, of quantitative measurement, and of the use of mathematics in expressing experimental results. He stated this last point with particular emphasis, saying about the ‘book of nature’: ‘It cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language.’
It was Isaac Newton, born the year Galileo died (at least by one calendar), who completed the work initiated by Galileo and his other illustrious predecessors, and paved the way for the systematic scientific investigation of physical phenomena over the succeeding centuries. We can say that without Newton’s crowning achievements, this tradition— the Galilean-Newtonian world view—would not have been securely established. Speaking of the importance of what Galileo and Newton achieved, Max Born says: ‘The distinctive quality of these great thinkers was their ability to free themselves from the metaphysical traditions of their time and to express the results of observations and experiments in a new mathematical language regardless of any philosophical preconceptions.’
Scientific Method • Newton expressed clearly his views on the independent and absolute natures of space and time, stated his three laws of motion for material bodies as axioms, enunciated his law of universal gravitation, and established mechanics as a deductive system. His whole approach and accomplishments made explicit and clear all the steps in the chain of scientific work: observation and experimental data → analysis using mathematics → discovery and enunciation of fundamental laws → further mathematical deduction → predictions to be tested by new experiments. As he put it: ‘To derive two or three general Principles of Motion from Phaenomena, and afterwards to tell us how the Properties and Actions of all corporeal Things follow from those manifest Principles, would be a very great step in Philosophy, though the causes of those Principles were not yet discover’d.’
Absolute Space and Time • For the purpose of developing mechanics, Newton invented the calculus. In his presentation he adopted the Greek attitude to geometry and the style of Euclid. Thus he converted knowledge obtained inductively from (refined!) experience—extension from the particular to the general— into a deductive style of presentation. From his laws of motion and universal gravitation, all the empirical laws of Kepler and Galileo followed as logical mathematical consequences. His clear statements about the natures of space and time were of critical importance at this juncture. They mark an important phase in our understanding of these key components of nature, and as we emphasize later, this understanding is never final but develops continually ‘in time’ as we gather more and more experience. Who better than Einstein to express all this: ‘It required a severe struggle [for Newton] to arrive at the concept of independent and absolute space, indispensable for the development of theory. Newton’s decision was, in the contemporary state of science, the only possible one, and particularly the only fruitful one. But the subsequent development of the problems, proceeding in a roundabout way which no one could then possibly foresee, has shown that the resistance of Leibniz and Huygens, intuitively well-founded but supported by inadequate arguments, was actually justified. … It has required no less strenuous exertions subsequently to overcome this concept [of absolute space].’
Theory and Experiment • In Newton’s work we see a confluence of the inductive and deductive methods, each playing its due role. There was a unification of celestial and terrestrial gravitational phenomena, and many previously intractable problems became amenable to analysis and understanding. At one point he went so far as to claim that he made no hypotheses—‘Hypotheses non fingo’—hinting at pure empiricism; but this actually shows that modern science was still young. As Einstein aptly said: ‘The more primitive the status of science is the more readily can the scientist live under the illusion that he is a pure empiricist.’ Today the level of sophistication of the physical sciences is such that every worthwhile experiment is heavily dependent on previous and current theory for its motivations, goals, methods, and analysis.
Over the course of the eighteenth century, the Galilean-Newtonian approach to physical science was amazingly successful. It was applied to problems of celestial mechanics or astronomy, fluid dynamics, and elastic media among others. A distinguished line of mathematical physicists—Leonhard Euler, Joseph Lagrange, Pierre Simon de Laplace, and many others—took part in this endeavour. At one point Lagrange complained that, after Newton, there was nothing left to be discovered! Towards the end of the century, the laws of static electricity and magnetism also fell into the Galilean-Newtonian pattern.
Thought as a Synthetic A Priori • Around this time, the philosopher of the Enlightenment, Immanuel Kant, was so impressed by these successes of the Galilean-Newtonian approach that he created a philosophical system to explain or justify them. We mentioned earlier the contrasting rationalist and empiricist schools of philosophy. Kant tried to bring them together and offered an explanation of the triumphs of Galilean-Newtonian science along the following lines. He distinguished between a priori and a posteriori forms of knowledge—respectively in advance of, and as a result of, experience of nature— and between two kinds of statements: the analytic, which are empty (such as definitions and statements of a logical nature), and the synthetic, which had nontrivial content and could in principle be false. He saw two paths to knowledge about nature—that which is a priori, and that which results from experience. Some of the basic physical ideas underlying Galilean-Newtonian physics, which were actually the results of long human experience and experiment, were regarded by him as synthetic a priori principles. Thus they were claimed to be available to us innately—as a result, one might say, of pure reason—and were necessarily valid and obeyed by natural phenomena. Some of these synthetic a priori principles were the separate and absolute natures of space and time, as expressed by Newton; the validity of Euclidean geometry for space; the law of causality; and later on even the permanence of matter and the law of conservation of mass. In effect, Kant took the knowledge of physical phenomena available in his time and made some of it necessarily and inevitably true and binding on nature. These synthetic a priori principles were present in our minds before any experience of nature; they were thought of as preconditions for, rather than results of, science.
Kant’s attempt was made about two centuries ago, and today it is clear that it was tied to his age and to the science of his time. Schrödinger characterizes well the impulse that lay behind Kant’s attempt: ‘One is very easily deceived into regarding an acquired habit of thought as a peremptory postulate imposed by our mind on any theory of the physical world.’ We will shortly look at some of the ways in which physical science has gone beyond Kant’s framework, and will describe a fascinating new way of understanding the origin of synthetic a priori principles of thought.
Physical Science in the Nineteenth and Twentieth Centuries
Fields as Distinct from Matter • At the start of the nineteenth century the fields of optics, electricity, and magnetism were separate from one another and from mechanics. Chemistry was a distinct discipline. But over the century many advances were made, which we can only briefly describe here. An early step forward was in the understanding of the nature of light. Thomas Young’s experiments on interference brought the wave theory of light back into favour, as against Newton’s corpuscular ideas. This was carried forward and firmly established by Augustin Fresnel. Then, as a result of fundamental experimental discoveries by Hans Oersted, André Ampère, and Michael Faraday, the concepts of time-dependent electric and magnetic fields came into being. There were things in nature in addition to and distinct from matter.
Meanwhile, celestial mechanics continued to record stunning successes. Perhaps the most striking example was the prediction by both John Adams and Urbain Le Verrier, based on Newtonian mechanics and gravitation, of the existence of a new planet, Neptune, to account for the observed discrepancies in the motion of Uranus. In 1846 it was found exactly where the astronomers were told to look. (However, a later similar attempt to trace discrepancies in the motion of Mercury to a perturbing planet Vulcan was unsuccessful. The answer came from an entirely unexpected direction—general relativity.)
Electromagnetism and Light • After Faraday’s powerful intuition had led to the idea of electric and magnetic fields, James Maxwell put all the known laws in the subject of electricity and magnetism into a coherent mathematical form. He then found an important discrepancy, saw the way to correct it, and was thus led to his comprehensive classical unified theory of electromagnetic phenomena. A prediction of this theory was the possibility of self-supporting electromagnetic waves whose speed when calculated turned out to be exactly the known speed of light. Then Maxwell identified light with these waves, and optics became a part of electromagnetism. During this period, following Fresnel’s work, it was believed that the propagation of light needed a material medium, the so-called luminiferous ether, and this concept was taken over by Maxwell as well.
Non-Euclidean Geometry • In the area of mathematics, the subject of geometry witnessed a major advance. We saw that Kant in his philosophy had made Euclidean geometry an inevitable or inescapable property of physical space—it was a synthetic a priori principle. Within mathematics, for centuries the status of one of Euclid’s postulates—the fifth one, the parallel postulate (that there is exactly one parallel to a given line through a given point)— had been repeatedly studied: was it logically independent of the other postulates or a consequence of them? During the first half of the nineteenth century, three mathematicians—Karl Gauss, Nikolai Lobachevsky, and János Bolyai—independently showed that it was a logically independent statement. It could be altered, allowing one to create logically consistent alternatives to Euclidean geometry. Thus was born within mathematics the concept of non-Euclidean geometry, which, as we will soon see, was to enter physical science just under a century later.
Statistical Physics • Over the latter half of the nineteenth century, statistical physics and statistical mechanics became established as foundations of thermodynamics. Thus by the century’s end the principal components of the physicist’s view of the world were Newton’s mechanics, Maxwell’s electromagnetism, and statistical ideas and thermodynamics.
Relativity • The important departures from the Kantian picture of physical science—from the framework Kant developed to justify the successes of Galilean-Newtonian ideas—came one by one with the revolutionary theories of twentieth-century physics. First came the special theory of relativity, the resolution of a clash between Newton’s mechanics and Maxwell’s electromagnetism. It turned out that Newton’s views of separate and absolute space and time, and the Galilean transformations that go with them, were incompatible with Maxwell’s electromagnetic equations. These equations led to a profoundly different view of the properties of space and time. What special relativity achieved was to make clear these properties, show that there was no need for ether as a carrier of electromagnetic waves, and then amend Newton’s mechanics of material particles to make it consistent with electromagnetism. The earlier separateness and individual absoluteness of space and time—included among Kant’s synthetic a priori principles—gave way to a unified view in which only a combined space-time was common to and shared by all observers of natural phenomena. However, each observer could choose how he or she would split space-time in a physically meaningful way into separate space and time. The earlier absoluteness of the concept of simultaneity was lost, and now varied from observer to observer. For each observer, though, space continued to obey the laws of Euclidean geometry. Special relativity took one step beyond the Kantian framework—now only a combined law of conservation of matter and energy was valid, not separate laws for matter and for energy. (To be concluded)
The other major twentieth-century development in physics was the discovery of the quantum nature of phenomena and the formulation of quantum theory. In many ways quantum theory is more profound in its implications than the relativity theories. Quantum theory arose out of a clash between Maxwell’s electromagnetism and the principles of statistical physics, which, as we saw, provide the foundation for thermodynamics. We can only try to convey why quantum theory has had such a profound influence on the philosophy of science, and cannot venture into much technical detail. The view of the nature of light has swung back towards Newton’s corpuscular conception—with important and subtle differences—expressed in the concept of the photon. As for the mechanics of matter, the Galilean-Newtonian picture and description of motion has given way to a much more mathematically elaborate and subtle complex of ideas, which goes by the name of quantum mechanics. Material particles no longer travel along well-defined paths or trajectories in space in the course of time. Their evolution in time can only be given in the language of probability—that is, all the predictive statements of quantum mechanics are probabilistic in nature. The quantitative description of physical properties of systems undergoes two important changes in quantum mechanics: on the one hand, many physical variables show a quantization of the values they can possess—thus, typically, energies are restricted to a discrete set of values rather than a continuum. On the other hand, the physical variables of a given system have such mathematical properties, or are of such nature, that we cannot imagine that each of them always possesses some numerical value which, if we so wish, can be revealed by a measurement. According to Bohr, we can never speak of a quantum system as having such and such a value for such and such a physical property on its own, independent of our measurement of it. And with a pair of so-called incompatible properties, an effort to measure one of them automatically precludes any effort to simultaneously measure the other as well.
We have to learn to use language with much more caution or circumspection when speaking of quantum phenomena than was the case earlier. Many classically meaningful and answerable questions become devoid of meaning in the quantum domain. The kind of ‘visualizability’ of physical systems in complete detail which was possible in classical physics is denied by quantum mechanics.
From the perspective of Kantian thinking, quantum mechanics has made us give up strict determinism, substituting a kind of statistical causality for it. On the other hand, it has supplied the basic theoretical concepts for all of chemistry, for atomic, molecular, nuclear, and elementary particle phenomena, and for all processes involving radiation. The old law of the permanence of matter has gone, as it can be converted to radiation, and vice versa. Up to the present time, the agreement of quantum mechanics with experiments has been outstanding—nature does seem to behave, in many situations, in classically unreasonable ways.
The Reinterpretation of Kantian Ideas
It is understandable that when physics advanced into new territories involving the very fast, the very large, and the very small—as judged by everyday standards and experience—some of the Kantian synthetic a priori principles had to be given up. As we said, Kant’s ideas were rooted in the physical science and Galilean-Newtonian tradition of his time; he could not have foreseen the revolutionary developments that were to come later. This much is natural. However, what is remarkable is that the ‘problem’ with his philosophical basis for physical science has been illumined during the mid-twentieth century from a rather unexpected direction—namely, biology and the theory of evolution by natural selection. One might wonder if, apart from having to give up particular synthetic a priori principles as a result of advances in physical science, the very concept of such principles has also to be given up. After all, one might ask how principles supposedly known in advance of experience could necessarily constrain our later experiences. The answer to this question involves a subtle reinterpretation of Kant’s notions, using ideas not available to him. This fascinating development—the work of Konrad Lorenz—leads to a better understanding of the entire situation, and has been eloquently presented by Max Delbrück.
The basic contrast is between the slow evolution of species governed by the force of natural selection, involving innumerable generations and enormous stretches of time; and the relatively short life span of an individual member of the species. In the former process—phylogenesis—those abilities thrown up by random genetic changes which are beneficial to biological survival are retained and refined. The others are discarded. Those retained include the ability to recognize the most important physical features of the world around us at our own scales of length and time, because it is just these scales that are relevant for biological evolution. Thus, gradual evolution of species governed by natural selection develops these useful capacities, and then endows each individual with them at birth. From the point of view of the individual’s development over a single lifetime—ontogenesis—the capacities in question seem to be given readymade at birth, in advance of experience; they seem to be a priori. But this argument shows that from a longer time perspective there is nothing a priori about them, as they are the fruits of experience of the species. In Delbrück’s words: It appears therefore that two kinds of learning are involved in our dealing with the world. One is phylogenetic learning, in the sense that during evolution we have evolved very sophisticated machinery for perceiving and making inferences about a real world. … Collectively and across history, the human species has learned to deal with signals coming from the outside world by constructing a model of it. In other words, whereas in the light of modern understanding of evolutionary processes, we can say the individual approaches perception a priori, this is by no means true when we consider the history of mankind as a whole. What is a priori for individuals is a posteriori for the species. The second kind of learning involved in dealing with the world is ontogenetic learning, namely the lifelong acquisition of cultural, linguistic, and scientific knowledge.
The one added subtle point is that species evolution endows each individual with the capacity to acquire knowledge about the world outside, but not the knowledge itself. This has to be acquired through the experiences of infancy and childhood, and indeed is a lifelong endeavour. The difference between capacity and content is profound.
In this way Kant’s conceptions acquire new meaning. We also learn that the biologically evolved Kantian a prioris can only be expected to work for a limited range of natural phenomena, and our ‘sense of intuition’ is based on this range alone. We should therefore not be surprised if Galilean-Newtonian principles do not extend beyond this limited world to the world of the very fast, very large, or very small. But the truth is that our intuition is so much a part of us that it is very difficult to escape from or transcend it.
Some Important Features of Physical Science
Returning to physical science, there are several important features it has acquired, some more recently than others, with significant philosophical implications. The descriptions and understanding of natural phenomena given by physical science are always developing or evolving, always provisional and never final. Since this is so very important, let me cite several examples which lead one to this sobering point of view. There have been occasions in the past—with Lagrange in the eighteenth century, and William Thomson (Lord Kelvin) at the end of the nineteenth century—when the feeling was expressed that all the laws of physics had been found and nothing remained to be discovered. Our experiences since then have made us much more modest in our claims. We both recognize the existence of limits of validity for every physical theory or body of laws, even for those yet to be discovered; and admit that future experience can always lead to unexpected surprises. In this important sense nature is inexhaustible: we will always be learning from her. The lack of finality of every physical theory in this sense means that we can only continually increase the accuracy of our description of the phenomena of ‘the real world out there’, but can never say we have been able to describe them exactly as they are, or have reached true reality.
Our first example to drive these points home is connected with the Newtonian description of universal gravitation as an instantaneous attraction between any two mass points governed by an inverse square law. Before Newton, the prevailing idea was Descartes’ theory of vortices—all physical actions or influences were by contact alone. Newton’s law was a major change, giving rise to the concept of action at a distance. Privately, Newton himself expressed uneasiness at what seemed an unreasonable aspect of his law—how could material bodies influence one another instantaneously across intervening empty space? But his law worked, its quantitative predictions agreed with experience (at that time!), and with the passage of time the idea of action at a distance became gradually accepted. Even the initial laws of electricity and magnetism—in the static limit—were expressed in such a framework. The return to action by contact via an intervening field came about in the case of gravitation only in 1915 with Einstein’s theory of general relativity.
The next example concerns the nature of light. As we have discussed earlier, the corpuscular viewpoint championed by Newton was replaced by the wave concept after Young’s experiments on interference. After Maxwell’s classical electromagnetism arrived, light was identified with the propagating waves of Maxwell’s theory: now one ‘knew’ what the waves were made of. But when Einstein developed the photon concept in 1905, our understanding moved once more in the direction of the corpuscular viewpoint, involving a subtle combination of wave and particle concepts which can be properly expressed only in the language and imagery of quantum mechanics. At none of the above stages of development could one claim that one had finally understood the real nature and properties of light. It was always a movement towards improved understanding.
Our third example concerns the explanation of the spectrum of the simplest atom in nature, hydrogen. Bohr’s 1913 theory was the first breakthrough; it gave the vital clue to the wealth of data in the field of spectroscopy. Spectral lines corresponded to transitions of electrons between atomic states with various discrete energies. His model for the hydrogen atom was able to explain the spectral lines of the so-called Balmer series, and also several other series. This vital first step fell within the framework of the old quantum theory. A few years later, Arnold Sommerfeld introduced special relativistic corrections to the Bohr model, and was thus able to explain the so-called fine structure in the spectrum. This was then regarded as a triumph of the existing theoretical framework. But after the advent of quantum mechanics in 1925–6, the ‘correct’ understanding of the spectrum of hydrogen was supplied by the Schrödinger equation and its solutions. The framework of physical ideas was completely different from Bohr’s, but the data explained was the same. Then in 1928, after Dirac had found the relativistic wave equation for the electron, the fine structure came out as a straightforward consequence. After this, the Sommerfeld explanation became a fortuitous coincidence, not to be taken seriously anymore. Almost two decades later, as improved experimental techniques and measurements revealed new and finer details of the hydrogen spectrum—the so-called Lamb shift—one had to go beyond the Dirac equation and appeal to the theory of quantum electrodynamics (QED) for an explanation. This turned out to be one of the triumphs of that theory. Clearly at no stage could we have said that we had understood the origin of the lines of the spectrum of hydrogen in complete detail, or that we had the complete and real truth in our possession.
Turning from physics to mathematics, in the field of geometry we have seen a similar evolution, though over a much longer period of time. As we mentioned earlier, only after almost two millennia was it realized that Euclid’s geometry is not the only logically possible system of geometry for space; other non-Euclidean geometries are certainly conceivable and consistent. And after general relativity, the changeable geometry of space-time has become an ingredient in the laws of physics, specifically of gravitation. Today there is talk of the quantum features of geometry, one more step in the continuing effort to understand the natures of space and time.
These examples, and many others, teach us that the problem of what is physically real is a time-dependent one: it always depends on what is known at each epoch in the growth of physical science, and can see dramatic changes at certain points. Concepts like phlogiston and ether seemed essential at certain stages in the history of physics, but were later given up in the light of improved understanding.
The accuracy of observations and measurements and the sophistication of the instruments available for experimental investigation also continually increase, so they too contribute to the transitoriness of physical theories. But it should also be pointed out that at any given time we have trust in certain tested and successful ideas and theories, and keep working with them until we are compelled by new experience to go beyond them; then we modify them or in some cases even abandon them. Thus at the present time we have full confidence that, within their respective domains of validity, Newton’s mechanics, Maxwell’s electromagnetism, and the nonrelativistic quantum mechanics and its later developments can certainly be used.
Mathematics: The Language of Nature
Next we turn to the important role of mathematics in physical science. Galileo’s remark about mathematics being the language of nature has turned out to be true, at least in physical science, to a degree far beyond what anyone might have imagined. In the eighteenth and much of the nineteenth centuries, as the concepts about the physical universe grew in complexity and subtlety, so did the mathematics used to describe them. The same gifted individuals contributed to both disciplines in these periods—Euler, Lagrange, Laplace, Fourier, Gauss, Hamilton, and Jacobi, to name a few. Thereafter, there was to some extent a parting of ways. The relativity and quantum revolutions in the twentieth century exploited mathematical ideas previously and independently developed purely within mathematics. In any event, there has been a steadily increasing role for mathematical ideas in physical science. In one sense this is connected to the reinterpretation of Kantian ideas sketched in the preceding section. As we move away from the domain of normal daily experience and into unfamiliar realms, it is understandable that our intuition often fails us, and then we depend increasingly on the mathematical structure of physical theory for guidance. Furthermore, the accuracy with which effects can be predicted by modern physical theories, and then checked by experiments, is truly staggering. In Eugene Wigner’s view, there seems to be no rational explanation for this to be so.
There are some who regard the body of mathematical truths as an independently existing ‘continent out there’, and the process of mathematical discovery as the result of continual exploration of this continent. However, it is likely that this is a psychological response from some gifted individuals who have made really deep discoveries in mathematics based on a variety of motivations. A more modest and less problematic attitude is to regard mathematics as a human invention, similar to but far more compact and rigorous than language, given that in the first place evolution has equipped us with the capacity to create it. But then the extraordinary degree of detail and verification of physical theories via their predictions—this is what seems difficult to explain, and what Wigner terms a miracle. In Dirac’s view, the reason why the method of mathematical reasoning works so well in physical science is along these lines: ‘This must be ascribed to some mathematical quality in Nature, a quality which the casual observer of Nature would not suspect, but which nevertheless plays an important role in Nature’s scheme.’
Another related point stressed by Dirac should also be mentioned. It turns out that in the long run the deductive method is not suitable for physical science. One cannot base one’s ideas on a fixed, initially stated, and unchanging set of axioms, and then rely on logic to obtain all possible physical consequences. One may adopt this strategy—inspired by Euclid—to a limited extent to grasp the logical structure of a particular set of ideas in a compact way, but one is bound sooner or later to transcend the confines of such a structure. This has been the case, for instance, with Newton’s axiomatic approach to mechanics—witness the changes wrought by special relativity on the one hand, and quantum theory on the other. Such may well be the case with the present highly successful quantum mechanics as well. Turning to Dirac:
The steady progress of physics requires for its theoretical formulation a mathematics that gets continually more advanced. This is only natural and to be expected. What, however, was not expected … was the particular form that the line of advancement of the mathematics would take, namely, it was expected that the mathematics would get more and more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundations and gets more abstract. … It seems likely that this process of increasing abstraction will continue in the future and that advance in physics is to be associated with a continual modification and generalization of the axioms at the base of the mathematics rather than with a logical development of any one mathematical scheme on a fixed foundation.
Looking Back Philosophically
Philosophical insights into and speculations about nature go far back in time; modern science in comparison is very recent. We have followed the growth of physical science from its modern beginnings at the hands of Galileo and Newton, and the impact it had on philosophy in that period. We saw how classical physics seemed to have achieved a kind of completeness at the end of the nineteenth century, after which the relativity and quantum revolutions occurred.
In discussing or evaluating ancient philosophical ideas in the light of knowledge attained much later, a great sense of balance is needed. Such comparisons can easily be misunderstood. On this point, Heisenberg explains:
It may seem at first sight that the Greek philosophers have by some kind of ingenious intuition come to the same or very similar conclusions as we have in modern times only after several centuries of hard labour with experiments and mathematics. This interpretation of our comparison would, however, be a complete misunderstanding. There is an enormous difference between modern science and Greek philosophy, and that is just the empiricist attitude of modern science. … This possibility of checking the correctness of a statement experimentally with very high precision and in any number of details gives an enormous weight to the statement that could not be attached to the statements of early Greek philosophy. All the same, some statements of ancient philosophy are rather near to those of modern science.
It is important to stress, as Bohr particularly did, that science is a social human activity crucially dependent on communication among individuals. Each scientific theory is properly viewed as a human creation. Here is Yakov Zeldovich’s expression of this aspect: ‘Fundamental science is … needed, among other things, because it satisfies man’s spiritual requirements. Scientific endeavour is a remarkable manifestation of human intellect. It perfects human intelligence and ennobles the soul.’
We have seen how difficult it is to give precise definitions of what is physically real; any statement reflects the state of knowledge at the time it is made, and may have to be revised later. From a philosophical stance, the importance of mathematics in physical science, and the changing ways in which it is used, are noteworthy. In the discussions about quantum mechanics we see the extreme care required in the use of language (not to mean, of course, that we can be careless in other realms!).
Again, from a philosophical standpoint, we see that pure empiricism and a purely deductive approach are both limited in scope. We need to combine caution, flexibility, and rigour—all at the same time. Nature is inexhaustible, and only experience hand in hand with reason can guide us to dependable knowledge. These seem to be the characteristics of a philosophy useful for the physical sciences.
Bibliography
1. Benjamin Farrington, Greek Science (Nottingham: Spokesman, 1980).
2. Erwin Schrödinger, Nature and the Greeks (Cambridge, 1996).
3. Hans Reichenbach, The Rise of Scientific Philosophy (Berkeley: University of California, 1959).
4. Werner Heisenberg, Physics and Philosophy (New York: Harper and Row, 1962).
5. Konrad Lorenz, The Natural Science of the Human Species (Cambridge: MIT, 1997).
6. Max Delbrück, Mind from Matter? An Essay on Evolutionary Epistemology (Palo Alto: Blackwell, 1986).
7. Jean-Pierre Changeux and Alain Connes, Conversations on Mind, Matter and Mathematics (Princeton University, 1995).
8. Eugene P Wigner, Symmetries and Reflections—Scientific Essays (Woodbridge: Ox Bow, 1979).
David Bohm and Renée Weber on Physics and Maths
Weber: The modern physicist is more like the materialist.
Bohm: Basically, except for this tremendous emphasis on mathematics, which is like saying that God is a mathematician. If you emphasize mathematics as much as scientists now do, without any physical picture of matter, you are tacitly saying that the essence of the world is something abstract and almost spiritual, if you really think about it.
Weber: Mathematics is pure thought.
Bohm: That’s right. You won’t find it anywhere in matter.
Weber: You are saying that even today’s physicists who might be least inclined towards anything spiritual are practically forced to assume that it is beyond the material.
Bohm: Tacitly, anyway. Physicists may not accept this, but they are attributing qualities to matter that are beyond those usually considered to be material. They are more like spiritual qualities in so far as we say there is this mathematical order which prevails, which has no picture in material terms that we can correlate with it.
Weber: Is it an aesthetic principle or something deeper still that makes them hold out for one rather than for three or four ultimate laws? Is it a spiritual drive, without their realizing it?
Bohm: It probably is a universal human drive, the same one which drives people to mysticism or to religion or art. …
Weber: Feynman said that those who don’t understand mathematics don’t realize the beauty in the universe. Beauty keeps coming up, together with order and simplicity and other Pythagorean and Platonic categories.
Bohm: Order and simplicity and unity, and something behind all that which we can’t describe.
—Dialogues with Scientists and Sages: The Search for Unity |
d9a1b612dee03f68 | Numerical Analysis
Authors and titles for recent submissions, skipping first 80
Wed, 24 Feb 2021 (continued, showing last 4 of 15 entries)
[53] arXiv:2102.11830 (cross-list from stat.ML) [pdf, other]
Title: Solving high-dimensional parabolic PDEs using the tensor train format
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Numerical Analysis (math.NA); Probability (math.PR)
[54] arXiv:2102.11644 (cross-list from math.DS) [pdf, other]
Title: Higher order phase averaging for highly oscillatory systems
[55] arXiv:2102.11636 (cross-list from physics.flu-dyn) [pdf, other]
Title: Using a deep neural network to predict the motion of under-resolved triangular rigid bodies in an incompressible flow
[56] arXiv:2102.11379 (cross-list from math.OC) [pdf, other]
Title: Actor-Critic Method for High Dimensional Static Hamilton-Jacobi-Bellman Partial Differential Equations based on Neural Networks
Comments: 23 pages, 4 figures
Tue, 23 Feb 2021
[57] arXiv:2102.11167 [pdf, other]
Title: On Theoretical and Numerical Aspect of Fractional Differential Equations with Purely Integral Conditions
Comments: 24 pages
Subjects: Numerical Analysis (math.NA); Analysis of PDEs (math.AP)
[58] arXiv:2102.11125 [pdf, ps, other]
Title: Convergence error estimates at low regularity for time discretizations of KdV
Subjects: Numerical Analysis (math.NA)
[59] arXiv:2102.10988 [pdf, ps, other]
Title: Energy stable arbitrary order ETD-MS method for gradient flows with Lipschitz nonlinearity
Subjects: Numerical Analysis (math.NA)
[60] arXiv:2102.10941 [pdf, other]
Title: Singular Euler-Maclaurin expansion on multidimensional lattices
Subjects: Numerical Analysis (math.NA)
[61] arXiv:2102.10907 [pdf, other]
Title: Multisymplectic variational integrators for barotropic and incompressible fluid models with constraints
Subjects: Numerical Analysis (math.NA); Fluid Dynamics (physics.flu-dyn)
[62] arXiv:2102.10887 [pdf, ps, other]
Title: Kernel quadrature by applying a point-wise gradient descent method to discrete energies
Comments: 21 pages, 12 figures
Subjects: Numerical Analysis (math.NA)
[63] arXiv:2102.10677 [pdf, other]
Title: Improve Unscented Kalman Inversion With Low-Rank Approximation and Reduced-Order Model
Comments: 27 pages, 9 figures
[64] arXiv:2102.10547 [pdf, ps, other]
Title: A new efficient operator splitting method for stochastic Maxwell equations
Subjects: Numerical Analysis (math.NA)
[65] arXiv:2102.10408 [pdf, other]
Title: Numerical analysis of a topology optimization problem for Stokes flow
[66] arXiv:2102.10393 [pdf, ps, other]
Title: On the tensor nuclear norm and the total variation regularization for image and video completion
Subjects: Numerical Analysis (math.NA)
[67] arXiv:2102.10351 [pdf, other]
Title: Nonlinear dimension reduction for surrogate modeling using gradient information
Subjects: Numerical Analysis (math.NA)
[68] arXiv:2102.10327 [pdf, ps, other]
Title: Graph Laplacian for image deblurring
Subjects: Numerical Analysis (math.NA)
[69] arXiv:2102.10297 [pdf, ps, other]
Title: A novel spectral method for the semi-classical Schrödinger equation based on the Gaussian wave-packet transform
Subjects: Numerical Analysis (math.NA)
[70] arXiv:2102.10203 [pdf, other]
Title: Multirate Linearly-Implicit GARK Schemes
Subjects: Numerical Analysis (math.NA)
[71] arXiv:2102.10199 [pdf, other]
Title: Information-Theoretic Bounds for Integral Estimation
Subjects: Numerical Analysis (math.NA); Machine Learning (cs.LG)
[72] arXiv:2102.10159 [pdf, other]
Title: A convergent finite difference method for computing minimal Lagrangian graphs
Comments: 25 pages, 8 figures, submitted to "Communications on Pure and Applied Analysis"
[73] arXiv:2102.10939 (cross-list from cs.DS) [pdf, ps, other]
Title: A High-dimensional Sparse Fourier Transform in the Continuous Setting
Authors: Liang Chen
Subjects: Data Structures and Algorithms (cs.DS); Information Theory (cs.IT); Numerical Analysis (math.NA)
[74] arXiv:2102.10621 (cross-list from math.AP) [pdf, ps, other]
Title: Convergence rate of DeepONets for learning operators arising from advection-diffusion equations
Subjects: Analysis of PDEs (math.AP); Numerical Analysis (math.NA)
[75] arXiv:2102.10576 (cross-list from physics.flu-dyn) [pdf, other]
Title: Bifurcation analysis of two-dimensional Rayleigh-Bénard convection using deflation
Comments: 17 pages, 12 figures
Subjects: Fluid Dynamics (physics.flu-dyn); Numerical Analysis (math.NA); Computational Physics (physics.comp-ph)
[76] arXiv:2102.10426 (cross-list from eess.SP) [pdf, other]
Title: Dynamic Selective Positioning for High-Precision Accuracy in 5G NR V2X Networks
Comments: In Proceedings of IEEE 93rd Vehicular Technology Conference (VTC-Spring), Helsinki, Finland, Apr. 2021
Subjects: Signal Processing (eess.SP); Numerical Analysis (math.NA)
[77] arXiv:2102.10374 (cross-list from nlin.SI) [pdf, ps, other]
Title: Reflectionless propagation of Manakov solitons on a line: A model based on the concept of transparent boundary conditions
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Numerical Analysis (math.NA)
|
c1247e23d940177c | Evolutionary branching
Evolutionary branching via replicator-mutator equations
Matthieu Alfaro and Mario Veruete IMAG, Université de Montpellier, CC051, 34095, Montpellier, France matthieu.alfaro@umontpellier.fr, mario.veruete@umontpellier.fr
We consider a class of non-local reaction-diffusion problems, referred to as replicator-mutator equations in evolutionary genetics. For a confining fitness function, we prove well-posedness and write the solution explicitly, via some underlying Schrödinger spectral elements (for which we provide new and non-standard estimates). As a consequence, the long time behaviour is determined by the principal eigenfunction or ground state. Based on this, we discuss (rigorously and via numerical explorations) the conditions on the fitness function and the mutation rate for evolutionary branching to occur.
Key words and phrases:
Evolutionary genetics, dynamics of adaptation, branching phenomena, long time behaviour, Schrödinger eigenelements
2010 Mathematics Subject Classification:
92B05, 92D15, 35K15, 45K05
1. Introduction
In this paper we first study the existence, uniqueness and long time behaviour of solutions $u = u(t,x)$, $t \geq 0$, $x \in \mathbb{R}$, to the integro-differential Cauchy problem
$$\partial_t u(t,x) = \sigma\,\partial_{xx} u(t,x) + u(t,x)\left(W(x) - \int_{\mathbb{R}} W(y)\,u(t,y)\,dy\right), \qquad u(0,x) = u_0(x), \tag{1}$$
which serves as a model for the dynamics of adaptation, and where $W$ is a confining fitness function and $\sigma > 0$ a mutation rate (see below for details). Next, we inquire into the possibility, depending on the function $W$ and the parameter $\sigma$, for a solution to split from a uni-modal to a multi-modal shape, thus reproducing evolutionary branching.
The above equation is referred to as a replicator-mutator model. This type of model has found applications in different fields such as economics and biology [25], [4]. In the field of evolutionary genetics, a free spatial version of equation (1) was introduced by Tsimring, Levine and Kessler in [40], where they propose a mean-field theory for the evolution of RNA virus populations. Without mutations, and under the constraint of constant mass $\int_{\mathbb{R}} u(t,x)\,dx = 1$, the dynamics is given by
$$\partial_t u(t,x) = u(t,x)\left(W(x) - \overline{W}(t)\right), \tag{2}$$
with the linear fitness $W(x) = x$ in [40]. In this context, $u(t,x)$ represents the density of a population (at time $t$ and per unit of phenotypic trait) on a one-dimensional phenotypic trait space. The function $W$ represents the fitness of the phenotype and models the individual reproductive success; thus the non-local term
$$\overline{W}(t) := \int_{\mathbb{R}} W(y)\,u(t,y)\,dy$$
stands for the mean fitness at time $t$.
As a first step to take into account evolutionary phenomena, mutations are modelled by the local diffusion operator $\sigma\,\partial_{xx}$, where $\sigma > 0$ is the mutation rate, so that equation (2) is turned into (1). We refer to the recent paper [41] for a rigorous derivation of the replicator-mutator problem (1) from individual based models.
Equation (1) is supplemented with a non-negative and bounded initial data $u_0$ such that $\int_{\mathbb{R}} u_0 = 1$, so that, formally, $\int_{\mathbb{R}} u(t,x)\,dx = 1$ for later times. Indeed, integrating (1) formally over $\mathbb{R}$, the total mass
$$m(t) := \int_{\mathbb{R}} u(t,x)\,dx$$
solves the initial value problem
$$m'(t) = \overline{W}(t)\,\big(1 - m(t)\big), \qquad m(0) = 1.$$
Hence, by Gronwall's lemma, $m(t) = 1$ as long as $\overline{W}(t)$ is meaningful.
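For the record, here is the one-line computation behind the mass identity, written for the reconstructed form of (1):
$$m'(t) = \sigma\!\int_{\mathbb{R}}\partial_{xx}u\,dx + \int_{\mathbb{R}} W u\,dx - \overline{W}(t)\int_{\mathbb{R}} u\,dx = 0 + \overline{W}(t) - \overline{W}(t)\,m(t) = \overline{W}(t)\big(1 - m(t)\big),$$
where the diffusion term vanishes because $\partial_x u$ (formally) decays at infinity, and $\int_{\mathbb{R}} W u = \overline{W}$ by definition of the mean fitness.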
The case of a linear fitness function, $W(x) = x$, was the first introduced in [40], but little was known concerning the existence and behaviour of solutions. Let us here mention the main result of Biktashev [5]: for compactly supported initial data, solutions converge, as $t$ goes to infinity, to a Gaussian profile, where the convergence is understood in terms of the moments of $u$. In a recent paper [2], Alfaro and Carles proved that, thanks to a tricky change of unknown based on the Avron-Herbst formula (coming from quantum mechanics), equation (1) can be reduced to the heat equation. This enables one to compute the solution explicitly and to describe contrasting behaviours depending on the tails of the initial datum: either the solution is global and tends, as $t$ tends to infinity, to a Gaussian profile which is centred around an accelerating position (acceleration) and is flattening (extinction in infinite horizon), or the solution becomes extinct in finite time (or even immediately), thus contradicting the conservation of mass formally observed above.

For quadratic fitness functions, it turns out that the equation can again be reduced to the heat equation [3], up to an additional use of the generalized lens transform of the Schrödinger equation. In the case $W(x) = x^2$, for any initial data, there is extinction at a finite time, which admits an explicit upper bound. Roughly speaking, both the right and left tails quickly enlarge, so that, in order to conserve the mass, the central part decreases quickly. Then the non-local mean fitness term becomes infinite very quickly and equation (1) becomes meaningless (extinction). On the other hand, when $W(x) = -x^2$, for any initial data, the solution is global and tends, as $t$ tends to infinity, to a universal stationary Gaussian profile.

The aforementioned cases $W(x) = x$ and $W(x) = x^2$ share the property of being unbounded from above, meaning that some phenotypes are infinitely well-adapted. This unlimited growth of $W$ in (1) yields rich mathematical behaviours (acceleration, extinction) but is not admissible for biological applications. To deal with this problem, for the linear fitness case, some works consider a "cut-off version" of (1) at large $x$ [40], [35], [37], or provide a proper stochastic treatment of the large phenotypic trait region [34].

On the other hand, a fitness function tending to $-\infty$ at infinity is referred to as a confining fitness function, and typically prevents extinction phenomena. However, this alone does not suffice to take into account more realistic cases in which the fitness function is defined by a linear combination of two components (e.g. birth and death rates), each maximized by a different optimal value of the underlying trait.
Our main goal is thus to provide a rigorous treatment of the Cauchy problem (1) when the fitness function is confining. For a relatively large class of such fitness functions, we prove well-posedness, and show that the solution of (1) converges to the principal eigenfunction (or ground state) of the underlying Schrödinger operator, divided by its mass. This requires rather non-standard estimates on the eigenelements. Also, from a modelling perspective, this enables us to reproduce evolutionary branching, consisting of the spontaneous splitting from a uni-modal to a multi-modal distribution of the trait; a numerical sketch is given below.
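As an illustration, here is a minimal finite-difference sketch of the reconstructed equation (1). The explicit Euler scheme, the double-peaked fitness W(x) = -(x^2 - 1)^2 and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

# Explicit finite-difference sketch of the reconstructed equation (1):
#   u_t = sigma * u_xx + u * (W(x) - Wbar(t)),   Wbar(t) = integral of W*u.
L, N = 3.0, 400
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
sigma = 0.01                        # mutation rate (assumed small)
W = -(x**2 - 1.0)**2                # confining fitness, two global maxima at x = +-1
u = np.exp(-x**2 / 0.1)             # uni-modal initial density at the fitness saddle
u /= np.trapz(u, x)                 # normalize the initial mass to 1

dt = 0.2 * dx**2 / sigma            # respects the explicit diffusion stability limit
for _ in range(20000):
    Wbar = np.trapz(W * u, x)       # non-local mean fitness term
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u += dt * (sigma * lap + u * (W - Wbar))
    u = np.maximum(u, 0.0)          # guard against round-off negativity

# A bimodal final profile (bumps near x = +-1) signals evolutionary branching.
print("mass ~ %.3f; peaks near x = %.2f and x = %.2f"
      % (np.trapz(u, x), x[:N//2][u[:N//2].argmax()], x[N//2:][u[N//2:].argmax()]))

Decreasing sigma should sharpen the two peaks, while a fitness with a single maximum keeps the profile uni-modal, matching the dichotomy studied in Section 4.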
Such splitting phenomena have long been discussed and analysed in different frameworks, see e.g. [29] via Hamilton-Jacobi techniques, [42] within finite populations, or [27] for a Lotka-Volterra system in a bounded domain. In a replicator-mutator context, let us note that, while branching in (1) is mainly induced by the fitness function, it was recently obtained in [18] through different means. Precisely, the authors study the case of a linear fitness but non-local diffusion (mutation kernel). Their approach [30], [18] is based on Cumulant Generating Functions (CGF): it turns out that the CGF satisfies a first order non-local partial differential equation that can be solved explicitly, thus giving access to much information such as the mean trait, the variance, and the position of the leading edge. When a purely deleterious mutation kernel balances the infinite growth rate of the fitness, they reveal some branching scenarios.
The paper is organized as follows. In Section 2 we present the underlying linear material. In Section 3 we prove the well-posedness of the Cauchy problem associated with (1). We also provide an explicit expression of the solution and study its long time behaviour. In Section 4 we discuss, through rigorous arguments or numerical explorations, the conditions on the shape of the fitness function and on the mutation parameter for the branching phenomenon to occur. Finally, we briefly conclude in Section 5.
2. Some spectral properties
In this section, we present some linear material. We first quote some very classical results [39], [33], [1], [21], [22], [20], [15] for Schrödinger operators, and then prove less standard estimates on the eigenfunctions, which are crucial for later analysis.
2.1. Confining fitness functions and eigenvalues properties
Confining fitness functions tend to $-\infty$ at infinity. In quantum mechanics, this corresponds to potentials describing the evolution of quantum particles subject to an external force field that prevents them from escaping to infinity; that is, particles have a high probability of presence in a bounded spatial region.

Assumption 1 (Confining fitness function).
The fitness function $W : \mathbb{R} \to \mathbb{R}$ is continuous and confining, that is
$$W(x) \to -\infty \quad \text{as } |x| \to \infty.$$

Proposition 2.1 (Spectral basis).
Let $W$ satisfy Assumption 1. Then the operator
$$\mathcal{H} := -\sigma\,\partial_{xx} - W(x) \tag{3}$$
is essentially self-adjoint on $C_c^\infty(\mathbb{R}) \subset L^2(\mathbb{R})$, and has discrete spectrum: there exists an orthonormal basis $(e_k)_{k \geq 1}$ of $L^2(\mathbb{R})$ consisting of eigenfunctions of $\mathcal{H}$,
$$\mathcal{H} e_k = \lambda_k e_k,$$
with corresponding eigenvalues
$$\lambda_1 < \lambda_2 \leq \lambda_3 \leq \cdots \to +\infty$$
of finite multiplicity.
Remark 1.
Proposition 2.1 is a classical result [33], [38, Chapter 3, Theorems 1.4 and 1.6] for Schrödinger operators with a confining potential. In this paper, the minus sign in $\mathcal{H}$ is due to the biological interpretation of the fitness function $W$.
In the quantum mechanics terminology, $e_1$ is known as the ground state, corresponding to the bound state of minimal energy $\lambda_1$. In this paper we refer to the couple $(\lambda_1, e_1)$ indistinctly as ground state/ground state energy or as principal eigenfunction/principal eigenvalue.
The principal eigenvalue can be characterised by the variational formulation
$$\lambda_1 = \inf\big\{\,\mathcal{E}(\varphi) : \varphi \in H^1(\mathbb{R}),\ \|\varphi\|_{L^2(\mathbb{R})} = 1\,\big\}, \tag{4}$$
where $\mathcal{E}$ is the energy functional given by
$$\mathcal{E}(\varphi) := \int_{\mathbb{R}} \left(\sigma\,\varphi'(x)^2 - W(x)\,\varphi(x)^2\right) dx.$$
Using concentrated test functions, the above formula enables one to understand the behaviour of the principal eigenvalue as the mutation rate $\sigma$ tends to 0. The following will be used in Section 4 to prove some branching phenomena.

Proposition 2.2 (Asymptotics for $\lambda_1$ as $\sigma \to 0$).
Let $W$ satisfy Assumption 1. Assume that $W$ reaches a global maximum at $x_0$. Then $\lambda_1 \to -W(x_0)$ as $\sigma \to 0$.
For the convenience of the reader, we give the proof of this standard fact. Let $\varphi$ be a smooth, non-negative function, compactly supported, with $\|\varphi\|_{L^2} = 1$. We define the test function
$$\varphi_\sigma(x) := \sigma^{-1/8}\,\varphi\!\left(\frac{x - x_0}{\sigma^{1/4}}\right).$$
From the variational formula (4), we have
$$\lambda_1 \leq \sigma \int_{\mathbb{R}} \varphi_\sigma'(x)^2\,dx - \int_{\mathbb{R}} W(x)\,\varphi_\sigma(x)^2\,dx.$$
The first integral in the right hand side is given by
$$\sigma \int_{\mathbb{R}} \varphi_\sigma'(x)^2\,dx = \sigma^{1/2} \int_{\mathbb{R}} \varphi'(y)^2\,dy \to 0$$
as $\sigma \to 0$. The second integral gives
$$-\int_{\mathbb{R}} W(x_0 + \sigma^{1/4} y)\,\varphi(y)^2\,dy,$$
which, by the dominated convergence theorem, tends to $-W(x_0)$ as $\sigma \to 0$. Since also $\lambda_1 \geq -\sup W = -W(x_0)$ by (4), the claim follows. ∎
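To make Proposition 2.2 concrete, consider the exactly solvable quadratic case (a standard computation, not taken from the paper): for $W(x) = -x^2$, the operator $\mathcal{H} = -\sigma\,\partial_{xx} + x^2$ is the harmonic oscillator with semiclassical parameter $\sigma$, and the rescaling $x \mapsto \sigma^{1/4}x$ gives
$$\lambda_k = (2k - 1)\sqrt{\sigma}, \qquad e_1(x) \propto e^{-x^2/(2\sqrt{\sigma})}, \qquad k = 1, 2, \dots$$
In particular $\lambda_1 = \sqrt{\sigma} \to 0 = -W(0)$ as $\sigma \to 0$, in agreement with the proposition.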
In the subsequent sections, we will quote results on the spectral properties of Schrödinger operators, in particular an asymptotics for the eigenvalues $\lambda_k$ as $k \to \infty$. As far as we know, the available results require the fitness to be polynomial.

Assumption 2 (Polynomial confining fitness function).
The fitness function is a real polynomial of even degree $2p$:
$$W(x) = \sum_{j=0}^{2p} a_j x^j,$$
for some integer $p \geq 1$ and some real numbers $a_j$, with $a_{2p} < 0$.

Under Assumption 2, elliptic regularity theory ensures that the eigenfunctions are infinitely differentiable. Furthermore, all the derivatives of each eigenfunction are square-integrable [17]. Notice that it is also known that all eigenfunctions actually belong to the Schwartz space $\mathcal{S}(\mathbb{R})$.
Proposition 2.3 (Asymptotics for eigenvalues).
Let $W$ satisfy Assumption 2. Then all eigenvalues of $\mathcal{H}$ are simple and
$$\lambda_k \sim C\,k^{\frac{2p}{p+1}} \quad \text{as } k \to \infty, \tag{5}$$
where the constant $C > 0$ depends on $\sigma$, $p$ and $a_{2p}$, and is expressed through the gamma function $\Gamma$.
We refer to [39], [15] and the references therein for more details on the above asymptotic formula. Furthermore, in the case of a symmetric fitness $W$, the simplicity of the eigenvalues forces all eigenfunctions to be either even or odd. In particular the principal eigenfunction (ground state) is even, since it is known to have constant sign.
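The exponent $2p/(p+1)$ in (5) can be anticipated by a heuristic Bohr-Sommerfeld (phase-space) count for the model operator $-\sigma\,\partial_{xx} + |x|^{2p}$ (a sketch, not the proof cited above): the number of eigenvalues below $\lambda$ is approximately
$$N(\lambda) \approx \frac{1}{\pi\sqrt{\sigma}} \int_{\{|x|^{2p} \le \lambda\}} \sqrt{\lambda - |x|^{2p}}\;dx = \frac{\lambda^{\frac{p+1}{2p}}}{\pi\sqrt{\sigma}}\cdot 2\!\int_0^1 \sqrt{1 - t^{2p}}\,dt,$$
and solving $N(\lambda_k) = k$ gives $\lambda_k \sim C\,k^{2p/(p+1)}$, with the beta integral $\int_0^1\sqrt{1-t^{2p}}\,dt$ accounting for the gamma functions in the constant.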
Remark 2.
Assume that the fitness $\widetilde{W}$ satisfies $|\widetilde{W} - W| \leq C$ for some polynomial $W$ as in Assumption 2 and some constant $C > 0$. From the Courant-Fischer theorem, that is the variational characterization of the eigenvalues, we deduce that $\lambda_k - C \leq \widetilde{\lambda}_k \leq \lambda_k + C$, where $\widetilde{\lambda}_k$ are the eigenvalues of the Hamiltonian with potential $-\widetilde{W}$. Hence the $\widetilde{\lambda}_k$ share with the $\lambda_k$ the asymptotics (5), which is the keystone for deriving the estimates on eigenfunctions in subsection 2.2, and thereafter our main results in Section 3. Hence, our results apply to such fitness functions, covering in particular the case of so-called pseudo-polynomials (i.e. smooth functions which coincide, outside a compact region, with a polynomial as in Assumption 2), which are relevant for numerical computations.
2.2. $L^1$, $L^\infty$ and weighted $L^1$ norms of the eigenfunctions
In the study of spectral properties of Schrödinger operators, efforts tend to concentrate on asymptotic estimates of eigenvalues or on the regularity and decay of eigenfunctions [39], [1], [10], [16]. Much less attention has been given to estimating the $L^1$ and $L^\infty$ norms of eigenfunctions. One reason is that the natural framework for eigenfunctions of the Hamiltonian $\mathcal{H}$, defined in (3), is $L^2(\mathbb{R})$. On the other hand, the biological nature of problem (1) suggests $L^1(\mathbb{R})$ and $L^\infty(\mathbb{R})$ as natural spaces for the solution $u$. We therefore provide in this subsection rather non-standard estimates on the eigenfunctions.
We define
$$m_k := \|e_k\|_{L^1(\mathbb{R})},$$
the mass of the $k$-th eigenfunction of the Hamiltonian $\mathcal{H}$. In the sequel, by
$$a_k \lesssim b_k$$
we mean that there is $C > 0$ such that, for all $k$, $a_k \leq C\,b_k$.
Proposition 2.4 ($L^1$ norm of eigenfunctions).
Let $W$ satisfy Assumption 2. Then we have
$$m_k \lesssim \lambda_k^{\frac{1}{4p}}. \tag{6}$$
Before proving the above proposition, we need the following lemma which is of independent interest.
Lemma 2.5.
Let $q > 1/2$ be given. Then there is a constant $C = C(q) > 0$ such that, for all $v \in L^2(\mathbb{R})$ with $|x|^q v \in L^2(\mathbb{R})$,
$$\|v\|_{L^1(\mathbb{R})} \leq C\,\|v\|_{L^2(\mathbb{R})}^{1-\frac{1}{2q}}\;\big\|\,|x|^q v\,\big\|_{L^2(\mathbb{R})}^{\frac{1}{2q}}. \tag{7}$$
Let $B_R$ denote the open ball of radius $R$ centred at the origin. We write
$$\|v\|_{L^1} = \int_{B_R} |v| + \int_{B_R^c} |v|.$$
By the Cauchy-Schwarz inequality we have
$$\int_{B_R} |v| \leq \sqrt{2R}\;\|v\|_{L^2}, \qquad \int_{B_R^c} |v| \leq \Big(\int_{B_R^c} |x|^{-2q}\,dx\Big)^{1/2} \big\|\,|x|^q v\,\big\|_{L^2}.$$
Summarizing,
$$\|v\|_{L^1} \leq \sqrt{2R}\;\|v\|_{L^2} + \sqrt{\tfrac{2}{2q-1}}\;R^{\frac{1}{2}-q}\,\big\|\,|x|^q v\,\big\|_{L^2}. \tag{8}$$
Now, we select
$$R = \left(\frac{\|\,|x|^q v\,\|_{L^2}}{\|v\|_{L^2}}\right)^{1/q},$$
which, up to a constant, minimizes the right hand side of (8) and yields (7). ∎
Remark 3.
The power in (7) can be retrieved by a standard homogeneity argument. Indeed, defining $v_\lambda(x) := v(\lambda x)$ for $\lambda > 0$, we get
$$\|v_\lambda\|_{L^1} = \lambda^{-1}\|v\|_{L^1}, \qquad \|v_\lambda\|_{L^2} = \lambda^{-1/2}\|v\|_{L^2}, \qquad \big\|\,|x|^q v_\lambda\,\big\|_{L^2} = \lambda^{-q-\frac{1}{2}}\,\big\|\,|x|^q v\,\big\|_{L^2},$$
so that both sides of (7) are homogeneous in $\lambda$. The powers of $\lambda$ on both sides must coincide, which enforces the exponent $\frac{1}{2q}$.
We can now estimate the mass of the eigenfunctions.

Proof of Proposition 2.4.
Up to subtracting a constant from $W$, we can assume without loss of generality that $W \leq 0$. Multiplying the eigenvalue equation
$$-\sigma\,e_k'' - W e_k = \lambda_k e_k \tag{9}$$
by $e_k$ and integrating over $\mathbb{R}$, we get
$$-\sigma \int_{\mathbb{R}} e_k''\,e_k - \int_{\mathbb{R}} W e_k^2 = \lambda_k \int_{\mathbb{R}} e_k^2.$$
Integrating by parts and recalling that eigenfunctions are normalized in $L^2$, we obtain
$$\sigma \int_{\mathbb{R}} (e_k')^2 + \int_{\mathbb{R}} (-W)\,e_k^2 = \lambda_k,$$
so that
$$\int_{\mathbb{R}} (-W)\,e_k^2 \leq \lambda_k.$$
Next, it follows from Assumption 2 (and $W \leq 0$) that there is $c > 0$ such that $-W(x) \geq c\,|x|^{2p}$ for all $|x|$ large, and thus
$$\big\|\,|x|^p e_k\,\big\|_{L^2}^2 \lesssim \lambda_k.$$
Now, by Lemma 2.5 (with $q = p$), we have
$$m_k \lesssim \big\|\,|x|^p e_k\,\big\|_{L^2}^{\frac{1}{2p}} \lesssim \lambda_k^{\frac{1}{4p}},$$
which, combined with (5), implies (6). The proposition is proved. ∎
Proposition 2.6 ($L^\infty$ norm of eigenfunctions).
Let $W$ satisfy Assumption 2. Then we have
$$\|e_k\|_{L^\infty} \lesssim \lambda_k^{1/4}.$$
Since $\|e_k\|_{L^\infty}^2 \leq 2\,\|e_k\|_{L^2}\,\|e_k'\|_{L^2}$ and, as in the proof of Proposition 2.4, $\sigma\,\|e_k'\|_{L^2}^2 \leq \lambda_k$, we have
$$\|e_k\|_{L^\infty}^2 \lesssim \lambda_k^{1/2},$$
and the conclusion follows from (5). ∎
Proposition 2.7 (Weighted $L^1$ norm of eigenfunctions).
Let $W$ satisfy Assumption 2. Then the weighted masses $\int_{\mathbb{R}} |W(x)|\,|e_k(x)|\,dx$ obey an estimate analogous to (6).
From Assumption 2 and $W \leq 0$, we can find $R$ large enough that the following facts hold for all $|x| \geq R$: $-W$ is bounded above and below by constant multiples of $|x|^{2p}$, and $W$ is decreasing on $[R, \infty)$. Assumption 2 implies that $-W(x) \to +\infty$, so that, in view of Proposition 2.3, a comparison with $\lambda_k$ is available. Next, up to enlarging $R$ if necessary, it follows from Assumption 2 that the required sign condition holds for all $|x| \geq R$. As a result, suitable exponential barrier functions
are respectively super- and sub-solutions of the eigenvalue equation (9),
so that $e_k$ is controlled pointwise on $[R, \infty)$ by a decaying exponential, call this bound (10),
by the comparison principle. An analogous estimate holds on $(-\infty, -R]$.
In order to estimate the weighted norm, we split the domain of integration into three parts: $[-R, R]$, $(R, \infty)$ and $(-\infty, -R)$, and we decompose the integral accordingly. Notice that
$$|W(x)| \leq P(x) \quad \text{on } [-R, R],$$
where $P$ is a polynomial of the same degree as $W$. Hence, from (10) and Proposition 2.6, we control the contribution of the bounded region. By the monotonicity of $W$ on $(R, \infty)$, and remembering that the exponential bound (10) holds there, the outer contributions are controlled as well. Finally, gathering the three estimates completes the proof. ∎
3. Well-posedness and long time behaviour
In this section we show that the Cauchy problem (1) has a unique smooth solution, which is global in time. The keystones are the change of variable (13), which links the non-local equation (1) to a linear parabolic problem, and our previous estimates on the underlying eigenelements. Equipped with the representation (12) of the solution, we then prove convergence in any $L^q$, $1 \leq q \leq \infty$, to the principal eigenfunction normalized by its mass.
Up to subtracting a constant from the confining fitness function $W$, we can assume without loss of generality (recall the mass conservation property) that $W < 0$.
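Schematically, with the notation reconstructed above, the strategy of this section runs as follows (a summary sketch; the precise statements are Theorem 3.4 and its proof):
$$v(t,x) := u(t,x)\exp\!\Big(\int_0^t \overline{W}(s)\,ds\Big) \;\Longrightarrow\; \partial_t v = \sigma\,\partial_{xx} v + W v = -\mathcal{H} v,$$
$$v(t,\cdot) = \sum_{k\ge1} e^{-\lambda_k t}\,\langle u_0, e_k\rangle\,e_k, \qquad u(t,\cdot) = \frac{v(t,\cdot)}{\int_{\mathbb{R}} v(t,y)\,dy} \;\longrightarrow\; \frac{e_1}{\int_{\mathbb{R}} e_1} \quad \text{as } t \to \infty,$$
the convergence rate being governed by the spectral gap $\lambda_2 - \lambda_1 > 0$ (positive since $\lambda_1$ is simple).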
3.1. Functional framework
For a negative confining fitness function $W$ (see Assumption 1), we set
$$X := \Big\{ v \in H^1(\mathbb{R}) : \int_{\mathbb{R}} (-W)\,v^2 < \infty \Big\}.$$
Recall that the Sobolev space $H^1(\mathbb{R})$ is defined as
$$H^1(\mathbb{R}) := \{ v \in L^2(\mathbb{R}) : v' \in L^2(\mathbb{R}) \},$$
where the derivative is understood in the distributional sense. We denote by $X$ the Hilbert space equipped with the inner product
$$\langle u, v\rangle_X := \int_{\mathbb{R}} \big( u'v' + (1 - W)\,uv \big),$$
and consider $L^2(\mathbb{R})$ with its usual inner product
$$\langle u, v\rangle := \int_{\mathbb{R}} uv.$$
By Assumption 1, it is straightforward that $X \subset H^1(\mathbb{R}) \subset L^2(\mathbb{R})$. Moreover, the following holds.
Lemma 3.1.
The embedding $X \hookrightarrow L^2(\mathbb{R})$ is dense, continuous and compact.

This is very classical but, for the convenience of the reader, we present the details. Since $C_c^\infty(\mathbb{R}) \subset X$ and $C_c^\infty(\mathbb{R})$ is dense in $L^2(\mathbb{R})$, it follows that $X$ is dense in $L^2(\mathbb{R})$. Next, since for $v \in X$,
$$\|v\|_{L^2} \leq \|v\|_X,$$
the embedding is continuous.
The proof of compactness follows from the Riesz-Fréchet-Kolmogorov theorem, see e.g. [8, Theorem 4.26]. Let $(v_n)$ be a bounded sequence of functions of $X$: there is $M > 0$ such that, for all $n$,
$$\|v_n\|_X \leq M.$$
We first need to show the uniform smallness of the tails of $v_n$. Let $\varepsilon > 0$. Select $R$ large enough so that $-W(x) \geq 1/\varepsilon$ for all $|x| \geq R$. Then
$$\int_{|x| \geq R} v_n^2 \leq \varepsilon \int_{|x| \geq R} (-W)\,v_n^2 \leq \varepsilon\,M^2.$$
Next, for a compact set $K$, we need to show the uniform smallness of the $L^2(K)$ norm of small translates of $v_n$. Let $\varepsilon > 0$. By Morrey's theorem, there is $C > 0$ such that, for all $n$ and all $x, y$,
$$|v_n(x) - v_n(y)| \leq C\,\|v_n\|_{H^1}\,|x - y|^{1/2},$$
so that
$$\|v_n(\cdot + h) - v_n\|_{L^2(K)} \leq C\,M\,|h|^{1/2}\,|K|^{1/2} \leq \varepsilon$$
for all $n$, if $|h|$ is sufficiently small. The lemma is proved. ∎
3.2. Main results
We first define the notion of solution to the Cauchy problem (1).

Definition 3.2 (Admissible initial data).
We say that a function $u_0$ is an admissible initial data if $u_0 \geq 0$, $u_0 \in L^\infty(\mathbb{R})$ and $\int_{\mathbb{R}} u_0 = 1$.

Definition 3.3 (Solution of the Cauchy problem (1)).
Let $u_0$ be an admissible initial data. We say that $u$ is a (global) solution of the Cauchy problem (1) if, for any $T > 0$, $u \geq 0$, $\overline{W} \in L^1(0,T)$, and
(i) for all $t \in (0,T)$ and all test functions, the weak formulation of (1) holds, where the time derivative is understood in the distributional sense; equivalently, the time-integrated weak formulation (11) holds for all test functions;
(ii) $t \mapsto u(t,\cdot)$ is continuous on $[0,T]$;
(iii) $u(0,\cdot) = u_0$.
Here is our main mathematical result.
Theorem 3.4 (Solving the replicator-mutator problem).
Let $W$ satisfy Assumption 2. For any admissible initial condition $u_0$, there is a unique solution $u$ to the Cauchy problem (1), in the sense of Definition 3.3. Moreover the solution is smooth on $(0,\infty)\times\mathbb{R}$ and is given by
$$u(t,x) = \frac{\sum_{k\ge1} e^{-\lambda_k t}\,\langle u_0, e_k\rangle\,e_k(x)}{\sum_{k\ge1} e^{-\lambda_k t}\,\langle u_0, e_k\rangle \int_{\mathbb{R}} e_k(y)\,dy}, \tag{12}$$
where $(\lambda_k, e_k)$ are the eigenelements defined in Proposition 2.1, and $\langle u_0, e_k\rangle := \int_{\mathbb{R}} u_0\,e_k$.
We proceed by necessary and sufficient condition. Let $u$ be a solution, in the sense of Definition 3.3. We define the function $v$ as
$$v(t,x) := u(t,x)\,\exp\!\Big(\int_0^t \overline{W}(s)\,ds\Big). \tag{13}$$
This function is well defined since, by Definition 3.3, $\overline{W} \in L^1(0,T)$, so the integral in the exponential is finite for all $t$. Since $u \geq 0$ and $u(0,\cdot) = u_0$, it is straightforward to see that, for all $t$, $v(t,\cdot) \geq 0$ and $v(0,\cdot) = u_0$. Additionally, $v$ inherits the regularity of $u$, and $v$ is continuous at $t = 0$ since $u$ is.
We now show that $v$ solves the linear Cauchy problem
$$\partial_t v = \sigma\,\partial_{xx} v + W v, \qquad v(0,\cdot) = u_0. \tag{14}$$
Indeed, formally for the moment,
$$\partial_t v = \big(\partial_t u + \overline{W}(t)\,u\big)\exp\!\Big(\int_0^t \overline{W}(s)\,ds\Big),$$
so that
$$\partial_t v = \sigma\,\partial_{xx} v + W v,$$
since $u$ solves (1). These computations can be made rigorous in the distributional sense. Indeed, for a test function $\varphi$, set
$$\psi(t,x) := \varphi(t,x)\,\exp\!\Big(\int_0^t \overline{W}(s)\,ds\Big),$$
and, by Definition 3.3, $\psi$ belongs to the admissible class of test functions. Writing (11) with $\psi$ as test function yields the weak formulation of (14) with $\varphi$ as test function, that is, the corresponding identity holds
for all test functions.
The well-posedness of the linear Cauchy problem (14) is postponed to the next subsection: from Proposition 3.6, we know that, for all $t > 0$,
$$v(t,\cdot) = \sum_{k \geq 1} e^{-\lambda_k t}\,\langle u_0, e_k\rangle\, e_k.$$
Now, the estimates on the eigenvalues and on the $L^\infty$ norm of the eigenfunctions, namely Proposition 2.3 and Proposition 2.6, allow one to conclude that the series converges uniformly for $t$ bounded away from zero. Also, we know from parabolic regularity theory and the comparison principle that $v$ is smooth and that $v(t,\cdot) \geq 0$ for all $t \geq 0$.
Now, we show that the change of variable (13) can be inverted. For $t \geq 0$, integrating (13) over $\mathbb{R}$ and using the mass conservation $\int_{\mathbb{R}} u = 1$, we get
$$\int_{\mathbb{R}} v(t,x)\,dx = \exp\!\Big(\int_0^t \overline{W}(s)\,ds\Big). \tag{15}$$
On the other hand, we claim that, for all $t > 0$,
$$\frac{d}{dt} \int_{\mathbb{R}} v(t,x)\,dx = \int_{\mathbb{R}} W(x)\,v(t,x)\,dx, \tag{16}$$
which follows formally by integrating (14) over $\mathbb{R}$. To prove (16) rigorously, notice first that by Propositions 2.3 and 2.4, the series
$$\sum_{k \geq 1} e^{-\lambda_k t}\,|\langle u_0, e_k\rangle|\, m_k$$
converges for all $t > 0$. Hence $M(t)$, the total mass of $v(t,\cdot)$, is given by
$$M(t) = \sum_{k \geq 1} e^{-\lambda_k t}\,\langle u_0, e_k\rangle \int_{\mathbb{R}} e_k.$$
Next, the series of derivatives converges locally uniformly for $t > 0$, thanks to Propositions 2.3 and 2.4, so that $M$ is differentiable on $(0,\infty)$ and
$$M'(t) = -\sum_{k \geq 1} \lambda_k\, e^{-\lambda_k t}\,\langle u_0, e_k\rangle \int_{\mathbb{R}} e_k = \int_{\mathbb{R}} W v,$$
the last equality following by similar arguments based on Proposition 2.7. Hence (16) is proved. From (15), (16) and $v(0,\cdot) = u_0$, we deduce that
$$\overline{W}(t) = \frac{M'(t)}{M(t)}$$
for all $t > 0$. As a conclusion, (13) is inverted into
$$u(t,x) = \frac{v(t,x)}{\int_{\mathbb{R}} v(t,y)\,dy} \tag{17}$$
for all $t \geq 0$, $x \in \mathbb{R}$.
Conversely, we need to show that the function $u$ given by (17) is the solution of (1) in the sense of Definition 3.3. Let $T > 0$.
Since $v$ is continuous and positive for positive times, the function $u$ is continuous, which shows the continuity item of Definition 3.3.
Next, since $v(0,\cdot) = u_0$ and $v$ is continuous at $t = 0$, the initial condition is satisfied. |
e5cd382c33c0540f | Raman Effect In Hindi Pdf
THEORY OF THE RAMAN EFFECT. 1. The Raman effect: The Raman effect was first theoretically predicted by A. Smekal (1923), followed by quantum mechanical descriptions by Kramers and Heisenberg (1925) and by Dirac (1927). The first experimental evidence for the inelastic scattering of light by molecules, such as liquids, was observed by Raman.
This effect was first discovered by C. V. Raman and K. S. Krishnan in 1928 in India [64] and was named Raman scattering. It is a very rare and fast process, with a probability of only about $10^{-6}$ per incident photon.
"The Raman effect: Investigation of molecular structure by light scattering", by C. V. Raman (Received 9th September). Part I, 1.
Introduction and historical. In the scheme of discussion organised by the Faraday Society, the phenomenon of the scattering of light of altered wavelength rightly occupies a prominent place. The Raman effect is a change in the wavelength of light that occurs when a light beam is deflected by molecules. When a beam of light traverses a dust-free, transparent sample of a chemical compound, a small fraction of the light emerges in directions other than that of the incident (incoming) beam. Most of this scattered light is of unchanged wavelength; a small part, however, has wavelengths different from that of the incident light. Since the discovery of the Raman effect, a type of scattering, in 1928, its various applications have continued to attract interest (Agarwal and Atalla). In Raman spectroscopy, a laser is shone on the sample.
Key features of the technique:
• The Raman effect is small, but accessible by the use of lasers.
• It gives information complementary to IR spectroscopy (homonuclear diatomic molecules, low frequency range).
• In situ analysis of organic and inorganic compounds.
• Analysis of aqueous solutions and solids (powders).
• Use of resonance and surface enhancement effects.
Raman measures the effect of light scattering. Analysis of light scattered by a liquid is not an easy task, and much of the early work in Calcutta was done by visual observation of colour rather than by precise measurement of the light's wavelength, as shown in Figure 1.
In 1928 the Indian physicist C. V. Raman (1888-1970) discovered the effect named after him, virtually simultaneously with the Russian physicists G. S. Landsberg and L. I. Mandelstam. I first provide a biographical sketch of Raman through his years in Calcutta and Bangalore, and then discuss his scientific work in acoustics, astronomy, and optics.
Raman scattering or the Raman effect (/ˈrɑːmən/) is the inelastic scattering of photons by matter, meaning that there is an exchange of energy and a change in the light's direction. Typically this involves vibrational energy being gained by a molecule as incident photons are shifted to lower energy. For the molecular scattering of light and the "Raman effect", Raman received the Nobel Prize in 1930. The effect was discovered in 1928 by C. V. Raman and his student K. S. Krishnan in liquids, and independently by Grigory Landsberg and Leonid Mandelstam in crystals; it had been predicted theoretically by Adolf Smekal in 1923. Raman spectroscopy, built on this effect, measures the vibrational modes of a molecule.
When a sample is exposed to monochromatic radiation, the majority of the light is transmitted; the remaining part is scattered, and Raman spectroscopy measures the scattered light. From this we obtain molecular analysis: every molecule has its own spectrum, which gives a characteristic fingerprint for each compound.
This is known as the Raman effect; the series of lines in the scattering of light by atoms and molecules is known as the Raman spectrum. The Raman effect comprises a very small fraction of the incident photons. A Raman spectrum is a plot of the intensity of Raman-scattered radiation as a function of its frequency difference from the incident radiation (usually in units of wavenumbers, cm⁻¹). This difference is called the Raman shift.
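In formula form (a standard relation, added here for concreteness), the Raman shift in wavenumbers is computed from the excitation and scattered wavelengths as
$$\Delta\tilde{\nu}\;(\mathrm{cm}^{-1}) = \left(\frac{1}{\lambda_0} - \frac{1}{\lambda_s}\right)\times 10^{7}, \qquad \lambda_0,\ \lambda_s \ \text{in nm},$$
so that, for example, 532 nm excitation scattered at 563.5 nm corresponds to a shift of about 1050 cm⁻¹.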
The Raman effect was first discovered by the Indian scientist C. V. Raman and the Soviet scientist Mandelstam, independently, in 1928. Because of the weak intensity of Raman scattering, Raman spectrum studies were originally limited to the linear Raman spectrum.
The investigation is naturally much more difficult in the case of gases and vapours, owing to the excessive feebleness of the effect. Aug 15, · Nonlinear Raman Spectroscopy includes: Hyper Raman spectroscopy, coherent anti-Stokes Raman Spectroscopy, coherent Stokes Raman spectroscopy, stimulated Raman gain and inverse Ramen spectroscopy.
Nonlinear Raman spectroscopy is more sensitive than classical Raman spectroscopy and can effectively reduce/remove the influence of fluorescence. Raman's second important discovery on the scattering of light was a new type of radiation, an eponymous phenomenon called Raman effect. InK. S. Krishnan, the Research Associate in his laboratory, noted the theoretical background for the existence of an additional scattering line beside the usual polarised elastic scattering when light.
C.V. Raman was the first Indian to win the Nobel Prize for Physics. He won it for his discovery, ‘The Raman Effect’. This biography of C.V. Raman provides detailed information about his childhood, life, achievements, works & timeline.
Coherent anti-Stokes Raman spectroscopy, also called Coherent anti-Stokes Raman scattering spectroscopy (CARS), is a form of spectroscopy used primarily in chemistry, physics and related fields. It is sensitive to the same vibrational signatures of molecules as seen in Raman spectroscopy, typically the nuclear vibrations of chemical xn----ctbrlmtni3e.xn--p1ai Raman spectroscopy, CARS employs multiple Missing: hindi.
Raman spectroscopy (/ ˈ r ɑː m ən /); (named after Indian physicist C. V. Raman) is a spectroscopic technique typically used to determine vibrational modes of molecules, although rotational and other low-frequency modes of systems may also be observed.
Raman spectroscopy is commonly used in chemistry to provide a structural fingerprint by which molecules can be identified. Raman Spectroscopy Rayleigh and Raman scattering (Stokes and anti-Stokes) as seen on energy level diagram. An associated spectrum is included, note the Raman lines intensity are greatly exaggerated. Raman spectra are usually shown in wavenumbers as a shift from the Rayleigh scattered xn----ctbrlmtni3e.xn--p1aig: hindi. Raman effect, change in the wavelength of light that occurs when a light beam is deflected by moleclues.
When a beam of light traverses a dust-free, transparent sample of a chemical compunds, a small fraction of the light emerges in directions oth. Raman Effect. Raman Effect refers to the inelastic scattering of a photon by molecules which are excited to higher vibrational or rotational energy levels. Part of the light beam after passing through transparent medium gets scattered and the wavelength of these scattered rays is different from that of the incident rays of light.
May 03, · Basically You can tell Why Type of material light passing through, every object deflect light at different wavelength, so you can recognize material from light. so its used at many places from biology to meteorology but notably astrology for te.
Jun 01, · A quantum mechanical analysis is made of higher order processes in Raman scattering. In particular the examples of coupled Stokes and Antistokes radiation and the generation of 1st and 2nd Stokes radiation are considered. All fields, electromagnetic and phonons, are quantized and the Schrödinger equation for the system is solved exactly.
This completely quantum mechanical approach Missing: hindi. The Zeeman effect (/ ˈ z eɪ m ən /; Dutch pronunciation:), named after the Dutch physicist Pieter Zeeman, is the effect of splitting of a spectral line into several components in the presence of a static magnetic xn----ctbrlmtni3e.xn--p1ai is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric xn----ctbrlmtni3e.xn--p1ai similar to the Stark effect Missing: hindi.
Raman effect (definition): a change in wavelength of light that is scattered by electrons within a material. The Compton effect (also called Compton scattering) is a related inelastic process: a high-energy photon collides with a target and releases loosely bound electrons from the outer shell of the atom or molecule, and the scattered radiation experiences a wavelength shift that cannot be explained in terms of classical wave theory, thus lending support to Einstein's photon theory.
The Raman effect is a two-photon scattering process: for Raman-active vibrations, what matters is a change in the molecular polarizability rather than in the permanent dipole. Raman spectroscopy relies on this inelastic scattering of monochromatic light to identify molecular components; Raman scattering occurs when there are changes in electronic, vibrational or rotational energy, and this wavelength change in light is also referred to as the Raman effect.
Advantages of Raman monitoring:
• Optical fibre coupling up to hundreds of meters in length can be used for remote analyses.
• Suitable for harsh environments, such as high pressure and temperature.
• Multiplexing advantage, with multiple probes run from a single Raman base unit.
• Non-contact measurements minimise the risk of sample contamination.
• No interference from water or CO2.
C. V. Raman (Tamil: Chandrasekhara Venkata Raman) (7 November 1888 - 21 November 1970) was an Indian physicist, known for his work on the scattering of light.
The latter half of Part I is devoted to more novel subjects in vibrational spectroscopy, such as resonance and non-linear Raman effects, vibrational optical activity, time-resolved spectroscopy and computational methods. Thus, Part I represents a short course in modern vibrational spectroscopy.
Thus, Part 1 represents a short course into modern vibrational spectroscopy. Titanium dioxide and alkali metal titanates are widely used in a variety of applications such as pigments in paint and skin care products and as photocatalysts in energy conversion and utilization. Sodium titanates have been shown to be effective materials to remove a range of cations over a wide range of pH conditions through cation exchange reactions.
|
318c5e1692109490 | Strongly modulated transmission of a spin-split quantum wire with local Rashba interaction
David Sánchez Departament de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain Llorenç Serra Departament de Física, Universitat de les Illes Balears, E-07122 Palma de Mallorca, Spain Institut Mediterrani d’Estudis Avançats IMEDEA (CSIC-UIB), E-07122 Palma de Mallorca, Spain Mahn-Soo Choi Department of Physics, Korea University, Seoul 136-701 Korea
September 16, 2019
We investigate the transport properties of ballistic quantum wires in the presence of Zeeman spin splittings and a spatially inhomogeneous Rashba interaction. The Zeeman interaction is extended along the wire and produces gaps in the energy spectrum which allow electron propagation only for spinors lying along a certain direction. For spins in the opposite direction the waves are evanescent far away from the Rashba region, which plays the role of the scattering center. The most interesting case occurs when the magnetic field is perpendicular to the Rashba field. Then, the spins of the asymptotic wavefunctions are not eigenfunctions of the Rashba Hamiltonian, and the resulting coupling between spins in the Rashba region gives rise to sudden changes of the transmission probability when the Fermi energy is swept along the gap. After briefly examining the energy spectrum and eigenfunctions of a wire with extended Rashba coupling, we analyze the transmission through a region of localized Rashba interaction, in which a double interface separates a region of constant Rashba interaction from wire leads free from spin-orbit coupling. For energies slightly above the propagation threshold, we find the ubiquitous occurrence of transmission zeros (antiresonances), which are analyzed by matching methods in the one-dimensional limit. We find that a minimal tight-binding model yields analytical transmission lineshapes of the Fano antiresonance type. More general angular dependences of the external magnetic field are treated within projected Schrödinger equations with Hamiltonian matrix elements mixing wavefunction components. Finally, we consider a realistic quantum wire where the energy subbands are coupled via the Rashba intersubband coupling term and discuss its effect on the transmission zeros. We find that the antiresonances are robust against intersubband mixing, magnetic field changes, and smooth variations of the wire interfaces, which paves the way for possible applications of spin-split Rashba wires as spintronic current modulators.
71.70.Ej, 72.25.Dc, 73.63.Nm
I Introduction
i.1 Motivation
Since the pioneering Datta-Das proposal of an electronic field-effect transistor in which the current flow is controlled by magnetic means only,dat90 the study of the Rashbaras60 ; byc84 spin-orbit interaction in one-dimensional (1D) and quasi-one-dimensional ballistic channels (quantum wires) has attracted a lot of interest.sat99 ; mor99 ; mir01 ; kis01 ; mol01 ; egu02 ; gov02 ; fev02 ; bul02 ; and03 ; stre03 ; scha04 ; per04 ; nes04 ; cah04 ; wan04 ; she04 ; zha05 ; kno05 ; per05 ; rom05 ; deb05 ; ser05 ; zhan05 ; zhan06 ; rey06 ; san06 ; jeo06 ; sha06 ; zha06 ; per07 ; nev07 Precise tunability of the strength of the Rashba coupling has also been demonstrated experimentally in quantum wells.nit97 ; eng97 ; gru00 Typically, semiconductor quantum wires are built from two-dimensional electron gases formed at the interface of a semiconductor heterostructure when the lateral motion of electrons is restricted by a transversal confinement potential to effective widths of the order of the de Broglie electron wavelength. For very clean quantum wires (e.g., quantum point contacts) transport is ballistic and the conductance is quantized in integer multiples of $2e^2/h$.wee88 ; wha88
The presence of impurities or defects in the vicinity of the constriction destroys conductance quantization.chu89 ; bag90 ; fai90 ; tek91 ; gur93 ; noc94 A striking effect arises when the impurity potential is attractive and enables the existence of at least one bound state whose energy is degenerate with the continuum band of propagating states. As a consequence, for energies close to the transition threshold a direct transmission channel can interfere with a wave trajectory that travels across the bound state; this interference is destructive, leading to enhanced backscattering and asymmetric Fano lineshapes.fano ; cer73 ; gor00 ; kob02 Recently, two of us san06 have demonstrated that a spin-orbit interaction of the Rashba type localized in an infinitely long quantum wire plays a role similar to that of an attractive potential, and pronounced dips are seen in numerical simulations of the conductance curves.she04 ; zhan05 ; san06 ; zha06 It is remarkable that the Rashba interaction provides both the attractive potential that supports bound statesval04 ; cse04 and the mixing term that couples the localized and the propagating states.san06 Interestingly, when charging effects are taken into account, Coulomb blockade resonances can be tuned by directly modulating the strength of the Rashba coupling.lop07
A magnetic field applied in the wire plane leads to Zeeman spin splitting of the 1D modes. Evidence of this is shown in the appearance of conductance plateaus at odd multiples of .wha88 In the first plateau the current is fully polarized since only one spin species is allowed to propagate. Quantum states with opposite spin are evanescent asymptotically and do not take part in electron transport unless there exist inhomogeneities that give rise to resonances or Fano-type interferences in which case evanescent states are crucial. Reference ser07, presents a theoretical method to calculate evanescent states in quantum wires with uniform Rashba interaction.
To determine the full transmission pattern of a generic quantum wire, one must first analyze the energy spectrum of the wire. For quantum wires with uniform Rashba interaction in the absence of external magnetic fields, free-electron energy bands are parabolas shifted apart for opposite spin directions.mol01 The splitting size is proportional to the spin-orbit interaction strength and in the quasi-1D case the Rashba interaction produces anticrossings between bands corresponding to opposite spins and adjacent modes.mor99 ; gov02 Moreover, the propagation threshold is shifted, compared to the case with no spin-orbit coupling, down to an energy . In the presence of an in-plane magnetic field, the energy spectrum changes dramatically even for arbitrarily small fields. The field can be either externally applied or originated from stray fields of the ferromagnets coupled to the wire in the Datta-Das setup.dat90 It is shownstre03 ; per04 ; ser05 that the interplay between the magnetic field and the Rashba interaction leads to the openings of gaps in the 1D energy bands at small wavenumbers. In a quasi-1D wire most of the energy dispersions around the gap form energy minima locally in contrast to the maxima encountered in the 1D case.ser05 In those energy windows in which the gap consists of an energy local maximum followed by a local minimum the conductance curves present anomalous steps for chemical potentials within the gap.per04 ; ser05
In short, external magnetic fields lead to the formation of energy gaps in the spectrum while local Rashba interactions produce Fano-type antiresonances due to the formation of quasi-bound states coupled to the channel of direct transmission states. Therefore, we expect a rich interplay between in-plane fields and localized Rashba spin-orbit couplings in the transport properties of a ballistic quantum wire. This paper presents a generic theoretical description of the quantum transmission of an electron subject to Zeeman splittings and spatially modulated Rashba fields.
i.2 Main findings
We find the occurrence of exact transmission zeros in the conductance of a Zeeman-split wire with local Rashba interaction as a function of the Fermi energy. Central to the existence of the transmission zeros are the formation of a Zeeman gap arising from an in-plane magnetic field and the role of the evanescent states within the Rashba region. In fact, the Rashba interaction couples the propagating and evanescent states precisely in the interior of the Rashba region. The transmission antiresonances are almost universal, showing a Fermi energy with vanishingly small transmission at moderately low magnetic fields. This might be relevant for applications, since it provides two operation points for working transistors (low and high current states). It is important to stress that these transmission zeros are fundamentally distinct from the suppressed transmission that may take place in a Datta-Das setup due to spin precession,dat90 even in the presence of in-plane magnetic fields.nev07 The antiresonance position can be tuned with a slight change of the Rashba strength and is robust against changes of the magnetic field. We only require that the Fermi energy lie within the gap.
i.3 Outline
The outline of the paper is as follows. Section II is devoted to the transport properties of a 1D wire, where only one subband is taken into account and the Rashba-induced intersubband coupling is neglected. In Sec. II.1 we discuss the eigenstates and energy spectrum of a 1D wire subject to Rashba interaction and Zeeman spin splittings. In Sec. II.2 we consider a finite Rashba region with constant Rashba strength and a magnetic field pointing in a direction perpendicular to the Rashba field, and calculate the transmission within the scattering formalism and numerical matching. A tight-binding description of the problem is considered in Sec. II.3. We derive an exact expression for the transmission in the limit of a minimal Rashba region and discuss the Fano form of the lineshape. To end Sec. II, we present in Sec. II.4 results for the dependence on the magnetic field direction. In Sec. III we examine a quasi-1D wire. The numerical results are in agreement with the 1D case, thus demonstrating that the sharp antiresonances are robust even when intersubband coupling is present, as in realistic wires. We also discuss an arbitrary dependence of the Rashba strength on position and compare our results when the Rashba interaction increases smoothly at the interfaces. Finally, Sec. IV contains the conclusions.
Ii One-Dimensional Wire
We consider a 2D electron gas formed in the $x$-$y$ plane due to strong confinement in the $z$ direction. As a result of the interfacial electric field, there arises a spin-orbit coupling of the Rashba type with a Hamiltonian given by
$$\mathcal{H}_R = \frac{\alpha}{\hbar}\left(p_y\,\sigma_x - p_x\,\sigma_y\right), \tag{1}$$
where the Rashba strength $\alpha$ can be spatially modulated. The limit of a purely 1D system is obtained by further constraining the electron motion along, e.g., the $x$ direction. Then, the $p_y$ term in Eq. (1) is neglected and the Rashba interaction plays the effective role of a momentum-dependent magnetic field with a direction along the $y$ axis. In the following section we discuss the spectrum and eigenfunctions of a 1D wire when the Rashba strength is uniform and the external magnetic field points either in the $x$-$y$ plane ("in-plane" field) or along the vertical direction ("perpendicular" field).
ii.1 Extended Rashba interaction
ii.1.1 In-plane field
We consider an in-plane magnetic field with arbitrary direction, $\mathbf{B} = B(\cos\theta, \sin\theta, 0)$, giving rise to a Zeeman interaction.per04 ; ser05 Then, the single-particle Hamiltonian reads
$$\mathcal{H} = \frac{p_x^2}{2m^*} + \mathcal{H}_R + \frac{\Delta_B}{2}\big(\sigma_x\cos\theta + \sigma_y\sin\theta\big), \tag{2}$$
where $\Delta_B$ is the Zeeman splitting. When we assume a constant Rashba strength from $x = -\infty$ to $x = +\infty$, the Hamiltonian is diagonalized by the spinor wavefunctions (3) (we take the spin quantization direction along $z$),
where $s = \pm$ is the branch-splitting quantum index and $k$ is the wavevector associated with free motion along $x$. The spin orientation is determined from Eq. (4).
We note that there exists no common spin quantization axis, since the spinor orientation depends on $k$.ser05 This is due to the presence of the magnetic field: for $\Delta_B = 0$ the spinors lie along $y$ (the Rashba axis). The effect is akin to a 2D system with Rashba interaction, for which the spin direction is tied to the momentum. However, in the 2D case the spin orientation is always perpendicular to the momentum, whereas in the 1D case the spin is quantized along $y$ only for asymptotically large momenta ($|k| \to \infty$). A similar effect arises from Rashba intersubband coupling in quasi-1D systems.gov02
Figure 1: Energy spectrum for a 1D quantum wire with uniform Rashba coupling for a Zeeman splitting and different magnetic field angles: (a) , (b) and (c) .
From the Schrödinger equation, $\mathcal{H}\Psi = E\Psi$, one finds the energy spectrum (5).
The case of interest occurs for small Zeeman splittings; see Fig. 1(a). The lowest branch of the spectrum develops a local maximum around $k = 0$, which has important consequences for transport.stre03 ; per04 ; nes04 ; ser05 As a result, there arises a pseudogap between the two spectrum branches. For Fermi energies lying in the pseudogap region, there is only one wavefunction with a given spin direction for each mover (right-moving or left-moving). In addition, there also exist evanescent waves, which are crucial when the Rashba interaction is confined to a finite region. Outside the pseudogap region, there are four real wavevectors for a given energy.
Increasing $\theta$ from $0$ to $\pi/2$ leads to a progressive reduction of the pseudogap size; see Figs. 1(b) and 1(c). For $\theta = \pi/2$ the gap vanishes and the spinors point along the $y$ axis, since in this case the field axis and the Rashba axis coincide.
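For orientation, the standard 1D dispersion for a uniform Rashba wire with the field perpendicular to the Rashba axis reads (a textbook sketch consistent with Fig. 1(a), not a formula quoted from this paper):
$$E_\pm(k) = \frac{\hbar^2 k^2}{2m^*} \pm \sqrt{\alpha^2 k^2 + (\Delta_B/2)^2},$$
so the branches are split by the Zeeman gap $\Delta_B$ at $k = 0$, while for small $\Delta_B$ the lower branch keeps its two lateral minima and a local maximum at $k = 0$, which is precisely the pseudogap structure described above.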
We note in passing that when $\alpha(x)$ is constant (or, more generically, an even function of $x$), there exists a symmetry property of the Hamiltonian given by Eq. (2). Let us concentrate on the case $\theta = 0$. Then $\mathcal{H}$ is invariant under the transformation (6),
namely, the rotation by $\pi$ around a fixed axis in spin space followed by the parity operator $P$, which yields inversion in the $x$ direction. The additional phase factor ensures that the transformation squares to the identity. Similar symmetry properties have been discussed in Refs. bul99, ; kis01, ; deb05,, which find that the spin parity, i.e., the combination of parity and a Pauli matrix, is a constant of motion for vanishing field. Here, the transformation commutes with $\mathcal{H}$ even for nonzero fields when $\theta = 0$. Hence, one can find a common basis of eigenstates for $\mathcal{H}$ and this operator. The wave functions given by Eq. (3) are not eigenstates of the transformation; in fact, it interchanges them. Therefore, one could construct states with definite parities from their even and odd combinations.
ii.1.2 Perpendicular field
For fields pointing along $z$, the Zeeman term in Eq. (2) is replaced by $(\Delta_B/2)\,\sigma_z$. The spectrum is identical to the in-plane perpendicular case, Fig. 1(a). Therefore, for the sake of the present discussion, the cases of $\mathbf{B}$ parallel to $x$ and parallel to $z$ are equivalent in the 1D limit, since both are perpendicular to the Rashba field direction (along $y$). Of course, in the quasi-1D case one should also take into account orbital effects, but in this section we can neglect them. Let us focus on the lower branch and energies within the pseudogap. We will find that strong transmission changes take place in that region.
For there are two propagating solutions
with wavevector [see Fig. 1(a)]
where we have defined
As discussed above, the angle depends on the wavevector:
Further, the pseudogap region admits two more solutions, which are evanescent waves. For one side of the Rashba region we find the decaying solutions (11).
For the opposite side one must make the corresponding replacements. Moreover, the decay constants follow from (12).
In Fig. 1(a) we illustrate the "dispersion" relation for the evanescent states. The evanescent states make sense only for a nonvanishing Zeeman splitting, as can be readily seen by setting $\Delta_B = 0$ in Eqs. (8) and (14): the evanescent wavevectors become pure imaginary, which corresponds to four propagating solutions (two left-moving and two right-moving) with a definite spin direction.mol01
ii.2 Local Rashba interaction
We now consider a double interface between a normal conduction band and a region of localized Rashba interaction; see Fig. 2. Then, $\alpha(x) = \alpha$ inside the Rashba region and $\alpha(x) = 0$ elsewhere. Our basic goal is to find the transmission through the Rashba region when the magnetic field is present all along the wire, producing a Zeeman gap. For convenience, we take the field direction along $z$ since the solutions are then simpler to write down.
Figure 2: Schematic representation of a local Rashba interaction in a 1D quantum wire.
We are interested in energies inside the spin pseudogap, for which a spin-down (spin-up) electron wave is propagating (evanescent). In the scattering problem, an electron with spin down is injected from the left and reflected with a certain probability. Since the spin quantization axis in the Rashba region depends on the wavevector, we must also take into account the spin-up evanescent waves on both sides of the interfaces. As a consequence, the scattering wave function outside the Rashba region reads (16),
whereas inside one should make the corresponding replacements in the expressions for the evanescent states of the Rashba region. In Eq. (16) the wavevector is written in terms of the Fermi wavevector. We note that it can be pure imaginary below threshold, though the physical wavevector is always a manifestly positive and real quantity. The evanescent wave is described by an exponentially decreasing amplitude with a finite range, whereby the probability of finding a spin-up electron on the left or right sides is nonzero. At small wavevectors the propagating states have their spins approximately along the field axis.
We numerically find the coefficients from the matching equations. At the interfaces the wavefunction must be continuous. Moreover, the flux, given by the velocity operator, must also be continuous. On the normal sides the velocity operator is trivially $p_x/m^*$, while in the spin-orbit region it acquires a Rashba contribution.mol01
Importantly, the flux associated with Eq. (7), which is proportional to the group velocity,
is positive (negative) for right (left) movers.
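Explicitly, with the standard 1D Rashba velocity operator (a generic reconstruction consistent with Eq. (1), not quoted from the paper), the matching conditions at an interface $x_i$ read
$$\hat{v}_x = \frac{p_x}{m^*} - \frac{\alpha(x)}{\hbar}\,\sigma_y, \qquad \Psi(x_i^-) = \Psi(x_i^+), \qquad \big[\hat{v}_x\Psi\big](x_i^-) = \big[\hat{v}_x\Psi\big](x_i^+),$$
the flux condition being equivalent to continuity of $\Psi' - i\,(m^*\alpha/\hbar^2)\,\sigma_y\Psi$: the spin-orbit step shifts the derivative jump much as a delta-like vector potential would.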
The transmission and the reflection are given by $T = |t|^2$ and $R = |r|^2$, respectively. Since we are interested in the relative influence of the Zeeman splitting and the Rashba strength while keeping the Rashba region length constant, in the numerical results we give the energies in terms of the natural energy unit of the Rashba region. A characteristic transmission curve is plotted in Fig. 3 for fixed Zeeman splitting while the Rashba strength is slightly varied. We make the important observation that there arises an exact transmission zero within the gap. On increasing the Rashba strength, the antiresonance position shifts to lower energies and, at the same time, the resonance broadening is enhanced. This dependence will become clear later, when we discuss the tight-binding model.
Figure 3: (Color online) Transmission through a Rashba region for and different spin-orbit intensities: (full line), 6.64 (dashed), 6.88 (dot-dashed), 7.12 (dotted).
The transmission for fixed Rashba strength and various magnetic fields is shown in Fig. 4. The transmission curves are reminiscent of Fano lineshapes. An interesting question is thus whether the transmission behavior is indeed related to a Fano-type interference effect. While in strictly one-dimensional systems the interference giving rise to Fano lineshapes is not possible, due to the existence of only one channel, here, owing to the Zeeman splitting, there exist two modes, namely, the propagating (spin-down) mode and the evanescent (spin-up) mode. Both modes become coupled locally within the Rashba region. As a result, the effect is due to a subtle combination of spin-orbit interaction and Zeeman splitting which leads to destructive interference in the Rashba region. A simplified model, discussed below, will shed light on this. For the moment, we note that Fig. 4 shows that a rather small Zeeman gap suffices for the antiresonance to develop.
Figure 4: Transmission versus Fermi energy for and different Zeeman splittings: (a), 0.128 (b) and 0.064 (c).
The energy and length scales considered above are within reach of present techniques. E.g., for a Rashba region of size m the value corresponds to meV nm, which is accessible in an InAs wire.nit97 The Zeeman energy used in Fig. 3 corresponds to a magnetic field mT in the same material, and in Fig. 4(c) it is only 60T. Notably, the effect scales with . Therefore, a smaller would require a larger wire for the antiresonance to be observable.
II.3 Tight-binding model
The continuum model discussed above leads to remarkable predictions for the conductance of a spin-split quantum wire with a local Rashba interaction, but to gain further insight it would be highly desirable to have a simplified model capable of yielding closed analytical formulas for the dip position and shape. In this subsection we consider a discretized version of the Hamiltonian [Eq. (2)]. The infinite 1D wire is modeled as a linear chain of sites. Thus, we obtain the following tight-binding Hamiltonian,and89
where the summations over and are carried out over an infinitely extended 1D wire and the last summation runs over the sites of the Rashba region. In this equation, couples nearest neighbors, is the Rashba interaction strength, which couples electronic states with opposite spin directions along , and is the lattice parameter. The on-site energies are given by [ for ]. In the limit , the Hamiltonian is equivalent to , whose transport properties have been analyzed above. Here, in order to obtain simplified expressions, we consider a localized Rashba interaction restricted to only two sites, 0 and 1 (see Fig. 5). We note that this is the minimal model that captures the transport properties of a 1D wire with a Rashba region.
Figure 5: In the minimal tight-binding model, the system consists of a linear chain of coupled sites with a localized Rashba interaction at sites and .
For and the energy band spectrum is given by the well-known expression . In the presence of a magnetic field, the spectrum becomes spin split. We now focus on an energy range close to the band bottom, . Then, the eigenfunctions corresponding to spin () are propagating (evanescent) waves. Since we intend to solve the scattering problem of an -electron wave impinging on the Rashba region, we introduce the wave amplitudes for , and for , with and the transmission and reflection probability amplitudes. The wavenumber is related to the total energy by means of . For electrons with spin , the energy outside the Rashba region falls below the band bottom. As a result, we take the wave amplitudes for and for , where . Substituting the total wavefunction into the Schrödinger equation and projecting onto the sites of the Rashba region , we find the transmission,
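For reference, in a standard nearest-neighbor convention (ours; the paper's signs may differ), with hopping t, lattice spacing a and spin-dependent on-site energy ε_σ, the propagating and evanescent lead solutions satisfy

$$
E=\varepsilon_\sigma-2t\cos(ka)\ \ \text{(propagating)},
\qquad
E=\varepsilon_\sigma-2t\cosh(\kappa a)\ \ \text{(evanescent)},
$$

so for an energy below the spin-up band bottom the spin-up amplitudes decay as e^{∓κna} away from the Rashba sites, while the spin-down amplitudes remain plane waves e^{±ikna}.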
which allows us to determine the exact condition for the occurrence of zeros in the transmission function :
This expression has a very appealing form. For a given value of , the antiresonance energy lies to the left of , as shown in the numerical results of Figs. 3 and 6. Moreover, it predicts that the dip shifts to lower energies as the Rashba interaction strength increases. This is reproduced in Fig. 6, where we plot a characteristic as a function of energy for . In addition, the dip broadens as increases, in excellent agreement with the numerical results of the continuum model, see Fig. 3. Equation (21) also explains why the critical Zeeman splitting below which the dip disappears is so small: the antiresonance is observable only if lies above the band bottom, i.e., . It then follows that , where is a slowly increasing function of for . As a result, , which is at least two orders of magnitude smaller than the bandwidth. Incidentally, we also find an upper bound on the Rashba strength, , above which the dip vanishes. For , we obtain . This is an interesting feature for applications, since very strong Rashba couplings are not necessary to generate the antiresonance.
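To make the minimal model concrete, the following is a short numerical sketch in our own conventions (parameter names, signs and the spin-flip hopping i t_so σ_y between the two Rashba sites are all assumptions, not the paper's definitions): the transmission through a two-site Rashba region embedded in Zeeman-split tight-binding leads, computed with the standard lead self-energy (Green's function) recipe. Scanning T(E) across the window where only the spin-down channel propagates should reveal the dip for suitable t_so and E_Z.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def surface_g(E, eps, t, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite chain with
    on-site energy eps and hopping -t. Of the two roots of
    t^2 g^2 - (E - eps) g + 1 = 0, the smaller-magnitude one is the
    decaying (retarded) solution, inside and outside the band alike."""
    z = E - eps + 1j * eta
    s = np.sqrt(z * z - 4 * t * t)
    g1, g2 = (z - s) / (2 * t * t), (z + s) / (2 * t * t)
    return g1 if abs(g1) < abs(g2) else g2

def transmission(E, t=1.0, t_so=0.2, Ez=0.1):
    """Total transmission through a two-site 'Rashba' region embedded in
    Zeeman-split 1D leads (band bottoms at -2t -/+ Ez/2 for spin down/up)."""
    zee = 0.5 * Ez * sz                   # Zeeman term: +Ez/2 up, -Ez/2 down
    V = -t * s0 + 1j * t_so * sy          # assumed spin-flip (Rashba) hopping
    H = np.zeros((4, 4), dtype=complex)   # device: 2 sites x 2 spins
    H[:2, :2] = zee
    H[2:, 2:] = zee
    H[:2, 2:] = V
    H[2:, :2] = V.conj().T
    # spin-resolved lead surface GF; self-energies Sigma = t^2 g on edge sites
    g = np.diag([surface_g(E, +0.5 * Ez, t), surface_g(E, -0.5 * Ez, t)])
    SL = np.zeros((4, 4), dtype=complex); SL[:2, :2] = t * t * g
    SR = np.zeros((4, 4), dtype=complex); SR[2:, 2:] = t * t * g
    G = np.linalg.inv(E * np.eye(4) - H - SL - SR)   # retarded Green's function
    GamL = 1j * (SL - SL.conj().T)
    GamR = 1j * (SR - SR.conj().T)
    return np.trace(GamL @ G @ GamR @ G.conj().T).real

# scan the pseudogap window near the band bottom, where only spin down propagates
for E in np.linspace(-2.2, -1.8, 9):
    print(f"E = {E:+.3f}   T = {transmission(E):.4f}")
```

One convenient feature of this recipe is that the evanescent spin-up channel is handled automatically: below its band bottom the corresponding self-energy is real, so that channel contributes no flux but still reshapes the propagating one.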
Figure 6: Transmission versus Fermi energy for and different spin-orbit strengths. All energies are given in units of .
The considerations made above suggest that the antiresonance has a Fano lineshape, but it is hard to demonstrate with controlled approximations that Eq. (20) has exactly this form. It is perhaps more instructive to consider a closely related model, in which the spin-flip interaction due to the Rashba coupling is restricted to a single point, see Fig. 7. Then, the Hamiltonian reads
As before, the spin-up and spin-down energy bands are shifted by the Zeeman splitting. The coupling between spins at the central site may represent the action of an external magnetic field pointing in a direction perpendicular to the spin quantization axis, or some other source of spin flipping. Spin-flip interactions in quantum dots have recently received much attention.sf1 ; sf2 ; sf3 ; sf4
We take the wavefunction ansatz for and for for the propagating states, and for and for for the evanescent states. From the tight-binding equations, we obtain the transmission amplitude,
We can define the broadening , which measures the coupling strength between spin-up and spin-down states and is proportional to the density of states (per unit length) for electrons with spin down. It reads,
Using this result and the expressions for and written above, we find the expression for the transmission probability,
valid for energies around the point . We note that Eq. (25) has the desired Fano form. In the conventional Fano effect, the coupling takes place between a bound state immersed in a continuum band and the propagating states.fano Here, the role of the bound state is played by the evanescent modes which, due to the spin-flip interaction, are coupled to the propagating states with opposite spin.
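For reference (our symbols: resonance position E₀, width Γ, asymmetry parameter q), the standard Fano lineshape to which the text refers is

$$
T(E)\;\propto\;\frac{(\varepsilon+q)^2}{\varepsilon^2+1},
\qquad
\varepsilon=\frac{E-E_0}{\Gamma},
$$

with an exact transmission zero at ε = −q; the expressions quoted above should reduce to this form for energies near the dip.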
Figure 7: Sketch of the system considered in the discussion for a point-like spin-flip interaction between propagating (spin down) and evanescent (spin up) states along a one-dimensional site lattice.
II.4 Angular dependence
We now discuss the dependence on the direction of the magnetic field. In the strictly 1D case, only one mode is needed. Thus, we expand the wavefunction in the two-spinor basis which, in the case of an in-plane field , takes the form:
with the spinors in the direction.
Substituting in the Schrödinger equation with the Hamiltonian given by Eq. (2), we obtain a pair of coupled equations for and :
where primes indicate and we recall . We use the following gauge transformation,
where , in order to eliminate the first derivatives in Eqs. (28,29), which are transformed into
where the elements of the Hamiltonian matrix are
with . It is clear that for the amplitudes and decouple, and for energies inside the gap the transmission is given simply by the propagating state . Then, propagating and evanescent states become decoupled and no dip is expected. Only for angles away from does the problem become nontrivial, since couples with the evanescent amplitude . Numerical results shown in Fig. 8 confirm this expectation.
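A hedged sketch of this standard maneuver, in our own convention with Rashba term (α/ħ)σ_y p_x and k_α = mα/ħ²: the substitution

$$
\psi(x)=e^{\,i k_\alpha x\,\sigma_y}\,\varphi(x)
$$

cancels the first-derivative (Rashba) term at the cost of a constant energy shift −mα²/2ħ², while a Zeeman term that does not commute with σ_y picks up a position-dependent rotation, e.g. σ_x → cos(2k_α x)σ_x ∓ sin(2k_α x)σ_z (signs depend on conventions). It is precisely this rotated Zeeman term that couples the propagating and evanescent amplitudes when the field points away from the Rashba spin axis.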
Figure 8: (Color online) Transmission as a function of Fermi energy varying the orientation of the in-plane magnetic field. The azimuthal angle for each curve is given in the legend. We take and .
III Quasi-One-Dimensional Wire
The preceding section has shown that the Fano resonance phenomenon manifests itself in 1D quantum wires with Zeeman splitting and a local Rashba interaction. Real wires, of course, always have some small extension in the lateral direction. In this section we analyze the influence of this extra dimension by considering a quasi-1D wire, including the direction. This lateral -confinement is usually weaker than the vertical -confinement. Thus, we neglect the contribution of the -confining electric field to the Rashba strength. For simplicity, we consider a transverse potential of parabolic type, .
Figure 9: (Color online) Transmission as a function of the Fermi energy for a quasi 1D wire with transverse parabolic confinement characterized by . A Rashba region of length and a Zeeman energy with the magnetic field along the wire () have been used. The legend gives the numerical values of for the different curves.
The additional spatial dimension is relevant now because the transverse momentum explicitly appears in the Rashba spin-orbit interaction, as shown by Eq. (1). It is also worth stressing that the new term in Eq. (1), proportional to , precludes the use of the analytical solution discussed in Sec. II.1 for a wire with an extended Rashba interaction. In fact, it is well known that it causes the formation of textured spin states lacking a well-defined spin quantization axis, even for a fixed value of the wavenumber .gov02 ; ser05 The Rashba coupling is assumed to be nonzero only in a region of length , where it takes the value , as in Sec. II.2. We also include, as in Sec. II.1, the Zeeman coupling of a magnetic field oriented along a certain azimuthal angle . The full Hamiltonian thus reads
A natural unit system for the present quasi-1D model is set by the wire transverse potential, with energy unit (oscillator energy) and length unit (oscillator length). In what follows the numerical values for the Rashba region length , spin-orbit intensity and Zeeman energy will be given in these oscillator units. In order to obtain the transmission of the system modeled by Eq. (37) we have used the quantum transmitting boundary algorithm, as in Ref. san06. The Schrödinger equation is discretized on a uniform grid using finite differences for the derivatives and imposing scattering boundary conditions. The reader is referred to Refs. san06; qtbm for additional details of the method.
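In standard notation (our symbols, assuming a confinement frequency ω₀), the parabolic confinement and the corresponding oscillator units are

$$
V(y)=\tfrac12\,m\,\omega_0^2\,y^2,
\qquad
E_{\rm osc}=\hbar\omega_0,
\qquad
\ell_0=\sqrt{\hbar/m\omega_0},
$$

so that quoting the Rashba region length, spin-orbit intensity and Zeeman energy in oscillator units means dividing lengths by ℓ₀ and energies by ħω₀.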
Figure 10: (Color online) Transmission versus Fermi energy varying the orientation of the in-plane magnetic field. The azimuthal angle for each curve is given in the legend. We use a Rashba region of length , Zeeman energy and spin-orbit coupling strength .
The existence of the Fano lineshapes in a quasi-1D wire, with a transmission zero at a given energy, is clearly shown in Fig. 9. This result proves that the physical effect elucidated with the tight-binding model of Sec. II.3 is robust and persists in more realistic models. There is also a nice qualitative agreement with the 1D results of Fig. 3. In all three cases (tight-binding, 1D and quasi-1D) increasing the value of leads to a shift of the transmission zero towards lower energies, and to an important broadening of the transmission dip. These are very appealing features for practical applications in spintronic devices, since they could allow one to control the transmission by tuning ; the device operation would not be very sensitive to small changes in , due to the broadness of the dip.
The scales used in Fig. 9 are of the same order as in Fig. 3. E.g., for a confinement strength meV in an InAs wire, we obtain m, meV ( mT) and meV nm.
When the magnetic field is oriented along the wire, as in Fig. 9, the interference leading to the Fano profiles in the transmission is maximal. By contrast, for the transverse orientation it disappears completely (see Fig. 10). This behavior is in agreement with the analysis of Sec. II.4 in the 1D case, where it was shown that the mixings and vanish for .
The evolution with the Zeeman field intensity in the quasi-1D case is shown in Fig. 11. The behavior is again qualitatively similar to the 1D case of Fig. 4, with the dip evolving towards smaller energies as the value of decreases. We also notice that, as predicted by the tight-binding result, a dip in the transmission is present even for quite small Zeeman energies.
Figure 11: (Color online) Transmission as a function of the Fermi energy for a quasi 1D wire with a Rashba region of and a magnetic field along . The different panels correspond to the given Zeeman energies (in units of ). In each panel, solid and dashed lines correspond to values of of 0.45 and 0.40, respectively.
Thus far we have considered abrupt interfaces between the normal sides and the Rashba region. Using the quasi-1D grid calculation we can also address the influence of a smooth transition of the Rashba coupling strength from zero to the finite value , which is closer to reality. We model each interface using a Fermi function with a diffusivity ,
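A plausible profile of this kind (our guess at the functional form; the paper's exact expression was lost in extraction), for a Rashba region between x_ℓ and x_r with diffusivity ρ, is

$$
\alpha(x)=\alpha_0\,
\frac{1}{1+e^{-(x-x_\ell)/\rho}}\;
\frac{1}{1+e^{+(x-x_r)/\rho}},
$$

which tends to α₀ well inside the region, to zero well outside it, and recovers the abrupt-interface limit as ρ → 0.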
Figure 12 shows the results for different values of . The transmission curve coincides with the abrupt-interface limit when , while for increasing the interface becomes smoother, but the transmission zero remains visible and the dip position does not change much. This is a crucial observation: the transmission zeros we find are roughly independent of the precise profile of the Rashba strength. This robustness has an obvious importance for potential spintronic applications.
Figure 12: (Color online) Transmission versus Fermi energy for and . Each curve corresponds to a different diffusivity , as given in the legend in units of , for a Rashba region of length .
IV Conclusions
We have performed a theoretical analysis of the transport properties of a ballistic quantum wire with a spatially inhomogeneous Rashba interaction in the presence of an external magnetic field giving rise to Zeeman spin splitting. When the Rashba coupling dominates over the magnetic field, an energy pseudogap develops in the wire spectrum. We find abrupt transmission lineshapes when the Fermi energy lies within the pseudogap. The lineshapes are narrow and asymmetric, and the transmission reaches zero for energies near the gap closing. We have discussed a minimal tight-binding model that reproduces the essential features of the resonances, yielding analytical expressions for the dependence of the lineshape on Fermi energy, Rashba intensity and Zeeman splitting. Qualitatively, the evanescent band plays the role of a quasi-bound state which the confined Rashba interaction couples to the propagating states. The evanescent waves are not true bound states, but when the Fermi energy approaches the evanescent band bottom electrons scattering off the Rashba region become strongly affected, leading to perfect reflection. Numerical results in realistic quantum wires agree with the purely 1D case. Finally, we have analyzed the behavior of the resonances when the angular orientation of the magnetic field is changed and when the interfaces become smoother.
The system studied here could work as a current modulator device. We have shown that for slight variations of the Fermi energy, which can be externally controlled, the transmission changes dramatically between two limiting values (1 and 0) across the antiresonance. Our proposal has a number of differences compared to the Datta-Das spin transistor.dat90 First, the latter device modulates the current independently of the energy of the injected electrons, since the phase difference that governs the spin precession is independent of the wavevector. Then, small changes of or strongly affect the working points of the transistor, whereas in our case these points are not very sensitive to small variations of the external parameters such as the external magnetic field, the Rashba strength or the interface diffusivity. Moreover, a 100% current modulation is hard to achieve in the Datta-Das transistor, especially when intersubband coupling is taken into account, whereas in our case the modulation is rather abrupt and is preserved even when adjacent subbands are coupled, an effect which is unavoidable in real quantum wires. We have reported results for the lowest spin-split subband but have checked that similar pronounced dips appear in higher subbands. Our results also differ from those of Ref. san06, since in that case the antiresonances reached zero only after a fine tuning of the parameters. Here, our only requirement is that the Fermi energy lie within the spectrum pseudogap.
As far as the discussion of 1D wires is concerned, the field directions orthogonal to the Rashba field (along , according to the parameterization of the Hamiltonian we have employed) are equivalent. However, in the quasi-1D case one of these directions (the one perpendicular to both the Rashba field and the electron propagation) induces orbital effects. Here, we have restricted ourselves to fields giving rise only to Zeeman splittings, since the study of orbital effects in inhomogeneous systems requires knowledge of the evanescent states when the magnetic field is applied perpendicular to the wire plane. This is not a trivial task and seems to be a promising avenue of future research. Reference deb05 finds important changes in the spectral structure of a quantum wire with uniform Rashba interaction and perpendicular magnetic fields. However, our sharp antiresonances show up even in the presence of rather small Zeeman splittings. Therefore, we expect that the dips should still be visible even when orbital effects are taken into account, provided the magnetic length is much larger than the confinement length.
In our discussion, we have neglected electron-electron interactions, which may lead to Luttinger liquid effects in 1D ballistic wires when the interactions are screened, as in a wire with electric-field-induced spin-orbit interactions.hau01 ; gri05 When Zeeman splittings are present,lee05 ; dev05 the transmission seems to be altered by electron-electron interactions, although these works neglect the intersubband coupling term of the Rashba interaction. In fact, for ballistic wires without Rashba coupling but with multiple populated subbands, a simple mean-field approach demonstratesbut03 that Coulomb interactions are crucial to understand the rectification effects observed in nanojunction rectifiers. On the other hand, single-particle effects are shownlop07 to lead to Coulomb blockade antiresonances of the Fano form. Hence, further work is needed to clarify the influence of Coulomb interactions on the conductance of a quantum wire with Zeeman splitting and a localized Rashba interaction.
This work was supported by the Spanish MEC Grant No. FIS2005-02796 and the “Ramón y Cajal” program.
• (1) S. Datta and B. Das, Appl. Phys. Lett. 56, 665 (1990).
• (2) E.I. Rashba, Fiz. Tverd. Tela (Leningrad) 2, 1224 (1960) [Sov. Phys. Solid State 2, 1109 (1960)].
• (3) Y. Bychkov and E. I. Rashba, J. Phys. C 17, 6039 (1984).
• (4) Y. Sato, S. Gozu, T. Kikutani, and S. Yamada, Physica B 272, 114 (1999).
• (5) A.V. Moroz and C.H.W. Barnes, Phys. Rev. B 60, 14272 (1999).
• (6) F. Mireles and G. Kirczenow, Phys. Rev. B 64, 024426 (2001).
• (7) A.A. Kiselev and K.W. Kim, Appl. Phys. Lett. 78, 775 (2001).
• (8) L.W. Molenkamp, G. Schmidt, and G.E.W. Bauer, Phys. Rev. B 64, 121202(R) (2001).
• (9) J.C. Egues, G. Burkard, and D. Loss, Phys. Rev. Lett. 89, 176401 (2003); J.C. Egues, G. Burkard, D.S. Saraga, J. Schliemann, and D. Loss, Phys. Rev. B 72, 235326 (2005).
• (10) M. Governale and U. Zülicke, Phys. Rev. B 66, 073311 (2002).
• (11) G. Feve, W.D. Oliver, M. Aranzana, and Y. Yamamoto, Phys. Rev. B 66, 155328 (2002).
• (12) E.N. Bulgakov and A.F. Sadreev, Phys. Rev. B 66, 075331 (2002).
• (13) E.A. de Andrada e Silva and G.C.L. Rocca, Phys. Rev. B 67, 165318 (2003).
• (14) P. Streda and P. Seba, Phys. Rev. Lett. 90, 256601 (2003).
• (15) Th. Schäpers, J. Knobbe, and V.A. Guzenko, Phys. Rev. B 69, 235323 (2004).
• (16) Yu.V. Pershin, J.A. Nesteroff, and V. Privman, Phys. Rev. B 69, 121306 (2004).
• (17) J.A. Nesteroff, Yu.V. Pershin, and V. Privman, Phys. Rev. Lett. 93, 126601 (2004).
• (18) M. Cahay and S. Bandyopadhyay, Phys. Rev. B 69, 045303 (2004).
• (19) X.F. Wang, Phys. Rev. B 69, 035302 (2004).
• (20) I.A. Shelykh and N.G. Galkin, Phys. Rev. B 70, 205328 (2004).
• (21) F. Zhai and H.Q. Xu, Phys. Rev. Lett. 94, 246601 (2005).
• (22) J. Knobbe and Th. Schäpers, Phys. Rev. B 71, 035311 (2005).
• (23) R.G. Pereira and E. Miranda, Phys. Rev. B 71, 085318 (2005).
• (24) C.L. Romano, S.E. Ulloa, and P.I. Tamborenea, Phys. Rev. B 71, 035336 (2005).
• (25) S. Debald and B. Kramer, Phys. Rev. B 71, 115322 (2005).
• (26) Ll. Serra, D. Sánchez, and R. López, Phys. Rev. B 72, 235309 (2005).
• (27) L. Zhang, P. Brusheim, and H.Q. Xu, Phys. Rev. B 72, 045347 (2005).
• (28) S. Zhang, R. Liang, E. Zhang, L. Zhang, and Y. Liu, Phys. Rev. B 73, 155316 (2006).
• (29) A. Reynoso, G. Usaj, and C.A. Balseiro, Phys. Rev. B 73, 115342 (2006).
• (30) D. Sánchez and Ll. Serra, Phys. Rev. B 74, 153313 (2006).
• (31) J.-S. Jeong and H.-W. Lee, Phys. Rev. B 74, 195311 (2006).
• (32) Th. Schäpers, V. A. Guzenko, M.G. Pala, U. Zülicke, M. Governale, J. Knobbe, and H. Hardtdegen, Phys. Rev. B 74, 081301(R) (2006).
• (33) L. Zhang, F. Zhai, and H.Q. Xu, Phys. Rev. B 74, 195332 (2006).
• (34) C.A. Perroni, D. Bercioux, V. Marigliano Ramaglia, and V. Cataudella, J. Phys.: Condens. Matter 19, 186227 (2007).
• (35) A.H. Nevidomskyy and K. Le Hur, arXiv:cond-mat/0608340.
• (36) J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Phys. Rev. Lett. 78, 1335 (1997).
• (37) G. Engels, J. Lange, Th. Schäpers, and H. Lüth, Phys. Rev. B 55, R1958 (1997).
• (38) D. Grundler, Phys. Rev. Lett. 84, 6074 (2000).
• (39) B.J. van Wees, H. van Houten, C.W.J. Beenakker, J.G. Williamson, L.P. Kouwenhoven, D. van der Marel, and C.T. Foxon, Phys. Rev. Lett. 60, 848 (1988).
• (40) D.A. Wharam, T.J. Thornton, R. Newbury, M. Pepper, H. Ritchie, and G.A.C. Jones, J. Phys. C 21, L209 (1988).
• (41) C.S. Chu and R.S. Sorbello, Phys. Rev. B 40, 5941 (1989).
• (42) P.F. Bagwell, Phys. Rev. B 41, 10354 (1990).
• (43) J. Faist, P. Guéret, and H. Rothuizen, Phys. Rev. B 42, R3217 (1990).
• (44) E. Tekman and S. Ciraci, Phys. Rev. B 43, 7145 (1991).
• (45) S.A. Gurvitz and Y.B. Levinson, Phys. Rev. B 47, 10578 (1993).
• (46) J.U. Nöckel and A.D. Stone, Phys. Rev. B 50, 17415 (1994).
• (47) U. Fano, Phys. Rev. 124, 1866 (1961).
• (48) F. Cerdeira, T.A. Fjeldly, and M. Cardona, Phys. Rev. B 8, 4734 (1973).
• (49) J. Göres et al., Phys. Rev. B 62, 2188 (2000).
• (50) K. Kobayashi, H. Aikawa, S. Katsumoto and Y. Iye, Phys. Rev. Lett. 88, 256806 (2002).
• (51) M. Valín-Rodríguez, A. Puente, and Ll. Serra, Phys. Rev. B 69, 085306 (2004).
• (52) J. Cserti, A. Csordás, and U. Zülicke, Phys. Rev. B 70, 233307 (2004).
• (53) R. López, D. Sánchez, and Ll. Serra, arXiv:cond-mat/0610515 (2006), to appear in Phys. Rev. B.
• (54) Ll. Serra, D. Sánchez, and R. López, arXiv:0705.1506 (2007).
• (55) E.N. Bulgakov, K.N. Pichugin, A.F. Sadreev, P. Streda, and P. Seba, Phys. Rev. Lett. 83, 376 (1999).
• (56) T. Ando, Phys. Rev. B 40, 5325 (1989).
• (57) W. Rudzinski and J. Barnas, Phys. Rev. B 64, 085318 (2001).
• (58) R. López and D. Sánchez, Phys. Rev. Lett. 90, 116602 (2003).
• (59) M.-S. Choi, D. Sánchez, and R. López, Phys. Rev. Lett. 92, 056601 (2004).
• (60) B. Dong, G.H. Ding, H.L. Cui, and X.L. Lei, Europhys. Lett. 69, 424 (2005).
• (61) C.S. Lent and D.J. Kirkner, J. Appl. Phys. 67, 6353 (1990).
• (62) W. Häusler, Phys. Rev. B 63, 121310 (2001).
• (63) V. Gritsev, G.I. Japaridze, M. Pletyukhov, and D. Baeriswyl, Phys. Rev. Lett. 94, 137207 (2005).
• (64) H.C. Lee and S.-R.E. Yang, Phys. Rev. B 72, 245338 (2005).
• (65) P. Devillard, A. Crepieux, K. I. Imura, and T. Martin, Phys. Rev. B 72, 041309(R) (2005).
• (66) M. Büttiker and D. Sánchez, Phys. Rev. Lett. 90, 119701 (2003).
• (67) A.M. Song, A. Lorke, A. Kriele, J.P. Kotthaus, W. Wegscheider, and M. Bichler, Phys. Rev. Lett. 80, 3831 (1998).
3121f5eb40a3331c | Interpretations of Quantum Theory: Physics Meets Philosophy
If one believes in the power of mathematics to describe the universe – as the language of God, so to speak, a notion which underpins all of Physics in the post-Enlightenment Era and is reflected in the two pillars of modern Physics, namely Relativity Theory and Quantum Theory, each of which has proven to have tremendously powerful predictive power for the explanation of measurement phenomena at the macrocosmic and microcosmic (subatomic) levels of the “physical world” respectively – then one is forced to radically change one’s perspective on, and fundamental definition of, “reality”. This is not a philosophical conclusion, or a theological one for that matter. It is a rationally deductive conclusion that anyone who understands modern Physics must arrive at if they follow the math. The two theories are fundamentally incompatible in the sense that they rest on fundamentally incompatible assumptions, each of which has been proven to be mathematically true and has been empirically verified. Most Physicists punt on the problem. They say that the math is a tool to predict the behavior of measurable phenomena in their respective domains, and that any interpretation of what the math “means” or “says” about the nature of reality is a problem for philosophers of science, and in effect outside the domain of pure “science”.
The author takes issue with this type of interpretation, however, even though it is the “standard” and “orthodox” view offered by Physicists and is most certainly the viewpoint offered by virtually every major textbook on Physics used to teach modern students about science in the West. This conclusion, which the author deems inescapable, in turn forces an expansion and redefinition of knowledge itself – knowledge which is typically confined to, and considered equivalent to, the conclusions drawn by Science, but which the science itself forces us to reconsider, as illustrated by any basic understanding of Quantum Theory as well as Relativity Theory – to include and integrate the “observer” as well as the “observed” into some sort of cohesive and coherent model. No matter what model one chooses to adopt, it is one that must sit “above”, ontologically speaking, any definition that can be offered by Physics or Science as it is understood today, and must incorporate some type of metaphysical intellectual system – going back to the beginning really, to what Aristotle called first philosophy, i.e. metaphysics – as the specific domain which must be explored and logically and rationally constructed to incorporate these scientific findings into our understanding of reality.
From a pure mathematical perspective, what Quantum Theory tells us is that there exists some sort of basic interconnecting principle that explains the behavior and complex relationship of these subatomic “particles” as we have come to understand them. While it would be convenient to categorize and define these strange properties and principles of the subatomic realm as the result of some type of “force”, i.e. a field of sorts that interacts between two separate and distinct “things” or “objects” and results in some sort of correlative measurement phenomena that can be described by some sort of mathematical equation relating the “objects” in question, any such description of the behavior of the subatomic world has unfortunately eluded some of the brightest minds in physics for some 70 years or so. This in fact was the driving force of much of Einstein’s work in the latter part of his career, and a problem which he was ultimately unable to solve. It is the intellectual driving force (no pun intended) that underpins the conclusions drawn in the famed EPR Paper, which criticized Quantum Theory as “incomplete” and posited the potential existence of so-called hidden variables, which would theoretically bridge the gap between Quantum Theory and Relativity – the existence of which has, mathematically speaking, been all but ruled out by Bell’s Theorem, which deals with the potential existence of hidden variables explicitly. The only exception perhaps is Bohmian Mechanics, aka de Broglie-Bohm theory or simply pilot-wave theory (more below), which is arguably the best, if not the only, coherent hidden variable theory that is also fully deterministic to have been put forward since Quantum Theory became widely accepted and empirically verified around the middle of the twentieth century, at the advent of the Quantum Era.
Leaving Bohmian Mechanics aside (a theory which has not been widely accepted by modern Physics for a variety of reasons, is very difficult for the layperson or non-Physicist to understand, arguably violates the principle of Ockham’s razor[1], and is certainly not taught in schools and academia for the most part), our notion and definition of reality must in fact adapt and evolve to support the developments of modern Science, i.e. Physics, which explains the behavior of macrocosmic phenomena but also subatomic phenomena – the latter of which of course quite paradoxically exhibit both wave-like and particle-like behavior and at the same time have been shown to exhibit strange properties such as entanglement. Following this rationale to its logical conclusion, if we as human beings (and all animals or physical objects for that matter, the entirety of the “animate” and “inanimate” world) both subsist and consist of these elementary particles which exhibit these “non-classical” properties, we must in fact expand our notion of “reality” itself to incorporate these characteristics which have proven to be “scientifically” true. The author rejects the “math is for measurement and predictability only” position as an intellectual cop-out of sorts for avoiding the albeit difficult problem of offering up a solution to the question of what it all means – a solution which must, by definition, delve into the world of metaphysics at some level or another. Hence, no doubt, the reluctance of Physicists to wade into these waters.
And therein lies one of the basic underlying problems this work is trying to address: the underlying rationale for the “it’s just math” position – that interpretation is a problem for philosophers of Science and not a problem for Physics as an academic discipline – needs to be revised.[2] Not only must we come up with a wholesale new definition of “reality”, but we need to reformulate our approach to, and definition of, knowledge itself, which must incorporate what we understand as the basic substratum of existence as characterized by the basic characteristics and properties of Quantum Theory as well as Relativity, incorporating and integrating the observer and observable phenomena into a more holistic model, or at least into the presentation of alternative models which satisfy this very basic requirement. Hence the essays and subject matter of the last part of this work, which deal with ontology. Once this is done – and again the author argues that it must in fact be done if we are to move knowledge forward and continue to evolve, intellectually speaking at least, as a species – we must ultimately confront these alternative models of reality, models which incorporate and synthesize the notions of the observer and observable phenomena but also the substratum of existence within which this act of perception is continuously taking place; we must then look at what conclusions, if any, can be drawn regarding the meaning of life, the meaning of existence, its ultimate purpose – what we refer to, following Aristotle, as teleology – and how we as individuals should incorporate said conclusions into our daily lives in the Quantum Era, an era which is dominated intellectually, in particular in the West, by objective realism, a somewhat unintended byproduct as it were of the Scientific Revolution which provided the intellectual platform for twentieth century Physics, i.e. Relativity Theory and Quantum Theory. Or alternatively, if we adopt a materialistic position and look upon the domain of Physics as we understand it today as simply providing mathematical tools to drive innovation and make life “easier” or more “efficient”, at least we will be “consciously” adopting such positions rather than having them beaten into us by teachers and educators for virtually our entire early life.
So this is the rationale for providing these alternative, more encompassing theories of reality, for delving back into first philosophy, i.e. metaphysics, and concluding – just as Aristotle did some 2500 years ago – that metaphysics must be understood and covered at length prior to studying physics, or what he and the intellectual and academic community termed natural philosophy up until fairly recently in fact. And this reversal, or really inversion, of domains of study that we are describing and providing the rationale for here has vast and wide-ranging implications not just for Physics and Philosophy, Philosophy in this sense being defined quite broadly, but for our view and definition of knowledge itself. For once we make this determination, once we come to this conclusion, the entire definition and discipline of what we call “scientific inquiry” must then be broadened to include metaphysics, and in turn – for better or worse – theology. This is precisely the conclusion that Aristotle came to when he attempted to define and describe knowledge, or that which can be said to be “known”, as reflected by what he called epistêmê, i.e. epistemology, which has been handed down to us through translation as Science.
In other words, the fact that Physicists for the most part refuse to offer up any answers for us as a society as a whole as to what the basic pillars of Physics as we understand them in the modern era mean, or how they should be interpreted with respect to our notion of reality – again what we refer to as teleology – does not make the problem, or any of the proposed solutions to said problem, “unscientific”. Herein lies the heart of one of the underlying theses of this work: not only should metaphysics be brought back to its place as first philosophy, i.e. should be studied “before” Physics (which is where the term metaphysics actually comes from, i.e. the reason why Aristotle’s treatise Metaphysics was given its title), but the academic community at large should be reformed and should teach metaphysics, i.e. first philosophy, before Physics, or even Biology or Chemistry for that matter, topics which were covered as part of his natural philosophy. The problem with this of course is that metaphysics and theology are so very closely linked that it’s very hard to distinguish between the two once you follow any proposed system of metaphysics to its logical conclusion. Any system of metaphysics, to be complete, must – again as put forth by Aristotle – address the underlying “causes” or “reasons” why some “thing” or some “principle” has been brought into existence. The “why” questions, our teleology again, underlie not just Physics, again natural philosophy, but also the individual beings which participate in and are fundamentally integrated with this physical world – ontology. These questions take us quite naturally into the domains of ethics, morality, theology and Sociology (political philosophy), all of which again must rest, from a rational and logical perspective, upon whatever system of metaphysics we adhere to or adopt.[3]
This approach of course has the benefit of bringing back, as it were, all of the branches of knowledge under a single, cohesive and integrated umbrella. This is one of the primary reasons why Aristotle’s philosophy was so influential for such a long period in the West, arguably representing the cornerstone and basic foundation of “education” in the West for some 2000 years. His conceptions and definitions of logic, reason and metaphysics, and even physics and ethics, underpinned almost all intellectual thinking more or less, including Religion as well, before the system was overhauled and effectively split in two as an unintended byproduct of the so-called Scientific Revolution, after which Religion and Science subsequently became completely incompatible. Incompatible to the point where the common and widely held conception of these two domains is that they rest on two entirely distinctive and almost diametrically opposed principles – one called Science, that is entirely objective and is bound by empirically valid and “proven” hypotheses and principles, i.e. laws, and another that is based upon “faith” or “belief”, is entirely subjective, fundamentally cannot be “proven” empirically or otherwise, and is therefore “unscientific”. Taken to the extreme, Science is looked upon as “rational” and Religion is looked upon as “irrational”. And this of course does not even broach the topic of the potential reality of the so-called “mystical” experience or the nature of consciousness itself, which is arguably outside of the domains of Science and Religion at this stage of the intellectual development of human history, despite the existence of mystical disciplines that have persisted and have been written about, and ultimately provide the basis for all Religions, throughout the entirety of human history.
So we must therefore, to advance intellectual development as a whole, and for the good of society and the environment within which we live in fact, look at and analyze various coherent and cohesive intellectual systems – systems of metaphysics really – which bring together and make sense of these seemingly incompatible basic principles that underlie our modern conceptions of physical reality, i.e. that there is some non-local underlying attribute of the substratum of existence that manifests itself in the fundamental correlative measurement properties of subatomic particles that are separated by distances that cannot be traversed within the boundaries of Classical Mechanical assumptions. This requires us of course to make sense of what Quantum Theory actually implies, or means – enter teleology again – and in turn what implications it has for any conception of reality, i.e. ontology, that we come up with to explain these basic and seemingly incompatible assumptions, and in turn an expansion of the definition of knowledge itself, epistemology, to take these factors into account. Although at first glance the exercise might seem to be a purely intellectual one (really a Philosophical one in terms of how this discipline is understood in the modern, Quantum Era), the exercise nonetheless has great merit, because at the very least it will help elucidate the limitations, and the subtle and far-reaching implications in fact, of the pure materialistic and objective view of reality that prevails in the West today – even if one rejects any of the systems of metaphysics that are put forth herein, as put forth in antiquity by Aristotle.
This leads us to questions and topics that fall under the heading of “Interpretations” of Quantum Theory, which arguably fall under the category of what is typically referred to as philosophy of science today but effectively, as keenly understood by Bohm for example, really are ontological questions – i.e. they fall directly under the modern Philosophical discipline of ontology, a discipline which studies the nature of reality, or being technically speaking, terminology that harkens back to the very origins of Hellenic philosophy.
There are many interpretations of Quantum Theory, i.e. ways to make sense of the model with respect to its implications regarding the nature of the physical universe, physical reality as it were, but there are three in particular that deserve attention due either to their prevalence or acceptance in the academic community, i.e. academia, and/or their impact on the scientific and/or philosophical community in particular, which in this domain really amounts to the Physics community more or less. The fundamental question underlying these varying interpretations of Quantum Theory, what distinguishes them from one another essentially, is philosophical in nature – again ontological primarily. In other words, the fundamental question along which the various interpretations of Quantum Theory align, or misalign as the case may be, is what does Quantum Theory, given its predictive power, imply about the true nature of physical reality? We have come to a place in Science where we know that the underlying substratum of existence is bound by such mathematically proven principles as uncertainty, complementarity and entanglement, and by the implicit connection between the observed and the act of observation – all of which fly in the face of our long-held beliefs with respect to our understanding of Classical Mechanics, i.e. how the world actually “is”, calling into question the nature of objective reality in and of itself.
On the one hand, we can say that it’s just a predictive model, with no need to come to any radical conclusions about what it implies about the nature of the world we live in, much less any metaphysical, ontological, ethical or moral considerations (Copenhagen Interpretation). On the other hand, we can look at Everett’s relative-state formulation and conclude that the underlying math tells us that we are all, mathematically speaking at least, part of a constantly unfolding universe where the distinction between the observed and the observer is not nearly as clearly defined as we have come to think. But are there any other alternatives that give us the opportunity, theoretically at least, to hold on to the notions of objective reality that we have come to adore and consider to be almost unassailable assumptions about the world we live in? David Bohm, the main architect of what has come to be known as Bohmian Mechanics, offers an alternative interpretation of Quantum Theory that falls squarely in this camp.
The first is the so-called “Standard” or “Orthodox” interpretation, the one most often compared to or cited in reference when differing interpretations are put forth and explained, and the one presented in the majority of textbooks on the subject. This is most commonly referred to as the Copenhagen Interpretation, and it basically restricts the theoretical boundaries of interpretation of Quantum Theory to the results of the experiment itself and no further. This point of view can be looked at as a pure mathematical and physical behavioral modelling view of Quantum Mechanics, and it fundamentally rejects any philosophical or ontological implications.
The second is definitely a little out there but nonetheless carries some weight within the academic community, the Physics and Mathematics community in particular, and is undoubtedly mathematically and theoretically sound, and intellectually interesting, even though its ontological implications are somewhat extreme – an abstract, theoretical, mathematical case. This interpretation has a few variants but is mostly referred to in the literature as the many-worlds (or many-minds) interpretation, and it expands the theoretical boundaries of Quantum Mechanics by explaining its stochastic nature through the proposed existence of multiple universes, or at least multiple possible universes.
The third interpretation is perhaps intellectually the most appealing, particularly given its implicit ontological and metaphysical underpinnings, and as such is sometimes called the Ontological Interpretation of Quantum Theory, or simply Bohmian Mechanics. It extends Quantum Mechanics to include a principle it refers to as quantum potential, and while it abandons the classical notion of locality, it still preserves the notions of objective realism and determinism upon which Classical Mechanics is predicated. [4]
Of these three, the most widely accepted and commonly taught interpretation, the one that is presented in textbooks on the subject and is most often used as the standard bearer for alternative interpretations, is the Copenhagen Interpretation. This interpretation is most often associated with Niels Bohr and Werner Heisenberg, stemming from their collaboration in Copenhagen in 1927, hence the name. The term was further crystallized in writings by Heisenberg in the 1950s when expressing his views on contradictory interpretations of Quantum Theory. The Copenhagen Interpretation holds that the Quantum Theory does not, and cannot, yield a description of any sort of objective reality, i.e. does not have any ontological implications, but deals only with sets of probabilistic outcomes of experimental values borne from experiments observing or measuring various aspects of energy quanta, entities that do not fit neatly into classical interpretations of mechanics. The underlying tenet here is that the act of measurement itself, the observer (or by extension the apparatus of observation) causes the set of probabilistic outcomes to converge on a single outcome, a feature of Quantum Mechanics commonly referred to as wavefunction collapse and that any additional interpretation of what might actually be going on, i.e. the underlying “reality”, defies explanation and therefore any interpretation of the model from an ontological or metaphysical perspective is in fact intellectually inconsistent with the fundamental mathematical tenets of the theory itself.
In this interpretation of Quantum Theory, reality – used here in the classical sense of the term as the existence of natural phenomena, i.e. “things”, that exist independent of any “act of observation” – is a function of the experiment, and is defined as a result of the act of observation; it has no ontological or metaphysical implications independent of the experiment itself, which simply yields some measurement value. In other words, reality in the quantum world from this point of view does not exist independent of observation. Or put somewhat differently, the manifestation of what we think of or define as “real” is intrinsically tied to and related to the act of observation of the system itself. Niels Bohr is historically considered to be one of the strongest proponents of this interpretation, an interpretation which refuses to associate any metaphysical implications with the underlying theoretical model. His position is that given this proven interdependence between that which is being observed and the act of observation itself, no metaphysical interpretation should, or in fact can, be extrapolated from the theory. Quantum Mechanics from this perspective is simply a tool to describe and measure states and particle/wave behavior in the subatomic realm, measurements that are made as a result of some well-defined experiment.
In other words, in Bohr’s view, attempting to make some determination as to what Quantum Theory actually implies about the nature of reality, beyond the results of a given experiment, violates the fundamental tenets of the theory itself. From Bohr’s perspective, the inability to draw conclusions beyond the results of the experiments which the mathematical models predict – the values or measurements yielded by the experiments, which run consistent with the stochastic mathematical models that underpin the theory – is in fact a necessary conclusion of the theory’s basic tenets, and therefore all that can be said about the theory itself, its ultimate interpretation, is defined wholly and completely by the mathematical model itself, and that is the end of the matter. This view can also be seen as the logical conclusion of the principle of complementarity, one of the fundamental and intrinsic features of Quantum Theory that makes it so mysterious and hard to understand in classical terms. Complementarity, which is closely tied to the Copenhagen Interpretation, expresses the notion that in the quantum domain the results of experiments, the values yielded (sometimes called observables), are fundamentally tied to the act of measurement itself. In this sense complementarity can be viewed as the twin of uncertainty, or its inverse postulate.
Furthermore, based upon the model and the principles of complementarity and uncertainty, which are both mathematically proven “attributes” of the underlying theory, in order to obtain a complete picture of the state of any given system one would need to run multiple experiments across that system. But any time an act of observation is made, the state of the system changes – hence the notion of uncertainty, a basic principle of any subatomic system that is subject to measurement or observation, which again is a function of the underlying complementarity of the associated and related particles or corpuscles being measured in said system, as fully described by the act of observation and mathematically described as wavefunction collapse.
In this view, the basic characteristics of the subatomic world described by Quantum Theory are complementarity and uncertainty, and these characteristics in and of themselves say something profound about the underlying uncertainty of the theory itself from a Classical Mechanics, objective realist perspective. To Bohr, complementarity is in fact the core underlying principle which underpins the uncertainty principle, and these two basic and fundamental characteristics of the model which describes the quantum world capture at some level its very essence. Furthermore, according to Bohr, and within the intellectual framework of the Copenhagen Interpretation generally speaking, these attributes, taken to their logical and theoretical limits, do not allow for or provide any metaphysical framework for interpretations of the model beyond the model itself, which is bound by a) the measurement values or results of a given experiment, b) the measurement instruments themselves that were part of a given experiment, and c) the act of measurement itself. All that can be said about the model is contained within the model.
Another common and more recently popularized interpretation of Quantum Theory is that perhaps all possible outcomes as described in the wavefunction do in fact “exist”, even if they could not be seen or perceived in our objective reality as defined by a given experiment of a given system. This interpretation, which has come to be known in the literature as the many-worlds interpretation of Quantum Theory, actually incorporates all of the stochastic outcomes described within the wavefunction into the definition of reality itself so to speak. So rather than the wavefunction being a mere mathematical tool as it were, in the many-worlds interpretation the wavefunction is reality. In other words, if the math itself is viewed as the description of the underlying “reality”, and reality must conform to the basic underlying assumptions of Classical Mechanics – causal determinism, local realism, etc. – then wavefunction collapse which is a hallmark of Quantum Mechanics simply represents “one” of the many possible outcomes, one of the many “realities” that are inherent in the underlying system. In this respect, the many-worlds interpretation can be seen as juxtaposed with the Copenhagen Interpretation which presupposes that the alternative outcomes implicit in the wavefunction which are not yielded upon the act of observation, i.e. again wavefunction collapse, do not have any real existence per se. Although on the surface it might appear to be an outlandish premise, this interpretation of Quantum Theory has gained some prominence in the last few decades, especially within the Computer Science and Computational Complexity fields which are driven by pure math more or less.
The original formulation of this theory was laid out by Hugh Everett in his 1957 PhD thesis, a paper entitled The Theory of the Universal Wave Function, wherein he referred to the interpretation not as “Many-Worlds” but – much more aptly and accurately, given his initial formulation of the theoretical extensions of Quantum Mechanics that he proposed – as the relative-state formulation of Quantum Mechanics. Almost completely ignored by the broader scientific community for several decades after he published his work, the theory was subsequently developed and expanded upon by several authors in the last decade or two and has come to be known, along with its variants that have cropped up, as the many-worlds interpretation. Everett was a graduate student at Princeton at the time that he authored The Theory of the Universal Wave Function, and his advisor was John Wheeler, one of the most respected theoretical physicists of the latter half of the twentieth century. In Everett’s original exposition of the theory, he begins by calling out some of the problems with the original, or classic, interpretation of Quantum Mechanics, specifically what he and other members of the physics community believed to be the artificial creation of the notion of wavefunction collapse to explain the transition from uncertain quantum behavior to deterministic behavior, as well as the difficulty that standard interpretations of the theory had in dealing with systems that consisted of more than one observer. These he considered to be the main drivers behind his search for an alternative view, interpretation, or even theoretical extension of Quantum Theory. He actually referred to his relative-state formulation of Quantum Theory as a metatheory, given that the standard interpretation could be derived from it.
After writing his thesis, Everett did not in fact continue a career in academia, and therefore subsequent interpretations and expansions of his theory were left to later authors and researchers, most notably Bryce DeWitt, who coined the term “many-worlds”, and David Deutsch, among others. DeWitt’s book on the topic, published in 1973 and entitled The Many-Worlds Interpretation of Quantum Mechanics, in many respects popularized this interpretation and brought it back into mainstream Physics, and it included a reprint of Everett’s thesis. Deutsch’s seminal work on the topic is a book entitled The Fabric of Reality, published in 1997, where he expands and extends the many-worlds interpretation to other academic disciplines outside of Physics such as Philosophy, specifically epistemology, Computer Science and Quantum Computing, and even Biology and theories of evolution. Although Bohr, and presumably Heisenberg and von Neumann as well, whose collective views on Quantum Theory’s philosophical implications make up the Copenhagen Interpretation, would no doubt explain away these strange and seemingly arbitrary assumptions as out of scope of the theory itself (i.e. Quantum Theory is intellectually and epistemologically bound by the experimental apparatus and the associated experimental results), Everett finds this view philosophically limiting, and considers it at the very least worth exploring tweaks and extensions to the theory to see if these shortcomings can be removed – and in turn what the implications are, theoretically speaking, when some of the more standard and orthodox assumptions of Quantum Mechanics are relaxed in some sense.
Everett’s original conception of what he called the relative-state formulation of Quantum Mechanics is conceived to augment the standard interpretation of Quantum Theory (read the Copenhagen Interpretation), which theoretically prevents us from any true explanation as to what the theory says about the nature of “reality” itself, or the real world as it were – a world which is presumed to be governed by the laws of Classical Physics, where “things” and “objects”, i.e. measurable phenomena, exist independent of observers; where “objects” or “particles”, depending upon the physical context, have real, well-defined, static, measurable and definable qualities that exist independently of the act of measurement or observation. This world of course is fundamentally incompatible with the underlying mathematical characteristics of Quantum Mechanics, a model which is stochastic, i.e. probabilistic, where the outcomes of experiments are effectively defined by their uncertainty and complementarity, which seemingly contradict the underlying assumptions of Classical Mechanics.
Given the implications of this interpretation, and again its more widespread adoption in recent years and in popular culture, it’s important that we understand its basic principles and tenets as Everett understood them. Everett starts by making the following two basic assumptions:
• he abstracts the notion of the observer as a machine-like entity with access to unlimited memory, which stores a history of previous states, or previous observations, and also has the ability to make simple deductions, or associations, regarding actions and behavior of system states solely based upon this memory and deductive reasoning.
His second assumption represents a marked distinction between his formulation and Quantum Theory proper: it incorporates observers and acts of observation (i.e. measurement) completely into one holistic theoretical model. Furthermore, Everett proposes – and this is the core part of his thesis – that if you yield to assumptions 1 and 2, you can come up with an extension to Quantum Mechanics that describes the entire state of the universe, including the observers and objects of observation, in a completely mathematically consistent, coherent and fully deterministic manner, without the need for the notion of wavefunction collapse or any additional assumptions regarding locality or causal determinism for that matter, and from which the standard interpretation of Quantum Theory, as it were, can be deduced.
Everett makes what he calls a simplifying assumption to Quantum Theory, i.e. removing the need for, or notion of, wavefunction collapse, and assumes the existence of a Universal Wave Function which accounts for and describes the behavior of all physical systems and their interactions in the universe, completely incorporating the observer and the act of observation into the model – observers being viewed as simply another form of quantum state that interacts with its environment. Once these assumptions are made, he can then abstract the notion of measurement, which is the source of much of the oddity and complexity surrounding Quantum Theory, as simply an interaction between quantum systems that are all governed by this same Universal Wave Function. In Everett's self-proclaimed metatheory, the notion of what an observer is and how observers fit into the overall model is fully defined, and what he views as the seemingly arbitrary notion of wavefunction collapse is circumvented. His metatheory is defined by the assumption of the existence of a Universal Wave Function, which corresponds to the existence of a fully deterministic, multi-verse based reality in which wavefunction collapse is understood as a specific manifestation of the realization of one possible outcome of measurement – the one that exists in our "reality", or our specific universe, i.e. the one which we observe during our act of measurement.
But in Everett's theoretical description of the universe, if you take what can be described as a literal interpretation of this Universal Wave Function as the overarching description of reality, the other, unobserved, possible states reflected in the wavefunction of any system in question do not cease to exist with the act of observation. In Everett's original conception of Quantum Theory, his so-called relative-state formulation, the act of observation of a given system does not represent a "collapse" of the quantum mechanical wave that describes that system's state; rather, the other states inherent in the wavefunction itself, while they do not manifest in our act of observation of said system, do nonetheless have some existence per se. To what degree and level of reality these "states" exist is a somewhat open-ended question in this model, and is the subject of much debate in subsequent interpretations of Everett's metatheory, i.e. the relative-state formulation. Regardless, according to Everett's original conception of the relative-state formulation, observers and observed phenomena are abstracted into a single mathematical construct which is derived from the wavefunction itself, i.e. the Universal Wave Function, and collectively are entirely descriptive of not just a given state of a given system, but also in turn the entire physical universe, most of which is simply not perceived by us as we "observe" it.
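To make the idea concrete, here is a minimal sketch in standard Dirac notation (a textbook-style schematic, not Everett's own notation) of a measurement treated as nothing but unitary interaction between a system S and an observer/apparatus A:

\[
\Big(\sum_i c_i \,|s_i\rangle\Big)\otimes|A_{\mathrm{ready}}\rangle
\;\longrightarrow\;
\sum_i c_i\,|s_i\rangle\otimes|A_i\rangle
\]

The evolution is ordinary Schrödinger dynamics throughout, and no term on the right-hand side is ever discarded. Each component |s_i⟩⊗|A_i⟩ is a "relative state": the observer state A_i records outcome i relative to the system state s_i, while the other branches remain present but uncorrelated with that record.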
What Everett has really put forward with his notion of the Universal Wave Function, and with the so-called relative-state formulation of Quantum Mechanics more broadly, is a full ontological description of reality that is implied in the underlying mathematics of Quantum Theory – a complete metaphysics as it were, an interpretation that certainly goes well beyond the standard Copenhagen Interpretation with respect to ontology. In his own words, and this is a subtle yet important distinction between Everett's view and the view of subsequent proponents of the many-worlds interpretation, these so-called "unobserved" states exist but remain uncorrelated with the observer in question – an observer that is incorporated and abstracted into his notion of a Universal Wave Function which models all of "reality", again both observed phenomena and observers themselves.
This is his great intellectual leap: that measurement systems and observers are, from a mathematical and metaphysical perspective, basically the same thing. The implication of this somewhat simple and elegant additional layer of abstraction upon the underlying math of Quantum Mechanics is that these so-called "unobserved" or "unperceived" states do have some semblance of reality – that they do in fact exist as possible realities, realities that are thought to have varying levels of "existence" depending upon which version of the many-worlds interpretation you adhere to. With DeWitt and Deutsch, for example, a more literal, or "actual" you might say, interpretation of Everett's original theory is taken, where these other states, these other realities or universes, do in fact physically exist even though they cannot be perceived or validated by experiment.[8] This is a more literal interpretation of Everett's thesis however, and certainly nowhere does Everett explicitly state that these other potential uncorrelated states, as he calls them, actually physically exist. What he does say on the matter, presumably in response to some critics of his metatheory, seems to imply some form of existence of these "possible" or potential universes that reflect non-measured or non-actualized states of physical systems, but not necessarily that these unrealized outcomes actually exist in some alternative physical universe, which is how the many-worlds interpretation of Quantum Theory is commonly understood today (hence the name) – again, a significant deviation from Everett's original conception.
According to Everett's view, then, the act of measurement of a quantum system, and its associated principles of uncertainty and entanglement, is simply the reflection of the splitting off of the observable universe from a higher-order multi-verse in which all possible outcomes and alternate histories have the potential to exist. The radical form of the many-worlds interpretation holds that these potential, unmanifested realities do in fact exist, whereas Everett seems only to go so far as to imply that they "could" exist, and that while conceptually their existence should not be ignored, it need not have any bearing on our conception or notion of "reality".
As hard as this many-worlds interpretation (sometimes referred to as the many-minds interpretation) of Quantum Theory might be to wrap your head around, it does represent a somewhat elegant, theoretically and mathematically sound solution to some of the criticisms and challenges raised by the broader Physics community against Quantum Theory, namely the EPR Paradox and the Schrödinger's cat problem. It does, however, also raise some significant questions as to the validity of the underlying theory of mind and subjective experience in general, notions which Everett somewhat glosses over (albeit intentionally; he is not constructing a theory of mind, nor does he ever state that he intends to in any way) by making the simple assumption that observers can be incorporated into his Universal Wave Function view of reality by abstracting them into simple deductive-reasoning and memory based machines. Nonetheless this aspect of Everett's interpretation of Quantum Theory, his implicit and simplified theory of observation and the role of mind, remains one of the most hotly debated and widely criticized aspects of his metatheory, and one upon which arguably his entire theoretical model rests.[10]
The last of the so-called interpretations of Quantum Theory that is relevant to this study is what we refer to throughout as Bohmian Mechanics, a fully deterministic model of Quantum Theory pioneered by David Bohm, an American-born British physicist and one of the most prolific physicists of the twentieth century. Bohm made a variety of contributions to Physics, but he also invested much time and thought into the metaphysical, really ontological, implications of Quantum Theory, and into Philosophy in general – topics that most physicists have in fact steered away from. In this respect Bohm was a bit of a rebel relative to his peers in the academic community, because he extended the hard science of Physics into the more abstract realm of descriptions of reality as a whole, incorporating first philosophy back into the discussion in many respects, but doing so with the tool of hard mathematics, making his theories very hard, if not impossible, for the Physics community at large to ignore, and establishing a scientific – really mathematical – foothold for some very Eastern philosophical metaphysical assumptions, all bundled together under a notion that Bohm referred to as undivided wholeness.
Bohm was, like Everett and many others in the Physics community (Einstein of course being the most well-known), dissatisfied with mainstream interpretations of Quantum Mechanics, in particular the so-called Copenhagen Interpretation, which basically held that Quantum Theory was just a predictive modeling tool and could not be used as the basis for any sort of metaphysical or ontological interpretation regarding the true nature of reality whatsoever. This led him, apparently with some prodding by Einstein, with whom he had an ongoing dialogue toward the end of Einstein's life, to look for possible hidden variable theories which could take the probability and uncertainty out of Quantum Theory and provide – at least from an ontological and metaphysical perspective – a common set of assumptions across all of Physics. Bohmian Mechanics is the result of this work, and although generally speaking it has not gained much traction in the scientific and academic community, the model does a) prove that hidden variable theories are actually possible (something that still remained in doubt well into the 70s and 80s, decades after Bohm first published his adaptation of de Broglie's pilot-wave theory supporting multi-body systems in the 1950s) and b) actually provide a somewhat rational (at least from a Classical Mechanics point of view) explanation of what might actually be going on in this subatomic world where waves and particles seem to blend into a non-classical, indeterministic reality – albeit requiring the relaxation of at least one of the prominent assumptions underlying Classical Mechanics, i.e. locality.
The foundations for Bohmian Mechanics were laid by Louis de Broglie in 1927, when he originally proposed that Schrödinger's wavefunction could be interpreted as describing the existence of a central physical particle accompanied by a so-called "pilot wave" that governed its behavior, thereby physically explaining why these subatomic "particles" behaved like waves or particles depending upon the experiment. De Broglie's pilot-wave theory in its original form affirms the existence of subatomic particles, or corpuscles as they were called at the time, but viewed these particles not as independently existing entities but as integrated into an undercurrent, or wave, which was fully described by Schrödinger's wavefunction and which gave these subatomic particles their wave-like characteristics of diffraction and interference while at the same time explaining their particle-like behavior as illustrated in certain experiments. This represented a significant divergence from standard interpretations of Quantum Theory at the time. In his original 1927 paper on the topic, de Broglie describes pilot-wave theory as follows:
One will assume the existence, as distinct realities, of the material point and of the continuous wave represented by the [wavefunction], and one will take it as a postulate that the motion of the point is determined as a function of the phase of the wave by the equation. One then conceives the continuous wave as guiding the motion of the particle. It is a “pilot wave”.[11]
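In modern notation, the postulate de Broglie is describing is usually written as a guidance equation; this is the standard textbook form rather than a quotation from the 1927 paper. Writing the wavefunction in polar form ψ = R e^{iS/ℏ}, the particle position Q moves according to

\[
\frac{d\mathbf{Q}}{dt} \;=\; \frac{\nabla S(\mathbf{Q},t)}{m}
\;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla \psi}{\psi}\right)\bigg|_{\mathbf{x}=\mathbf{Q}},
\]

so the velocity of the particle is determined by the phase S of the wave at the particle's location, with ψ itself evolving by the ordinary Schrödinger equation.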
De Broglie's pilot-wave theory was dismissed by the broader academic community when it was presented, however, due to the fact that the model, as presented by de Broglie, could only be used to describe single-body systems. This fact, along with the then very strong belief that any variant of hidden variable theory was theoretically impossible, as put forth by von Neumann in a paper he published in 1932, led to the abandonment of pilot-wave theory by the Physics community as a possible alternative explanation of Quantum Mechanics for some two decades, until it was picked back up by Bohm after von Neumann's thesis that no hidden variable theories were possible was shown to be flawed, or at least not nearly as restrictive as originally presumed.[12]
So in the early 1950s Bohm, driven primarily by the desire to illustrate that hidden variable theories were in fact possible, picked up where de Broglie left off and extended pilot-wave theory to support multi-body physical systems, giving the theory a more solid scientific and mathematical grounding and providing a fully developed alternative theoretical and mathematical description of Quantum Mechanics for consideration by the broader Physics community. In the new framework, what Bohm and Hiley refer to as the Ontological Interpretation of Quantum Theory, they extend the underlying mathematics of Quantum Mechanics to include a fundamentally non-local force called quantum potential, a force which provides the rational and mathematical foundation for the explanation of non-local correlations between subatomic particles and their associated measurements. In this Ontological Interpretation, Bohm and Hiley suggest that it is in fact the actual positions and momenta of the underlying particle(s) in question that are the so-called hidden variables – values which, along with the quantum potential, govern how a quantum wave-particle behaves – effectively sidestepping the so-called measurement problem, i.e. the need for wavefunction collapse.
The quantum potential, as Bohm and Hiley describe it, is not the same type of force that underlies most of Classical Mechanics, where a force's effect is a function of its intensity or magnitude. It is this extra variable, one which is inherently non-local in the Classical Mechanics sense, along with the Schrödinger equation, i.e. the wavefunction, which in toto govern and fully determine the behavior of a quantum system, and which have the potential (no pun intended) to fully describe all of its future and past states, irrespective of whether or not the quantum system is observed or measured. This is how Bohmian Mechanics can be said to be fully causally deterministic – hence the Causal Interpretation name given to the model in some circles. It is the notion of quantum potential that is the theoretical glue, so to speak, that holds Bohmian Mechanics together and, along with the establishment of the actual position and momentum of a given particle (or set of particles) as being fundamentally real, is the mathematical (and metaphysical) tool that is used to explain what's actually going on in the quantum realm. In other words – and this implication and assumption which underlies Bohmian Mechanics cannot be overstated – the quantum system not only has some definitive initial state, but it also "knows" about its environment to a certain extent, information that is embedded in the underlying quantum potential of a given system, a variable which can be added to the more standard mathematical models of Quantum Mechanics without changing any of the predictive results or fundamental attributes or properties of the underlying equations.
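Concretely, the quantum potential falls out of the Schrödinger equation when the wavefunction is written in the polar form ψ = R e^{iS/ℏ} used above: separating real and imaginary parts yields a classical-looking Hamilton–Jacobi equation with one additional term,

\[
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0,
\qquad
Q \;=\; -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}.
\]

Note that Q depends only on the form of the amplitude R and not on its overall magnitude (rescaling R by a constant leaves Q unchanged), which is the precise sense in which its effect, unlike that of a classical force, is not a function of intensity.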
Quantum potential in Bohm's view is a force that is universally present, not only in the quantum realm but underlying all of Physics – a force that effectively becomes negligible as the quantum system becomes sufficiently large and complex and is transformed from a system that exhibits both wave and particle like behavior to a system governed by Classical Mechanics as described by Newton. It provides us with an explanation for wavefunction collapse and quantum measurement uncertainty as put forth by Heisenberg, von Neumann and others by positing that Schrödinger's wavefunction does in fact fully describe quantum system behavior, that the actual position and momentum of a given quantum state do in fact exist even if they are not measured or observed, and that there exists some element of non-local active information within the environment which explains the observable and experimentally verifiable correlations between physically separated quantum entities, i.e. correlated observables. As John Stewart Bell, a proponent in the latter part of his career of Bohmian Mechanics (what he refers to as de Broglie-Bohm theory), puts it:
Bohmian Mechanics, as Bohm's exposition of de Broglie's pilot-wave theory later evolved into its more mature form, provides a mathematical framework within which subatomic reality can indeed be thought of as actually existing independent of an observer or an act of measurement, a significant departure from the standard interpretations of the theory that were prevalent for most of the twentieth century, i.e. chiefly the Copenhagen Interpretation. In modern philosophical terms, it is a fully realist interpretation of Quantum Theory, providing a full ontological description as it were – one that is also fully deterministic, albeit non-local – of the reality that underpins Quantum Theory and is implicit in the wavefunction; hence the name Bohm gives his so-called interpretation of Quantum Theory, i.e. the Ontological Interpretation. Bohmian Mechanics is furthermore consistent with Bell's Theorem, which again states that no "local" hidden variable theory can reproduce all the predictions of Quantum Mechanics, while at the same time directly addressing the concerns regarding the completeness of Schrödinger's wavefunction as a description of the subatomic world that were raised by the famed EPR Paper.[15]
Furthermore, Bohmian Mechanics is fully deterministic, showing that once the values of the hidden variables – the positions and momenta of the underlying particles within the system – are known, and once an additional non-local attribute (i.e. quantum potential) is added to the system state, all future states (and even past states) can be calculated and known as well. This effectively resolves many of the problems and paradoxes that are inherent in standard interpretations of Quantum Theory, such as uncertainty and complementarity (i.e. entanglement), as well as removing the need for wavefunction collapse. It furthermore provides us with a mathematically sound description of Quantum Mechanics which rests on almost all of the same basic underlying assumptions as Classical Mechanics – everything except the notion of locality. Bohmian Mechanics falls into the category of hidden variable theories: it lays out a description of quantum reality where the wavefunction, along with the notion of quantum potential, together represent a fully deterministic, albeit again non-local, description of the subatomic world – mathematically speaking. With respect to the importance of Bohm's work in Quantum Mechanics, Bell himself, albeit some 30 years after Bohm originally published his extension of de Broglie's pilot-wave theory, had this to say:
Again, in this model it is the "actual" position and momentum of a given particle which are the so-called hidden variables, and which in turn determine the result of a given experiment or observable result. Bohmian Mechanics agrees with all of the mathematical predictions of standard interpretations of Quantum Theory, i.e. it is mathematically equivalent, but it extends the theoretical model to try to explain what is actually going on – what drives the non-local behavior of these subatomic "things", and what in fact can be said to be known about the state of quantum systems independent of the act of measurement or observation. With this notion of quantum potential, Bohm provides a mathematical as well as metaphysical principle which "guides" subatomic particle(s) and gives them some sense of environmental awareness, even if the reality he describes, again the so-called Ontological Interpretation of Quantum Theory, does not abide by the locality assumptions of Classical Mechanics – i.e. that all objects or things are governed by and behave according to principles bound by the constraints of Relativity and the fixed speed of light, principles which have been demonstrated to be inconsistent with Quantum Mechanics, causing of course much consternation in the Physics community and calling into question local realism in general.
Bohmian Mechanics' contribution to Quantum Mechanics, and to Physics as a whole in fact, is not only that it calls into question the presumption of local realism specifically – what Einstein referred to as "spooky action at a distance" – but also that it proved unequivocally that hidden variable theories are in fact theoretically and mathematically possible and still consistent with the basic tenets of Quantum Mechanics. Bohm in fact "completes" Quantum Mechanics in the very sense that the authors of the EPR Paper intended when it was published in 1935, as illustrated in their famed EPR Paradox. Bohmian Mechanics – whether or not you believe its underlying metaphysical assumptions about what is really going on in the subatomic realm – is constructed as a very sound mathematical and theoretical model that is entirely consistent with Quantum Mechanics, a grounding of physical reality and existence itself as it were, and it brought very clear attention to the fact that our notions of time and space, and the perception of reality itself, are in need of a wholesale revision in terms of basic assumptions.
What Bohmian Mechanics calls our attention to quite directly, and in a very uncomfortable way from a Classical Mechanics perspective, is that there are metaphysical assumptions about reality in general that are fully baked into Classical Mechanics and that must be relaxed in order to understand, and in fact explain, Quantum Mechanics. Furthermore, it is these same subatomic particles (and/or waves), whose behavior is modeled so successfully by Quantum Mechanics, that in some shape or form constitute the basic building blocks of the entire "classically" physical world – this fact cannot be denied – and yet the laws and theorems that were developed to describe that world, i.e. Classical Mechanics, were and still are fundamentally incompatible with the laws that govern the subatomic realm, specifically with respect to the underlying assumptions about what is "real" and about how the objects of reality behave and relate to each other.[17]
While the Copenhagen Interpretation of Quantum Theory holds that the model is simply a calculation tool, bound by certain metaphysical constraints inherent to the theoretical model itself, both Bohmian Mechanics and Everett's relative-state formulation provide explanations of what Quantum Theory's underlying mathematics tells us about the nature of the universe we live in, about reality itself – or again, in philosophical terms, about ontology (albeit drawing very different conclusions about the nature of the reality being described) – arguably requiring us to reconsider the underlying assumptions that sit at the very foundation of Classical Mechanics. In Bohm's own words:
And Bohm didn't stop with his Ontological Interpretation of Quantum Theory; he expanded its theoretical foundations to establish the grounding of a new order, an order which could encompass not only Classical Mechanics and Quantum Mechanics, but the role of the observer, consciousness itself, as well. This is his notion of the implicate order and the holomovement, principles upon which a sound logical, rational and holistic metaphysical framework could be constructed which encompassed all of existence – physical, mental and psychological – in many respects covering all of the theological and philosophical ground that rested at the core of Descartes's notions of res cogitans, res extensa and God, but encompassing Physics as well. To Bohm, Classical Mechanics and Quantum Mechanics could be looked at not as inconsistent with each other, but as different manifestations of what he referred to as the implicate order, an underlying order reflecting pre-spatial phenomena which manifest themselves, at various scales, in the various physical planes of existence – in what he termed explicate orders.
Bohm, and Basil Hiley, who contributed to and co-authored the text that described their Ontological Interpretation of Quantum Theory in detail, not only proved that non-local hidden variable theories of Quantum Mechanics were possible, but also argued that in order to truly understand what is happening at this underlying substratum of existence, the notion of intellect, or at some level what could be construed as consciousness, has to be considered an active participant in the model. This again is what sits behind their notion of quantum potential, the means by which a quantum system is "informed" of its environment as it were, underpinning the notion of active information that complements and augments the wavefunction in governing elementary behavior – behavior that Bohm and Hiley at least considered to be "intelligent" in a way, or at the very least aware of the various elements of the environment beyond any Classical Mechanical boundaries. Their idea of active information, which is a, if not the, revolutionary idea that they propose to explain the subtleties and mysteries of subatomic behavior, implies that there is some sort of awareness of the overall interconnected quantum environment which must be considered in order to fully explain quantum system behavior – an aspect which by its very nature violates some of the core assumptions of Classical Mechanics, namely local realism, i.e. that the behavior of any given "object" or system of objects is independently real, exists independent of the act of measurement or observation, and is governed entirely by the properties or qualities of said object or system and by any forces which act on said system.
In Bohm's philosophy, his metaphysics (and we're no longer in Physics proper, just to be clear), he believed that quantum reality, the explicate order that we perceive and can measure and interact with by means of various experiments, is further governed by a higher implicate order that stems from some cognitive aspect of consciousness – i.e. the human mind or some aspect of cosmic mind, even if he isn't explicit in using this terminology. In fact, we cannot get away from considering the role of mind, the role of the perceiver, in completely understanding quantum behavior or Quantum Theory in general. He perhaps best describes his notion of the implicate order, its relationship to the various explicate orders, what he means by the holomovement, and how these metaphysical constructs can from his perspective be used to understand the seemingly non-local forces and interactions that appear to be at work in Quantum Mechanics, with an analogy of a fish swimming in an aquarium, looked at and perceived through different camera lenses, each yielding a different perspective on what the fish looks like while at the same time describing the same fish:
All things found in the unfolded, explicate order emerge from the holomovement in which they are enfolded as potentialities, and ultimately they fall back to it. They endure only for some time, and while they last, their existence is sustained in a constant process of unfoldment and re-enfoldment, which gives rise to their relatively stable and independent forms in the explicate order.[20]
From a conceptual perspective, one can think of Bohm's idea of implicate and explicate order using the analogy of a game of chess. In chess, the game itself is governed by an explicate order, where the boundaries of the board and the rules of the overall game are established – who is white, who is black, how individual pieces are captured, the goal of capturing the king to win the game, and so on. Furthermore, each piece in the game is governed by its own set of rules that determine how it can move across the board – another explicate order as it were, subservient to the master explicate order of the game itself, but an explicate order nonetheless. And yet implicit to the game are the minds and objectives of the two players themselves, who must operate according to these explicate orders – the laws and rules of the game itself as well as those governing the movements of the individual pieces on the board – while all the while being governed by another, higher order: the objective of trying to "win the game" by capturing the opponent's king, i.e. the implicate order as it were. Each of the players (presumably, if they are any good at chess) has the vision and intellect, the intelligence as it were, to leverage all of these different yet interrelated explicate orders – the explicate order of the game and the explicate orders which govern the behavior of the individual pieces – in an attempt to achieve the desired outcome, i.e. to capture the opponent's king, which in this analogy represents the underlying implicate order of the game.
The implicate order in this case is the mind of the player, from which each of the explicate orders unfolds as he (or she) moves each individual piece. It is within this higher order that each of the players forms their own strategy and framework, processing and reacting to information about the game itself as each move is made. Each player understands how the game is to be played and what moves can be made as the game evolves and pieces come off the board – i.e. the underlying and always applicable explicate orders which govern the rules of the game – while at the same time the game is governed by a higher-level order which describes the underlying behavior, the underlying "reality" as it were, of what is truly going on at a higher level of abstraction. This is the implicate order underlying the game, i.e. that each player is trying to "win". [Interestingly enough, in this example there are really two different implicate orders at play which influence the outcome of the game, both of which obey the same set of rules but the interplay of which governs the overall behavior, the outcome, of not only the individual moves as they are made but of the game itself.]
In many respects, this notion of implicate order is echoed in Everett's relative-state formulation of Quantum Theory: the underlying correlation of an observed state of a given system reflects our observation – the relative state, as it were, of a given quantum state – and not that the other, uncorrelated states that we do not perceive do not exist. Ironically enough, Everett's relative-state formulation of Quantum Mechanics, and this is one of its biggest criticisms in fact, is fully coherent only because it incorporates a theory of mind directly into the model – a metaphysical construct in which the observer is abstracted into a quasi-mechanical reasoning machine (albeit greatly simplified relative to a functioning human mind) with access to unlimited memory, capable of "remembering" prior states of existence or prior observation states. This in turn provides the rational explanation of the collapse of the wavefunction as a misunderstanding of what is actually going on – namely the observance of one manifest, correlated state, not the lack of existence of all of the uncorrelated states – leading of course to the seemingly perplexing and somewhat confounding many-worlds interpretation. Bohm's metaphysics makes essentially the same philosophical leap, namely that it is the existence of an underlying implicate order which contains within it the various explicate orders which may or may not be manifest depending on which observational state, or perspective, we choose.
To Bohm and Hiley, this implicate order construct can also be used to incorporate a theory of mind (back) into Physics, reverting to first philosophy as it were – or again, in more modern philosophical parlance, ontology. To Bohm, it is quantum potential, or active information, which points to the existence of a basic underlying consciousness or awareness that underpins physical reality – implying that the universe itself, when looked at from this grand perspective, one that includes the act of perception along with that which is perceived (which arguably is a necessary conclusion of Quantum Theory), points to what he calls undivided wholeness.
It is now quite clear that if gravity is to be quantised successfully, a radical change in our understanding of spacetime will be needed. We begin from a more fundamental level by taking the notion of process as our starting point. Rather than beginning with a spacetime continuum, we introduce a structure process which, in some suitable limit, approximates to the continuum. We are exploring the possibility of describing this process by some form of non-commutative algebra, an idea that fits into the general ideas of the implicate order. In such a structure, the locality of quantum theory can be understood as a specific feature of this more general a-local background and that locality, and indeed time, will emerge as a special feature of this deeper a-local structure.[21]
What is arguably the logical conclusion of any reasonable interpretation of Quantum Theory – granting that at least some form of metaphysical/philosophical interpretation is possible (which seems rational) – is that our notion of "order", and our notions and assumptions regarding the basic nature of reality (what falls under the discipline of ontology, a major theme of this work), need to be radically changed in order to account for all of the strange phenomena, features and characteristics that come along with the tremendous predictive power of the underlying mathematics. Some elemental and basic non-local principle must be incorporated into our ontology in order to accommodate the truth and empirical validity of Quantum Theory – that is to say, no matter which interpretation of Quantum Theory you find most attractive, at the very least the notion of local realism which underpins all of Classical Mechanics, and all of Western philosophy really, must be abandoned in order to make sense of what is going on. One would be hard pressed to find someone with a good understanding of Quantum Theory who would dispute this.
Max Planck, one of the greatest physicists of the 20th century by any measure, sums up the state of affairs as follows – in words which you won't find in any Physics textbook, mind you:
[1] Ockham's razor, or lex parsimoniae in Latin, meaning "law of parsimony", is a principle initially put forth by the 14th century theologian, philosopher and logician William of Ockham (c. 1287–1347), which states that among competing hypotheses, the one with the fewest assumptions should be selected, and in most if not all cases represents the "best", or "optimal", solution. Ockham's razor was a guiding force for scientific theoretical advancement throughout much of the Enlightenment Era and remains a persistent and guiding principle of scientific theoretical analysis, and of philosophical and metaphysical inquiry as well, to this day. See Wikipedia contributors, 'Ockham's razor', Wikipedia, The Free Encyclopedia, 8 December 2016, 02:13 UTC [accessed 8 December 2016].
[2] Not all Physicists fall into this category of course, and some have offered various metaphysical insights over the years – Bohm, and even to a certain extent Einstein and Bohr, representing some of the more prominent examples – but the general, albeit prejudicial, view still for the most part holds true and is reflected in the discipline of Physics as it is taught in the West, which represents "intellectual orthodoxy" if we may use that term in this context.
[3] This is arguably one of the reasons that metaphysics, and its companion subject theology, are not taught in the West outside of advanced classes in private high schools or universities, i.e. institutions that are not publicly funded, given the predilection, for sound historical reasons undoubtedly, for refusing to mix not just Religion and Science but religion and "education" as a whole – part of the byproduct of the separation of "church and state" as it were.
[4] In the Physics community, and with respect to Quantum Theory in particular, Bohmian Mechanics is viewed as a hidden variable theory within the context of the standard literature and findings on the theoretical implications of the EPR Paradox and Bell's Theorem. Depending upon context, the same theoretical framework, which was developed primarily by Bohm but rests on work done by de Broglie, is referred to as the Causal Interpretation of Quantum Theory (given its fully deterministic model), or as de Broglie-Bohm theory. We shall use Bohmian Mechanics throughout as much as possible. The most detailed description of Bohmian Mechanics can be found in Bohm and Basil Hiley's book The Undivided Universe, first published in 1993, although much of its contents and the underlying theory had been thought out and published in previous papers on the topic since the 1950s. In this work they refer to their interpretation not as the Causal Interpretation, or even as de Broglie-Bohm theory, but as the Ontological Interpretation of Quantum Theory, given that from their perspective it gives the only complete causal and deterministic theoretical model of Quantum Theory – one where it is the actual position and momentum of the particle within the "pilot wave" that determines the statistical outcome of the experiment that is governed by the wavefunction.
[6] From the Introduction of Everett's 1957 thesis, "Relative State" Formulation of Quantum Mechanics.
[7] Hugh Everett, III, Theory of the Universal Wave Function, 1957, pg. 53.
[10] See the chapter on many-worlds in Bohm and Hiley's 1993 book The Undivided Universe: An Ontological Interpretation of Quantum Theory for a good overview of the strengths and weaknesses, mathematical and otherwise, of Everett's and DeWitt's different perspectives on the many-worlds interpretation of Quantum Theory.
[13] David Bohm, Wholeness and the Implicate Order, London: Routledge, 1980, pg. 81.
[15] In fact, Bohm's pilot-wave theory to a large degree inspired Bell's Theorem. See Bell's 1964 paper On the Einstein Podolsky Rosen Paradox, published some 12 years after Bohm published his adaptation of de Broglie's pilot-wave theory.
[17] There has been significant progress in the last decade or two in reconciling Quantum Theory and Classical Mechanics, most notably with respect to Newtonian trajectory behavior – what is described in the literature as accounting for the classical limit. For a good review of the topic see the article The Emergence of Classical Dynamics in a Quantum World by Tanmoy Bhattacharya, Salman Habib, and Kurt Jacobs, published in Los Alamos Science in 2002.
[18] David Bohm, Wholeness and the Implicate Order, London: Routledge, 1980, pg. xv.
[20] Bohm, David, 1990.
[21] Relativity, Quantum Gravity and Space-time Structures, Birkbeck, University of London (12 June 2013).
[22] Max Planck, Scientific Autobiography and Other Papers.
Randell Mills GUT - Who can do the calculations?
• I'll let you know what Dr. Mills says. Or you can just join us at The Society For Classical Physics. Sorry if I came off as a jerk, you seem to be obviously willing to take an honest look at the theory.
Furthermore, not all parts of the theory are fully fleshed out as you can see, but what it does predict it does so with extreme accuracy and within the confines of classical physics and fundamental constants. There is room to make original contributions to the theory.
I suggested the design of using liquid electrodes last year on the forum, to isolate the energetic transition reactions from the solid parts of the reactor and prevent them from melting or vaporizing. This was prior to any mention of liquid fuel injection or liquid electrodes being used in the design revealed last week.
• The vaporized silver provides the conductive matrix; the heat provides kinetic energy to the reactants, which are the catalyst and atomic hydrogen.
The kinetic energy is what is responsible for initiating the transition reactions (specifically dipole/multipole resonant collisions destabilizing the orbitsphere causing radial acceleration and release of electric potential between electron and proton). If the plasma wasn't contained within the pressure vessel the conditions conducive to the reactions would not persist. The current is mainly to alleviate charge buildup and to provide the initial kinetic energy to the reactants.
The energy obviously comes from the transition reactions, which release ~100 eV or more per event depending on which fractional state is being catalyzed. There could likely even be disproportionation occurring, which is when hydrinos collide and drop to even lower energy levels. This also likely occurs within the corona of our star.
If the plasma wasn't confined somehow, it would simply dissipate.
Even in single-shot open-air tests three years ago, the plasma persisted long after current ceased to flow, which current theory cannot explain. In all cases there is no high field, only a maximum of 5 volts.
Why not just go on the forum and ask Dr. Mills directly?
• If the magic is in the kinetic energy and not in the driving electric current, then the hydrino reaction can spread over N numbers of silver fountains...say 100 electrode sets...an electrode array where the reaction in one electrode set can activate the reaction in many other electrode sets that are nearby the prime driver set.
• I meant that it cannot be explained according to the current mainstream physics paradigm; it is explained using classical physics (GUTCP) as I have described above.
I don't claim to be the world's authority on GUTCP but I think I got the major points mostly right. Again, Dr. Mills doesn't mind answering questions on his Society for Classical Physics forum. We interact with him on a daily basis pretty much.
Also I wasn't sure if you were being sarcastic with your prior posts, but look at what happens in the corona of the Sun. If GUTCP is right, disproportionation hydrino reactions occur on a massive scale, providing the high-energy photons that produce the ionized species of elements observed in the spectrum, rather than the millions of degrees of temperature currently assumed.
• @stefan
You asked about the relationship between GUTCP and QM. I think Mills had the same question; he has a first answer and gives its derivation from p. 11 ff. His conclusion:
“Thus the mathematical relationship of GUTCP and QM is based on the Fourier transform of the radial function. GUTCP requires that the electron is real and physically confined to a two dimensional surface comprising source currents that match the wave equation solutions for spherical waves in two dimensions (angular) and time. The corresponding Fourier transform is a wave over all space that is a solution of the three dimensional wave equation (e.g. the Schrödinger equation). In essence, QM may be considered as a theory dealing with the Fourier transform of an electron, rather than the physical electron. By Parseval's theorem, the energies may be equivalent, but the quantum mechanical case is nonphysical – only mathematical. It may mathematically produce numbers that agree with experimental energies as eigenvalues, but the mechanisms lack internal consistency and conformity with physical laws.”
This is quite a remarkable result.
@ Eric
Sorry for my strong wording. I think I adopted the verbally strong position of some of the people posting in this thread :-) .
To your question:
I am no expert, so I am talking about my current understanding of the process of pair production and the fine structure constant: Of course the electron is not moving at lightspeed. 1/alpha marks the orbit at which the electron would reach the velocity c, and because this is not possible (because GUTCP relies on special relativity as one of its foundations) the last permitted orbit is the fraction 1/137. Orbit 1/138 would result in an electron velocity greater than c. And in between, the pair production process happens. This transition state orbitsphere is not a traditional orbit of the electron but rather a short-lived state where (in the case Driscoll describes) the photon wave (photon orbitsphere) changes to become an electron and a positron. To get an impression of how this might work I think one has to see the animations of the fields of the photon and the free electron. I think they are somewhere on BLP's page.
To your other question regarding my two links: they are linked to Mills' equations because they use the nonradiation condition to construct models for electrons. The paper from 1990 is interesting because they use a simple ad hoc nonradiation condition for the simplest case. They then solve Maxwell's equations for their simple nonradiation condition and can show that the electron can have a stable orbit, and they directly show that the spin is a direct physical consequence of their solution and not "inherent" as in QM. They are completely unrelated to Mills but basically had the same idea and could reproduce a small part of Mills' results. Instead of the ad hoc simplest nonradiation condition, Mills took the general case, and as a model of the electron he used the 2D wave equation. Btw, this also shows that Mills is not randomly putting numbers together – because these guys got the same result as Mills, at least for the spin.
And the other paper shows that it is possible to construct not only the electron but other particles with this nonradiation condition so that they are stable – it is more or less a proof, or at least an indication, that Mills' model does not violate any accepted law of nature (Maxwell, Newton) and gives stable models for atoms.
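For reference, the nonradiation condition these papers (and Mills) build on can be stated compactly; this is the standard Goedecke/Haus-style formulation, paraphrased rather than quoted from either source: an extended charge-current distribution J(x,t) does not radiate if its spacetime Fourier transform has no components on the light cone,

\[
\tilde{\mathbf{J}}(\mathbf{k},\omega)
= \int \mathbf{J}(\mathbf{x},t)\,
e^{-i(\mathbf{k}\cdot\mathbf{x}-\omega t)}\;d^3x\,dt
\;=\; 0
\quad\text{whenever}\quad |\mathbf{k}| = \frac{\omega}{c},
\]

i.e. the source must contain no Fourier components synchronous with a freely propagating electromagnetic wave.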
• In regards to Epimetheus's post above, I can't remember where I read it, it was either on the forum or in Brett's book, but apparently many years ago, Hermann Haus told Dr. Mills privately that he had correctly solved for the structure of the electron classically. At the time he did not wish to make "waves" so to speak through public acknowledgement.
In regards to K-Capture Eric seems to be asking specifically about the case of capture of the inner shell; I've posted a question on the other forum so we'll see what Dr. Mills says.
• Here's what Dr. Mills posted. Probably not as much information as you would have liked but you can always prod him for more detail on the forum.
Also GUTCP theorizes that excited states are due to photons expressing "effective charge" and shielding the electron to a degree from the central field of the proton. I guess if one accepts that a high energy photon can convert into an electron and positron the idea of photons in certain situations expressing effective charge isn't all that strange. I'm not sure how to relate this to K-capture but just thought I'd mention it.
Randy MillsToday at 5:12 AM
K capture can only occur if the reaction can form a more stable nucleus. A proton cannot undergo K-capture, for example.
• Mills states as follows:
Mills has trademarked “Hydrino.” And because his issued patents claim the hydrino as an invention, BLP asserts that it owns all intellectual property rights involving hydrino research. BLP therefore forbids outside experimentalists from doing even the most basic hydrino research, which could confirm or deny hydrinos, without first signing an IP agreement. “We welcome research partners; we want to get others involved,” Mills says. “But we do need to protect our technology.”
The insulator-metal transition in hydrogen
Very high temperature shock wave methods might make metalized hydrogen obtainable.
This transition from molecular liquid to atomic liquid is called the PPT (discussed below). Leif Holmlid uses a quantum mechanical process called the Rydberg blockade to produce metalized hydrogen, where a Rydberg matter substance like potassium is used as a QM template to reform the atomic structure of hydrogen into the low-orbit based liquid metalized form.
* A phase of hydrogen Rydberg matter (RM) is formed in ultra-high vacuum by desorption of hydrogen from an alkali promoted RM emitter (Holmlid 2002 J. Phys.: Condens. Matter 14 13469). The RM phase is studied by pulsed laser-induced Coulomb explosions which is the best method for detailed studies of the RM clusters. This method gives direct information about the bonding distances in RM from the kinetic energy release in the explosions. At pressures >10-6 mbar hydrogen, H* Rydberg atoms are released with an energy of 9.4 eV. This gives a bonding distance of 150 ± 8 pm which corresponds to a metallic phase of atomic hydrogen using the results by Chau et al (2003 Phys. Rev. Lett. 90 245501). The results indicate that a partial 3D structure is formed.
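A quick plausibility check of the 9.4 eV figure and the ~150 pm bonding distance quoted in that abstract – a back-of-envelope sketch of my own, not from the paper: in a Coulomb explosion of two singly charged fragments, the kinetic energy release is E = e^2 / (4*pi*eps0*d), so the initial separation d can be recovered from the measured energy.

import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

E = 9.4 * e                # measured kinetic energy release, J
d = e**2 / (4 * math.pi * eps0 * E)
print(d * 1e12, "pm")      # ~153 pm, consistent with the quoted 150 +/- 8 pm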
I believe that there are other theories that have been accepted by science that explain below-base hydrogen orbits, specifically metalized hydrogen. High pressure physics is directed at producing metalized hydrogen as its major goal.
All the experimental data that Mills has accumulated may very well be consistent with high temperature shock wave produced PPT hydrogen.
From Holmlid
Instead of inverted Rydberg matter, it is spin-based Rydberg matter with orbital angular momentum l = 0 for the electrons. It is shown to be both superfluid [4] and superconductive (Meissner effect observed) at room temperature [6,7]. The measured H–H distances are short, normally 2.3 pm [1,3,9]. Several spin states with different internuclear distances exist [3]. It is likely that the main process initiated by the impinging laser pulse is a transition from level s = 2 with H–H distance of 2.3 pm, to level s = 1 with theoretical distance 0.56 pm. At this distance, nuclear reactions are spontaneous and laser-induced nuclear processes are thus relatively easy to start.
@Epimetheus
I don't understand this connection: the radial solutions for Schrödinger's equation of hydrogen are essentially Laguerre polynomials times an exponential, while in the derivation you gave me he uses spherical Bessel functions as the radial function. So I can't follow this line of thought. But it is true that the Fourier transform with spherical Bessel functions for the radial part does indeed Fourier transform into Mills' charge distribution.
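A minimal numerical sketch of that last statement (my own check, not Mills' derivation), for the simplest l = 0 case: the 3D Fourier transform of a uniform spherical shell of radius r0 with unit total charge reduces to the angular integral below and equals j_0(k r0) = sin(k r0)/(k r0).

import numpy as np
from scipy.special import spherical_jn

r0 = 1.0
k = 3.7                                   # arbitrary test wavenumber

# angular part of the Fourier integral over the shell, done numerically
theta = np.linspace(0.0, np.pi, 20001)
integrand = np.exp(-1j * k * r0 * np.cos(theta)) * np.sin(theta) / 2.0
numeric = np.trapz(integrand, theta).real

exact = spherical_jn(0, k * r0)           # j_0(x) = sin(x)/x
print(numeric, exact)                     # agree to numerical precision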
• Has anyone worked through Appendix I to the point they feel comfortable with the derivations? I'm mostly OK with it (except for the discussion of the H() and G() functions). However, the conclusion uses some terms that aren't well explained. While I *think* I understand these, can anyone take a crack at providing a more intuitive justification for the highlighted equations? Exactly what is represented by the cross product s_n × v_n? Is omega_n the angular frequency of the emitted photon? I think s_n is the spatial frequency expressed in rad/m, and v_n is a velocity in m/sec of the current density. And radiation requires that the cross product of the two at some point on the orbitsphere equals the photon's wavelength. Am I understanding this correctly?
• I think that in order to understand a proof of this you should, instead of the path taken, expand the plane wave in the Fourier transform into a sum of spherical Bessel functions and spherical harmonics; the sum will cancel almost all terms except a single Bessel function and spherical harmonic matching the quantum numbers of the Mills charge distribution, due to orthogonality. You will end up with the Fourier transform being:
(*) j_l(|s|r) Y_lm(theta,phi)
This is much better, because the stated equation (38) in Mills takes a convolution with all factors except the last having s, and you just can't show that this expression disappears from the properties of the convolution. Now for a specific w0, |s| has a certain magnitude for light-like wave numbers, and hence r can be chosen so that |s|r in (*) is a zero of the spherical Bessel function, and (*) is shown to be zero for all light-like s, w. To understand everything that is written is hard, though. In all, to motivate the non-radiation condition one only needs half a page I think, and it could be kept much, much simpler than what's written in the book.
• To further explain and highlight that the key to a proper mathematical understanding of Mills' theory is the expansion of plane waves in various ways.
We have a photon inside the atom that is trapped. Consider the superposition of EM plane waves, and assume that the wave vectors of all the plane waves are evenly distributed, e.g. they live on a sphere of constant radius. Again the theorem where you expand the plane wave in Bessel functions and spherical harmonics applies, and we get the explicit solution of the electrical potential as
~ j_0(|r|w/c) exp(i w t), r = sqrt(x^2+y^2+z^2)
j_0(x) = sin(x)/x, and hence |r|w/c = 2 pi for a zero
<=> |r| (2 pi f) / c = 2 pi
<=> |r| f / c = 1
<=> |r| / (T c) = 1, with T = 1/f
<=> |r| / lambda = 1, with lambda = T c
<=> |r| = lambda
So the lambda of the trapped photon has to be the same as the radius, as described in option geek's post above.
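A two-line numerical check of the zero used above; note that j_0(x) = sin(x)/x in fact vanishes at every x = n*pi, so |r|w/c = 2*pi is its second zero (the first, at pi, would give |r| = lambda/2; the 2*pi root is presumably the one matched to the orbitsphere condition described above).

import numpy as np

x = np.array([np.pi, 2 * np.pi])
print(np.sin(x) / x)   # both ~1e-16, i.e. zeros of j_0 at x = pi and x = 2*pi

# the x = 2*pi root gives |r| * (2*pi*f) / c = 2*pi  =>  |r| = c/f = lambda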
What utter nonsense. Even Randell Mills can't patent physics, and how could he possibly forbid someone else from experimenting? Is he planning to put a copyright tag on each and every hydrino?
ETA. Next step: GE patents the electron and forbids anyone else from using 'pirate' versions.
• Read the paper that the jack-booted thugs at BLP don't want you to see.
The fact that BLP tries to suppress basic scientific research only means they are not worthy of our attention. Any organization which would send a cease and desist letter to a replicator (who is not seeking to commercialize the technology) is not worthy of existing. My hope is that LENR technologies -- which produce millions of eV per reaction -- arrive on the market soon and cause BLP to lose all future funding.
• What do you want to tell us? That paper sees indications of the unusual development of bright light as claimed by Mills. This is more supportive of Mills' theory than the opposite. But regarding an independent validation, I find the papers of world-class plasma physicists like Kroesen and Conrads much more compelling:
Conrads, H, R Mills, and Th Wrubel. (2003) “Emission in the deep vacuum ultraviolet from a plasma formed by incandescently heating hydrogen gas with trace amounts of potassium carbonate.” Plasma Sources Sci Technol 12: 389–395.
Driessen, N. M., E. M. van Veldhuizen, P. Van Noorden, R. J. L. J. De Regt, and G. M. W. Kroesen. (2005) "Balmer-alpha line broadening analysis of incandescently heated hydrogen plasmas with potassium catalyst." In XXVIIth ICPIG, Eindhoven, the Netherlands, 18-22 July.
I don't share your opinion that the cease and desist letter means anything. It just tells me that after Rossi we have another guy who is totally scared of losing the race against his competitors. Mills wants to make a lot of money and he owes his private investors a huge return on investment. He also needs money to start some new companies that have other products predicted by GUTCP in their focus. And of course he wants to sue the a$$ of everyone who harmed his credibility, like Wikipedia, Rathke, etc.
Being the lone wolf can make you a bit weird. In my eyes Mills is way ahead of Rossi regarding basic decent human behavior.
• Randell Mills is not a decent human being. As I said in my previous post, he is a thug. Andrea Rossi, despite his less than complete honesty and straightforwardness, has never attempted to sue those who performed replications of his technology. He never sent cease and desist letters to Parkhomov, Songsheng, Stepanov, Alan Smith, and a dozen other individuals. Why? Because Andrea Rossi realizes that attempting to prohibit, under threat of litigation, basic scientific research is absolutely repugnant. It's not simply bad, but the polar opposite of the open source movement.
Basically, he is claiming that trying to replicate a scientific phenomenon (in this case the reality of the hydrino) is something no one has the right to do unless they sign up with his company. No one has any duty or obligation whatsoever to ask his permission or sign any document with Black Light Power before performing not-for-profit research. He's basically trying to be the dictator of an entire branch of science which he has no right to be. But even if he was trying, his dictatorship is a flop. After decades of research and making huge claims and pronouncements about a dozen different variations of their technology, the best he can come up with is a giant Rube Goldberg device. Even if his figures and those of his validation team are confirmed, it will be many years before a SunCell would be robust enough to operate for many months or years in an industrial setting.
LENR has him beat and he knows it, due to the very basic physics involved. His technology isn't really somewhere between nuclear and chemical. That is like saying, "my speed on my bicycle is somewhere between a turtle's and an ICBM's." And if you notice, he doesn't even speak about his beloved "hydrino hydrides" anymore. He used to brag about them: waving tubes of multi-colored crystals, he'd claim they had all sorts of amazing properties. Now they have vanished.
My hope is that Black Light Power folds in short order. I would hope the same for any company or organization that would threaten a lawsuit over a simple replication attempt. We've had enough petty, arrogant dictators on this planet -- they've been responsible for all sorts of atrocities. We definitely don't need them in science. |
16ee193fa8d7e8af | Hidden Symmetries of the Hydrogen Atom
Here’s the math colloquium talk I gave at Georgia Tech this week:
Hidden symmetries of the hydrogen atom.
Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether’s theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.
I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all known, but I feel like explaining it my own way.
In the process of creating the slides and giving the talk, though, I realized there’s a lot I don’t understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan’s nice intuitive argument for how you can get some ‘Runge–Lenz symmetries’ in the 2d Kepler problem. I might as well just quote his article:
• Greg Egan, The ellipse and the atom.
He says:
Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently.
But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it’s travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same.
Since we haven’t changed the planet’s distance from the sun, its potential energy is unaltered, and since we haven’t changed its velocity, its kinetic energy is the same. What’s more, since the speed of a planet of a given mass when it’s moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit’s axis must be parallel to the axis of the original orbit.
Rotations together with these ‘Runge–Lenz transformations’ generate an SO(3) action on the space of elliptical orbits of any given energy. But what’s the most geometrically vivid description of this SO(3) action?
Someone at my talk noted that you could grab the planet at any point of its path, move it anywhere the same distance from the Sun while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form?
I have a bunch more questions, but this one is the simplest!
16 Responses to Hidden Symmetries of the Hydrogen Atom
1. John Baez says:
Okay, here’s a guess about my puzzle, which may be implicit in Göransson’s work. I’ll talk about the 2d Kepler problem, but everything should generalize to the 3d problem if it works at all.
Take an ellipse in the plane with its focus at the origin. Draw a point on it. This describes a possible orbit of a planet in the plane, with the point indicating where the planet is at time zero.
We can think of this ellipse as the 2d projection of some great circle on a sphere in 3 dimensions. The diameter of this sphere is the semi-major axis of the ellipse. The center of this sphere is not the ellipse’s focus, but instead its “center”.
The point on the ellipse gives a point on this great circle, so we get a “pointed great circle”.
Now take all pointed great circles on the sphere. Their projections give pointed ellipses, all having semi-major axes of the same length. All these ellipses have the same center, too, but they have different foci. Translate them so their focus is always at the origin!
Now we have all possible pointed ellipses with the same semi-major axis and with focus at the origin. These are all possible orbits of a planet with a given fixed energy. Why? Because the energy of a planet in a given orbit is a function of the semi-major axis (together with the planet’s mass, the Sun’s mass and the gravitational constant).
So, we’ve gotten a one-to-one correspondence between elliptical orbits of a planet with a given energy and pointed great circles on the sphere. This gives a way for the rotation group SO(3) to act on this set of elliptical orbits!
The claim is that these SO(3) symmetries are the ones that, via Noether’s theorem, give the angular momentum and Runge–Lenz vector as conserved quantities!
2. Tali says:
Hi John,
Enjoyed the post. I can’t see why the group acting here is SO(3).
Also, if you grab a planet at any point of its path and move it anywhere the same distance from the sun, you will most probably get a non-periodic orbit, or am I wrong?
• John Baez says:
Tali wrote:
Yes, it’s not obvious without calculations, and not obvious from my talk. I believe it can be made more obvious, and my comment, above yours, is my guess as to how. This contains a link to an earlier post here on Azimuth, about Göransson’s work on this issue. Check that out!
If you do what I said in my post you’ll get a periodic orbit, because the new orbit will have the same energy as the old one, and if an orbit for the inverse square force law is periodic, so are all other orbits of that energy. (They will all be ellipses with the same semi-major axis.)
3. Greg Egan says:
I just want to add some background that might be helpful. I know John already knows all this, but people reading about these higher-dimensional symmetries for the first time might not.
The Laplace–Runge–Lenz vector ( http://en.wikipedia.org/wiki/Laplace–Runge–Lenz_vector ) is usually defined as:
A = m v \times L - m k r/\left| r\right|
where k is the force constant for the Kepler problem.
A points along the axis of symmetry of the orbit, from the centre of attraction towards the point of closest approach. It is conserved for any given orbit, and its length depends on the eccentricity of the orbit, going to zero for a perfectly circular orbit.
There is a rescaled version that has some nice properties. If we define:
\displaystyle{ M = \frac{A}{\sqrt{-2 E m}} }
where E is the total energy (negative for a bound orbit), then M and the angular momentum vector L have a conserved sum of squares:
\displaystyle{L^2 + M^2 = \frac{k^2 m}{-2 E}}
There is a direct parallel between this equation and the Pythagorean equation between the traditional measures of an ellipse, a,b,c:
b^2 + c^2 = a^2
Here a is the semi-major axis of the ellipse, b is the semi-minor axis, and c is half the distance between the foci. These are related to physical quantities in the Kepler problem by:
\displaystyle{a = -\frac{k}{2 E}}
\displaystyle{b^2 = -\frac{L^2}{2 E m}}
\displaystyle{c^2 = -\frac{M^2}{2 E m}}
So a is fixed by energy alone, and then, for fixed energy, b is proportional to L, while c is proportional to the scaled LRL vector M.
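These conservation laws are easy to check numerically. Here is a minimal sketch (not from the original discussion; it assumes unit mass and force constant and an arbitrary bound, eccentric initial condition) that integrates the planar Kepler problem and verifies that L² + M² equals k²m/(−2E) along the orbit:

```python
import numpy as np
from scipy.integrate import solve_ivp

m = k = 1.0  # unit mass and force constant (assumption for illustration)

def kepler(t, u):
    r, v = u[:2], u[2:]
    return np.concatenate([v, -k * r / np.linalg.norm(r)**3])

u0 = np.array([1.0, 0.0, 0.3, 0.9])  # a bound, eccentric planar orbit
sol = solve_ivp(kepler, (0, 30), u0, rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.0, 10.0, 20.0):
    r, v = sol.sol(t)[:2], sol.sol(t)[2:]
    E = 0.5 * m * v @ v - k / np.linalg.norm(r)
    L = m * (r[0] * v[1] - r[1] * v[0])            # scalar angular momentum in 2d
    A = m * L * np.array([v[1], -v[0]]) - m * k * r / np.linalg.norm(r)
    M = A / np.sqrt(-2 * E * m)
    print(L**2 + M @ M, k**2 * m / (-2 * E))       # the two columns should agree
```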
This sum of squares relationship between L and M suggests that they are acting like two parts of some higher dimensional object of a fixed magnitude. And there is such a higher-dimensional object, in four dimensions (or three, for the planar Kepler problem). Probably the simplest version of it is the bivector:
B = L + e_w \wedge M
where the angular momentum vector L is construed as a bivector giving the plane of the rotation, and e_w is a unit vector orthogonal to the 2 or 3 of ordinary space.
What does this bivector mean? There are two ways we can associate the elliptical orbits of the planet with great circles on a sphere in one higher dimension. John has already described one, used by Göransson, where we project great circles on a higher-dimensional sphere orthogonally into ordinary space.
The other way, with an older pedigree, is to consider the “hodogram” of the orbit: the curve traced out by the velocity of the planet. For the Kepler problem, this will always be a circle, but its centre will be displaced from the origin if the orbit itself is non-circular. All of these velocity circles, for a fixed total energy, can be constructed by stereographically projecting great circles from a higher-dimensional sphere into velocity space.
In either case, the plane in which these higher-dimensional great circles lie is precisely that defined by our bivector, B. So we get an action of the rotations in the higher-dimensional space on B, which conserves the bivector’s overall norm, and hence the sum-of-squares relationship between the associated L and M vectors for a fixed energy.
In other words, rotating the bivector B in the higher-dimensional space gives us an action on pairs of vectors (L,M) that are angular momentum and Laplace–Runge–Lenz vectors for orbits of a particular energy.
• John Baez says:
Thanks for the overview, Greg! One question is whether the two ways of relating elliptical orbits of a certain fixed energy to great circles on the 3-sphere are ‘the same’, at least up to some normalizations.
I think you’re answering in the affirmative, since you’re saying in either approach the great circle coming from an orbit lies in the plane defined by the bivector B = L + e_w \wedge M.
Another way to get at the answer is to directly relate the circle traced out by the velocity vector of the elliptical orbit to the great circle in Göransson’s approach. Have you tried that?
The tricky thing here is that Göransson (and Souriau, and Moser) use a reparametrization of time to convert elliptical orbits into simple harmonic motion, so his velocity is not the true velocity.
I’ll explain this for the 3d Kepler problem, though the dimension doesn’t really affect much. Let
\mathbf{r}(t) = (x(t), y(t), z(t))
be the position of our planet at time t. If we use this inverse square force law
\displaystyle{\ddot{\mathbf{r}} = - \frac{\mathbf{r}}{r^3} }
and choose E = -1/2, conservation of energy says
\displaystyle{ \frac{\dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} - \frac{1}{r} = -\frac{1}{2} }
If we choose a new parameter s so that
\displaystyle{ \frac{d s}{d t} = \frac{1}{r} }
and use a prime for the derivative with respect to s, we get
\displaystyle{ t' = \frac{dt}{ds} = r }
\displaystyle{ \mathbf{r}' = \frac{d\mathbf{r}}{ds} = \frac{dt}{ds}\frac{d\mathbf{r}}{dt} = r \dot{\mathbf{r}} }
Then conservation of energy can be rewritten as
\displaystyle{ (t' - 1)^2 + \mathbf{r}' \cdot \mathbf{r}' = 1 }
which is the equation of a 3-sphere in 4-dimensional space! It’s a sphere of radius one centered at the point (t', \mathbf{r}') = (1, \mathbf{0}).
With some further calculation we can show some other wonderful facts:
\mathbf{r}''' = -\mathbf{r}'
t''' = -(t' - 1)
so the velocity 4-vector (t',x',y',z') moves in simple harmonic motion, following a great circle on the 3-sphere… if velocity is computed using our new time coordinate, s.
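Here is a quick numerical check of that claim, a sketch assuming an arbitrary bound orbit with E = -1/2: since t' = r and \mathbf{r}' = r\dot{\mathbf{r}}, the identity (t'-1)^2 + \mathbf{r}'\cdot\mathbf{r}' = 1 can be tested along a trajectory integrated in ordinary time:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):  # inverse square force law: r'' = -r/|r|^3
    r, v = u[:3], u[3:]
    return np.concatenate([v, -r / np.linalg.norm(r)**3])

# |r| = 1, |v| = 1 gives E = 1/2 - 1 = -1/2; r.v != 0 makes the orbit non-circular
u0 = np.array([1.0, 0.0, 0.0, 0.6, 0.8, 0.0])
sol = solve_ivp(f, (0, 20), u0, rtol=1e-10, atol=1e-12, dense_output=True)

for t in np.linspace(0, 20, 5):
    r, v = sol.sol(t)[:3], sol.sol(t)[3:]
    rn = np.linalg.norm(r)
    tp, rp = rn, rn * v            # t' = r,  r' = r * dr/dt
    print((tp - 1)**2 + rp @ rp)   # should print 1 at every sampled time
```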
But I have not thought enough about how this is related to the circle traced out by the original velocity 3-vector (\dot{x}, \dot{y}, \dot{z}). If we take (t',x',y',z') and do a stereographic projection down to 3 dimensions, do we get (\dot{x}, \dot{y}, \dot{z})?
• Greg Egan says:
I’m confident of two things: (1) the semi-minor axis of an ellipse that is the orthogonal projection of a great circle is proportional to the cosine of the angle between the plane of the great circle and the plane it is projected on, and (2) in the usual stereographic projection of the velocity circle from a great circle, there is the same relationship between the angle of the plane of the great circle and the semi-minor axis of the elliptical orbit.
As far as I understand Göransson’s approach, both the point (t-s,x,y,z) and the velocity wrt s move in great circles, and unless I’m confused, the two planes will necessarily be parallel. So the great circle traced out by (t',x',y',z') will also be parallel to the great circle in the standard approach, and up to some choice of scaling, both ought to project to the same circles in velocity space.
• John Baez says:
Okay, great! I think you’re right: as a function of s, the point (t-s,x,y,z) moves around a great circle on the unit 3-sphere at unit speed. So, its velocity vector does the same, and in the same 2-plane.
4. Wolfgang says:
As a chemist I wonder: Is this a symmetry taking one orbital type into another one? I once saw a picture about it in a book by Peter W. Atkins. Imagine a sphere whose surface is marked such that one hemisphere represents the + sign of a wavefunction and the other hemisphere the – sign. Now stereographically project it to the plane. If the sphere is oriented such that its polar axis is perpendicular to the plane you will get only one sign in the plane (similar to an s-orbital); if the axis is parallel to the plane you will get + and – separated by a nodal line (like a p-orbital). Thus, in the plane you create some very distinct “orbitals”, while in space you only have to rotate the sphere. Of course the analogy to the hydrogen atom would set the stage one dimension higher, so it would be a 4D rotation that interchanges orbitals of different types. Unfortunately I never really looked into the mathematical details of the analogy, and I think it is yet another kind of hidden symmetry of the mathematical treatment of this kind of problem?
• John Baez says:
Wolfgang wrote:
Yes, exactly! All states of a given energy lie in a single irreducible representation of SO(4). What this means is that the hidden 4-dimensional rotation symmetries of the hydrogen atom can do things like take a 4s state to a 4p state or 4d state or 4f state. So, all 16 states of the n = 4 shell (one 4s, three 4p, five 4d and seven 4f) are related by 4-dimensional rotation symmetries, while 3d rotation symmetries only suffice to relate states in a given row.
I don’t understand the pictorial approach you’re describing well enough to see if it’s secretly a lower-dimensional analogue of what I’m talking about. I do know that what I’m talking about works in every dimension—if we posit atoms in other dimensions that are still governed by the inverse square force law, which is a bit odd.
• Wolfgang says:
I looked up the source: it’s “Galileo’s Finger: The Ten Great Ideas of Science” by Peter Atkins, chapter six, “Symmetry”, Fig. 6.7 on page 175. Unfortunately, since it is really a text addressed to laymen, it contains neither any mathematics nor references where this idea is made more precise. I can only guess that Atkins might have written about it elsewhere, e.g. in his book “Molecular Quantum Mechanics”, but I have not checked so far.
• Greg Egan says:
John wrote:
I’m not sure if I’m taking you more literally than you intended, or if I’m nitpicking unnecessarily, but although a generic element of SO(4) will certainly take a 4s state to a superposition of states that includes a 4p state (and a 4d state, and a 4f state), I’m not convinced that any element of SO(4) can take a 4s state to a 4p state.
That is, given an eigenfunction of the 3D orbital angular momentum operator L^2 with \ell=0, I don’t believe there is an element of SO(4) that takes it to an eigenfunction of L^2 with \ell=1. It’s not inconceivable that there could be such an element, but it seems like it would take something much stronger than having an irreducible representation of SO(4) to make this true.
• John Baez says:
You’re right; I was being pretty sloppy.
It would be fun to dig into this issue, and see exactly which vectors you can get by applying an SO(4) element to an ns state, but I have too many other puzzles on my mind to tackle this one!
• John Baez says:
Here’s a fun thing that’s sort of related. It’s true that an irreducible unitary representation of a group doesn’t usually carry any unit vector to any other. But sometimes the group \mathrm{Spin}(n) acts transitively on the unit sphere of its irreducible spinor representations; this happens up to n = 6 but not beyond. One exciting case is n = 5, where
\mathrm{Spin}(5) \cong \mathrm{Sp}(2)
is the group of 2 × 2 quaternionic unitary matrices, and the spinors are \mathbb{H}^2: the quaternionic unitary matrices act transitively on the unit sphere in \mathbb{H}^2. Another is n = 6, where
\mathrm{Spin}(6) \cong \mathrm{SU}(4)
is the group of 4 × 4 complex unitary matrices with determinant 1, and the spinors are \mathbb{C}^4: these matrices act transitively on the unit sphere in \mathbb{C}^4. When you get beyond n = 6 there are pure spinors, which are different from, and ‘better’ than, the rest.
5. John Baez says:
I got an email from someone who has dug deeper into the quantum mechanics of the 2d Kepler problem:
• Eyal Subag, Symmetries of the hydrogen atom and algebraic families.
Abstract. We show how the Schrödinger equation for the hydrogen atom in two dimensions gives rise to an algebraic family of Harish-Chandra pairs that codifies hidden symmetries. The hidden symmetries vary continuously between SO(3), SO(2,1) and the Euclidean group O(2)⋉R2. We show that solutions of the Schrödinger equation may be organized into an algebraic family of Harish-Chandra modules. Furthermore, we use Jantzen filtration techniques to algebraically recover the spectrum of the Schrödinger operator. This is a first application to physics of the algebraic families of Harish-Chandra pairs and modules developed in the work of Bernstein et al. [Int. Math. Res. Notices, rny147 (2018); rny146 (2018)].
One interesting thing about this paper is that it constructs a ‘space of 2d hydrogen atom states’ for any complex energy. The physical meaning of these seems open for exploration.
6. Thy Boy Who Lived says:
“Same velocity, AND SAME DISTANCE from the Sun at these points, so same total energy”
This is not correct according to the figure drawn. The four points shown are not at the same distance. All you have to do is draw a straight line from the Sun through the points on the ellipse, and you can clearly see that the points on the ellipse are much closer. Don’t worry though, the error is only in the drawing because they didn’t draw the ellipse and the circle with the same exact total area. At that point the dotted arc would overlay the circle and should be eliminated because it would be redundant. Sorry to be picky like that but … you know … it’s mathematics.
• Greg Egan says:
You’re mistaken about the figure, unless you’re viewing the web page on a device that distorts the aspect ratio; the dashed arc is perfectly circular and it is centred on the sun. I checked the image file itself, and all four points’ centres are 231 pixels from the centre of the sun.
I have no idea why you think the two orbits should have the same area (or why you think one of them is circular). The two orbits have the same semi-major axis (as required) and different semi-minor axes, so they do not have the same area. You could choose the new position for the planet so that its orbit was circular, and then the dashed arc would overlay it, but that would make for a far more confusing image because the generality of the construction would not be apparent.
|
35f6e2bb1baeb736 | Controlled excitation and resonant acceleration of ultracold few-boson systems by driven interactions in a harmonic trap
Ioannis Brouzos and Peter Schmelcher, Zentrum für Optische Quantentechnologien, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany
July 7, 2019
We investigate the excitation properties of finite ultracold bosonic systems in a one-dimensional harmonic trap with a time-dependent interaction strength. The driving of the interatomic coupling induces excitations of the relative motion exclusively, with specific and controllable contributions of momentarily excited many-body states. Mechanisms for selective excitation to few-body analogues of collective modes and for acceleration occur in the vicinity of resonances. We study via the few-body spectrum and a Floquet analysis the excitation mechanisms, the corresponding impact of the driving frequency and strength, as well as the initial correlation of the bosonic state. The fundamental case of two atoms is analyzed in detail and forms a key ingredient for the bottom-up understanding of cases with higher atom numbers, thereby examining finite-size corrections to macroscopic collective modes of oscillation.
I Introduction
Detailed control of both single-particle potential landscapes and interparticle interactions is an appealing feature of ultracold atom physics pethick_smith (). Apart from cooling the atoms to ultra-low temperatures, it is by now routinely possible to design almost arbitrarily shaped optical and magnetic traps and to tune them with an unprecedented control bloch (). The interactions among ultracold atoms can be adjusted by exploiting magnetically and optically induced Feshbach resonances chin (). The dimensionality of the system can be tuned by strongly confining the transverse degrees of freedom, leading to quasi one-dimensional traps and dimensionality-specific phenomena like the Tonks gas kinoshita (); paredes () of impenetrable bosons girardeau () and its attractive excited-state counterpart, the super-Tonks gas haller (). For these low-dimensional systems an additional tool to control the interactions is provided by confinement-induced resonances olshanii ().
In the present work we take advantage of this experimental progress concerning the design of external and, most importantly, interatomic forces, and consider the effects of a time-dependent oscillating interaction strength in a one-dimensional harmonic trap. Time-dependent driving is usually applied to external traps, providing inspiring effects such as the dynamical control of tunneling lignier (); zenesini (), dynamical localization eckardt (), and photon-assisted tunneling sias () via a periodic driving of the lattices, or the excitation of collective oscillations in a harmonic trap moritz (), to mention only a few. However, investigations considering a time-dependent scattering length, usually referred to as 'Feshbach resonance management' kevrekidis03 () in the mean-field situation, have inspired a lot of research on the control of solitons kevrekidis () and modulational instabilities adhidary (). Experimental investigations in this direction have been performed recently donley (); bagnato (). The main advantage of driving the scattering length, compared to driving modes applied to the external potential for examining collective excitations stringari (); moritz (), is that other species or non-condensate fractions are not affected by the driving of the interaction of a single species bagnato (). Apart from the harmonic trap, a two-mode system with time-varying interaction has been studied with Floquet theory, leading to many-body coherent destruction of tunneling and localization gong ().
On the other hand, beyond the mean-field regime there are relatively few works dealing with the time-dependent modulation of the scattering length, addressing mainly the experimental results on the formation of molecules donley () from a few-body perspective molmer (); blume (). While these works concentrate mainly on the attractive part of the two-body spectrum and on the corresponding bound state, our work focuses exclusively on repulsive interactions. As illustrated in Ref. molmer, in a harmonic trap the coupling of an excited two-body state to the molecular ground state is very efficient, since nearby states are out of resonance and possess a relatively small coupling to the initial state. For the repulsive case, as we will demonstrate, the relevant few-body states reflect the equidistant spectrum of the harmonic trap and therefore lead to a more complex dynamics involving several instantaneous configurations. A recent publication petrov () explored the integrability of the system via a similar model with time-modulated interaction, thereby calculating the dynamical structure factor.
In this work, we aim to examine from a few-body perspective the effects resulting from a periodic modulation of the repulsive interaction strength, focusing on few-body collective excitations, control of the dynamics and state population, as well as mechanisms of acceleration via resonances. The case of two atoms in a one-dimensional harmonic trap, for which the energy spectrum is known analytically for a constant arbitrary strength of the interaction busch (), serves as a starting point of the investigation. The successful experimental preparation of few-body systems with a controllable number of atoms has been realized in Ref. friedhelm () and the energies have been measured with high precision selim (). Since the modulation of the interaction strength affects exclusively the relative motion in this system (the center of mass is decoupled and therefore unaffected), we study explicitly the internal motion. Firstly the focus is on the frequencies and the driving amplitudes that give rise to a controllable excitation to particular states after preparation in a certain initial state with a specific strength of the interaction. Additionally we examine particular few-body analogues of collective macroscopic modes of breathing oscillations, as well as a resonant acceleration mechanism via multiple excitations. Our analysis of resonances and acceleration modes is supported by calculations of the Floquet spectrum for the effective single degree of freedom of the relative motion within a harmonic trap and an oscillating delta barrier. Going to higher atom numbers we demonstrate similarities and analyze the differences with the basic case of two particles, and compare the results with those of macroscopic calculations, showing finite-size effects on the collective modes. All calculations are performed by the numerically exact Multi-Configurational Time-Dependent Hartree method (MCTDH, see Appendix), which is especially designed to treat the dynamics of many degrees of freedom under time-dependent modulations.
This article is organized as follows: In Section II we introduce our model. In Sections III and IV our focus is on the case of two particles, thereby examining the mechanisms of controllable collective excitations to specific states and the influence of parameter changes on them. We investigate in Section V the acceleration mechanism via multiple excitations and calculate the Floquet spectrum of this case, illustrating the underlying mechanism for the appearance of the resonances. An extension of this study to higher atom numbers is performed in Section VI, concentrating on the analogue of the breathing mode for collective oscillations and finite-size corrections. In the last Section VII we summarize our results and provide an outlook.
II Modeling the time-dependent interaction
Quasi one-dimensional waveguides can be created by choosing a strongly focused laser field yielding strongly confined transversal directions compared to the longitudinal one. In this way the trap becomes highly anisotropic, with the characteristic length $a_{\perp} = \sqrt{\hbar/m\omega_{\perp}}$ for the transversal trapping much smaller than the longitudinal one ($\omega_{\perp}$ is the frequency of the harmonic confinement in the transversal direction). Consequently the transverse degrees of freedom are energetically frozen, as only the ground state is occupied, and the effective 1D interaction strength for the case of contact interactions reads olshanii ():

g_{1D} = \frac{2\hbar^2 a_s}{\mu a_{\perp}^2}\,\frac{1}{1 - C\, a_s/a_{\perp}}, \qquad C \approx 1.4603, \quad (1)

where the free-space s-wave scattering length $a_s$ does not depend on the detailed appearance of the potential for the interatomic interaction, which is then modeled by an effective contact potential ($\mu$ is the reduced mass of the atom pair). There are two parameters in Eq. (1) that can be tuned to attain a time-dependent interaction strength: (i) the scattering length, via a change of the strength of e.g. a magnetic field approaching to or departing from a Feshbach resonance (Feshbach resonance management), as $a_s(B) = a_{bg}\left(1 - \frac{\Delta B}{B - B_0}\right)$, where $\Delta B$ and $B_0$ are the width and the position of the resonance, respectively, and $a_{bg}$ is the background scattering length; and (ii) the transversal length $a_{\perp}$, by modifying the relevant laser parameters, taking into account the quasi-one-dimensional restrictions posed above (see also Ref. bagnato, ).
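These two tuning knobs translate directly into a time-dependent coupling. A minimal sketch (illustrative, roughly Rb-87-like numbers; the resonance parameters and the transversal length are hypothetical):

```python
import numpy as np

hbar = 1.0545718e-34  # J s

def a_s(B, a_bg, B0, dB):
    """Free-space scattering length near a magnetic Feshbach resonance."""
    return a_bg * (1.0 - dB / (B - B0))

def g_1d(a, a_perp, m):
    """Olshanii's effective 1D coupling; C = |zeta(1/2)|, mu = m/2 for a pair."""
    C = 1.4603
    return (2.0 * hbar**2 * a) / ((m / 2.0) * a_perp**2) / (1.0 - C * a / a_perp)

m = 1.443e-25                          # kg, mass of Rb-87
a_perp = 60e-9                         # m, transversal oscillator length (assumed)
B = np.linspace(150e-4, 160e-4, 5)     # T, field values below a fictitious resonance
a = a_s(B, a_bg=5.3e-9, B0=165e-4, dB=10e-4)
print(g_1d(a, a_perp, m))              # g_1D grows as B approaches B0
```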
The one-dimensional $N$-body Hamiltonian with a time-dependent coupling reads:

H = \sum_{i=1}^{N}\left(-\frac{1}{2}\frac{\partial^2}{\partial x_i^2} + \frac{1}{2}x_i^2\right) + g(t)\sum_{i<j}\delta(x_i - x_j), \quad (2)

where we have performed a scaling transformation setting the length scale equal to the longitudinal characteristic oscillator length $a_{\parallel} = \sqrt{\hbar/m\omega_{\parallel}}$ and the energy scale to $\hbar\omega_{\parallel}$, while the scaled interaction strength is $g = g_{1D}/(\hbar\omega_{\parallel}a_{\parallel})$. The interaction potential between each pair of particles is represented by the Dirac $\delta$-function. We note that for numerical purposes, a Gaussian with a very small width of the order of the grid spacing is employed.
Initially, the particles are prepared in the ground state of the harmonic trap with an interaction strength $g_0$. We will then explore the excitation dynamics for a periodic driving of the repulsive interaction strength of the form:

g(t) = g_0 + g_A \sin^2(\omega t/2), \quad (3)

where $g_A$ is the amplitude of the driving and $\omega$ the driving frequency. The impact of each of these three parameters of the driving law ($g_0$, $g_A$, $\omega$) will be examined. The reason for the specific choice of the driving is our focus on repulsive interactions, i.e., $g(t)$ should stay positive even for $g_0 = 0$. Since $\sin^2(\omega t/2) = \frac{1}{2}(1 - \cos\omega t)$, the above driving law comprises a periodic oscillation with frequency $\omega$. Investigating purely attractive interactions as done in Ref. molmer, or alternating between attractive and repulsive interactions such as in Refs. blume, ; donley, ; kevrekidis03, represent interesting but different situations.
III Relative motion of the two-atom problem and instantaneous eigenspectrum
In general, $N$ particles in a harmonic trap with contact interaction represent a separable problem $H = H_{CM} + H_{rel}$. The center of mass $R = \frac{1}{\sqrt{N}}\sum_i x_i$ and its conjugate momentum $P$ constitute the center of mass Hamiltonian $H_{CM} = \frac{1}{2}P^2 + \frac{1}{2}R^2$. The Hamiltonian of the relative motion is in general not subject to further simplifications and cannot be solved analytically. Nevertheless, for the special case of two particles the relative motion ($r = (x_1 - x_2)/\sqrt{2}$) reduces to an effective one-body problem:

H_{rel} = -\frac{1}{2}\frac{\partial^2}{\partial r^2} + \frac{1}{2}r^2 + \frac{g(t)}{\sqrt{2}}\,\delta(r). \quad (4)

The contact interaction affects only the relative motion, leaving the center of mass unaffected. Therefore we focus on the relative motion, which actually represents a one-particle problem with a harmonic trap and a delta barrier of oscillating height placed in the center. For $g = 0$, $H_{rel}$ is the standard harmonic oscillator Hamiltonian; for a time-independent coupling $g$ it defines an analytically solvable eigenvalue equation busch (), which we will discuss next as it is very important for the understanding of the excitation dynamics of the relative motion. The solutions cover in general the complete interval of coupling strengths, and we will refer to them as the instantaneous eigenstates $\Phi_n(g(t))$ with energy levels $\epsilon_n(g(t))$ at a certain time instant $t$. In spite of the existence of these stationary solutions the driven time-dependent problem possesses no closed analytical solution, although a study via the evolution of the coefficients in an expansion with respect to the corresponding instantaneous eigenstates is natural molmer (); blume ().
Figure 1: (a) The three lowest eigenenergies of the symmetric (0,2,4) and antisymmetric (1,3,5) eigenstates of the relative motion for two particles with increasing interaction strength $g$. (b) The energy difference $\epsilon_{n+2} - \epsilon_n$ between successive even eigenstates of the relative motion with increasing $g$.
In Fig. 1(a) we show the lowest lying eigenenergies as functions of $g$. For $g = 0$ the eigenspectrum of $H_{rel}$ is the harmonic oscillator single-particle spectrum with the equidistant eigenenergies $\epsilon_n = n + \frac{1}{2}$. As $g$ increases the odd levels are unaffected, since they possess a node at the coordinate origin, while the even states acquire an increasing energy and a dip at $r = 0$. Therefore, as we observe in Fig. 1(a), each even level approaches energetically the next upper odd level, forming a doublet spectrum characteristic for double-well potentials. In the limit $g \to \infty$, where the 'barrier height' is infinite, the even levels become degenerate with the odd ones. This limit is the so-called Tonks-Girardeau limit, where the bosons are mapped to non-interacting fermions girardeau (). The Tonks-Girardeau gas is one of the most fundamental systems appearing exclusively in one-dimensional many-body systems. The transition to 'fermionization' for two atoms has been recently reported also experimentally selim (), enhancing the interest in few-body studies like the present one.
The energy difference between two even levels as a function of the interaction strength, shown in Fig. 1(b), also plays a crucial role for the dynamics. In fact, since the initial preparation is in the ground state, which corresponds to an even state of the relative motion, the dynamics can only lead to a population of other even states, and the corresponding energy distance is crucial for the time-evolution. For the two limits of zero interaction ($g = 0$) and the Tonks-Girardeau gas ($g \to \infty$) the gap between two even parity levels is $2$ (in units of $\hbar\omega_{\parallel}$). Starting from $g = 0$ and increasing $g$, this value slightly decreases [see Fig. 1(b)], since the states with larger quantum number possess a lower probability density at $r = 0$ and are therefore less affected by the contact interaction. The slope at $g = 0$ reads (first-order perturbation theory):

\left.\frac{\partial \epsilon_n}{\partial g}\right|_{g=0} = \frac{1}{\sqrt{2}}\,|\phi_n(0)|^2 .
The response to a minor increase of $g$ therefore depends on the value of the harmonic oscillator eigenstates at $r = 0$, which decreases with $n$ [see Fig. 1(a)]. Therefore at the onset of the interactions the successive even states tend to approach each other energetically. Fig. 1(b) shows the effect of an increasing coupling strength on the distance between the first two even levels $\epsilon_0$ and $\epsilon_2$ (red line). The most rapid change of this energy gap occurs at small $g$, with a minimum at intermediate coupling, and it asymptotically approaches the value $2$ for $g \to \infty$. The slope decreases as $g$ increases [see Fig. 1(a)], and from some value of $g$ on, the energy gap starts to increase again and asymptotically approaches the value of the non-interacting system. Additionally, since $|\phi_n(0)|^2$ decreases with increasing $n$, the deviation of $\epsilon_2 - \epsilon_0$ from the value $2$ shown in Fig. 1(b) (red line) is the largest possible such deviation between two successive (even) states. This is exemplarily shown in the same figure by the distance between the next pair of successive states $\epsilon_2$ and $\epsilon_4$ (green line). It is obvious that the $\epsilon_4 - \epsilon_2$ energy gap is always larger than the $\epsilon_2 - \epsilon_0$ gap, and this holds analogously also for gaps between higher lying neighboring states.
The above discussed features of the energy spectrum and the corresponding gaps possess a crucial impact on the dynamics, which we will discuss later. As $g$ changes with time according to Eq. (3), different regions of $g$-values in Fig. 1 are probed according to the choice of the parameters $g_0$ and $g_A$. We might already foresee, e.g., that a driving around small $g$-values possesses a greater impact than a driving at larger $g$-values, since the corresponding slope is larger, subsequently leading to larger energy variations. The equidistance of the spectrum close to the two extreme limits, as well as the decrease of the gaps at small to intermediate values of $g$, will also be of great importance concerning a possible resonant behaviour (see Sections IV and V).
We note that Fig. 1 is based on the exact eigenenergies obtained from the transcendental equation busch ():

g = -2\sqrt{2}\;\frac{\Gamma\!\left(\frac{3}{4} - \frac{E}{2}\right)}{\Gamma\!\left(\frac{1}{4} - \frac{E}{2}\right)}.

The corresponding even eigenstates are given analytically in the form of parabolic cylinder functions. For the numerical calculations of the time-dependent evolution of the system we use a regularized delta-function of the form $\delta_{\sigma}(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-x^2/2\sigma^2}$ with a width $\sigma$ which is small enough to capture the 'delta-like' behavior but convenient for a numerical grid sampling. Minor numerical deviations of the eigenstates and the quantum evolution stemming from this approximation are unavoidable.
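The even-level energies of Fig. 1 can be reproduced by solving this transcendental equation branch by branch. A minimal sketch (it assumes the equation as reconstructed above; energies in units of $\hbar\omega_{\parallel}$, with each even level lying between the unperturbed levels $2k + 1/2$ and $2k + 3/2$):

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def coupling(E):
    """g as a function of the even-level energy E of the relative motion."""
    return -2.0 * np.sqrt(2.0) * gamma(0.75 - E/2.0) / gamma(0.25 - E/2.0)

def even_levels(g, n_levels=3):
    """Each even level sits on the branch 2k + 1/2 < E < 2k + 3/2."""
    eps = 1e-9
    return [brentq(lambda E: coupling(E) - g, 2*k + 0.5 + eps, 2*k + 1.5 - eps)
            for k in range(n_levels)]

for g in (0.1, 1.0, 4.0, 25.0):
    print(g, np.round(even_levels(g), 4))
# As g grows the even levels approach the odd ones (E -> 2k + 3/2),
# forming the doublet structure of Fig. 1(a).
```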
IV Resonant controllable excitation
The wave packet of the relative motion at $t = 0$ corresponds to the ground state of $H_{rel}$ in Eq. (4) with $g = g_0$. This is an even state, and since parity is conserved during the time evolution only even parity states can be occupied, which relates to the bosonic permutation symmetry. In this work we are interested mainly in two quantities:
• the population $P_n(t) = |\langle \Phi_n(g(t))\,|\,\Psi(t)\rangle|^2$ of the instantaneous eigenstates at a certain time $t$, where $\Psi(t)$ is the wave function of the relative motion (a numerical sketch of this diagnostic follows below). We use $\hbar = m = \omega_{\parallel} = 1$ without loss of generality.
• the time evolution of the expectation value of the energy $E(t) = \langle\Psi(t)|H_{rel}(t)|\Psi(t)\rangle$. We refer only to the relative energy, since the center of mass is completely untouched by the change of $g$.
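The following sketch (illustrative parameters, not the production code of this work) propagates the relative-motion wave function under the driving law of Eq. (3) with a Crank-Nicolson scheme on a grid, and projects it onto the instantaneous eigenstates obtained by diagonalizing the momentary grid Hamiltonian:

```python
import numpy as np

# Grid for the relative coordinate r (hbar = m = omega = 1)
N, L = 256, 20.0
r = np.linspace(-L/2, L/2, N)
dr = r[1] - r[0]
sigma = 2 * dr
delta = np.exp(-r**2 / (2*sigma**2)) / (np.sqrt(2*np.pi) * sigma)  # regularized delta

def H(g):
    """Grid Hamiltonian of the relative motion with barrier strength g/sqrt(2)."""
    T = -0.5 * (np.diag(np.ones(N-1), 1) - 2*np.eye(N)
                + np.diag(np.ones(N-1), -1)) / dr**2
    return T + np.diag(0.5 * r**2 + (g/np.sqrt(2)) * delta)

g0, gA, w = 0.0, 0.2, 2.0                 # illustrative driving parameters
g = lambda t: g0 + gA * np.sin(w*t/2.0)**2

_, U0 = np.linalg.eigh(H(g0))
psi = U0[:, 0].astype(complex)            # start in the ground state at g0

dt, steps = 0.005, 4000
for k in range(steps):
    Hk = H(g(k*dt))
    # Crank-Nicolson step: (1 + i dt/2 H) psi_new = (1 - i dt/2 H) psi_old
    psi = np.linalg.solve(np.eye(N) + 0.5j*dt*Hk, (np.eye(N) - 0.5j*dt*Hk) @ psi)
    if k % 800 == 0:
        E, U = np.linalg.eigh(Hk)
        P = np.abs(U[:, :6].conj().T @ psi)**2   # populations of the lowest states
        print(f"t = {k*dt:5.1f}   P_0..P_5 = {np.round(P, 3)}")
```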
Figure 2: (a)-(c) Population of instantaneous eigenstates as a function of time for three different sets of the driving parameters ($g_0$, $g_A$, $\omega$). (d) Profile of the resonances via the minimal occupation of the ground state with varying driving frequency; several cases are shown for different values of $g_0$ and $g_A$.
We start our investigation in the regime of small driving amplitudes, where the excitation dynamics is to a larger extent controllable. The main role in this regime is played by the driving frequency. If this frequency is much lower than the gap between two even states ($\omega \ll \epsilon_2 - \epsilon_0$) then we are in the adiabatic regime and the evolution involves only the momentary ground state. Approaching the first resonance $\omega \approx \epsilon_2 - \epsilon_0$ from below induces an excitation to the state $\Phi_2$ of the relative motion. This is shown for a typical case in Fig. 2(a): the ground state loses population while the second excited state gains, and the next even level $\Phi_4$ remains almost unpopulated. For the particular case $g_0 = 0$, though, since the eigenspectrum is initially completely equidistant, an excitation to the next level is not prohibited, corresponding to a two-step process. Therefore we see in Fig. 2(b) that a bit closer to the resonance the level $\Phi_4$ gains population after the level $\Phi_2$ does so. This is also in general the case for larger amplitudes, where multiple excitations are enhanced, as we will see later on. Therefore, while the system departs (in this case completely) from the ground state, the first collective excitation to the state $\Phi_2$ is necessarily combined with a transfer of population to the next level, and a controllable excitation exclusively to the second level (complete depopulation of the ground state and complete population of the second excited level) is not possible here. It becomes possible, though, if the initial interaction is stronger: for an initial coupling in the intermediate regime we see in Fig. 2(c) a complete transfer of population to the first excited even level. Therefore, the non-harmonicity in the spectrum due to the initial correlations is helpful from the point of view of a controllable state excitation and preparation. Let us note that for a controllable creation of such states one should choose a small amplitude $g_A$, since larger amplitudes easily lead to multiple occupation of excited states due to the close-to-equidistant spectrum. Additionally, the driving frequency should be carefully tuned close to the corresponding energy spacing of the spectrum; for intermediate initial coupling this frequency lies somewhat lower than the resonant frequency of the non-interacting situation, reflecting the decreased gap in the energy spectrum [see Fig. 1(b)].
As an overview of the resonances in this regime we present in Fig. 2(d) the minimal occupancy of the instantaneous ground state as a function of the driving frequency for several small amplitudes of the driving. We observe that far from resonance the minimal occupancy stays close to one, so there is hardly any excitation, as expected. The frequency where the ground state becomes at a certain time completely unoccupied corresponds to the resonance. The resonant frequency is slightly shifted to lower values for a larger amplitude of the driving, which is attributed to the decrease of the energy gap in the spectrum [see Fig. 1(b)] as $g(t)$ covers larger regions of the coupling strength. We also observe that for intermediate initial coupling the resonant frequency is shifted to much lower values than for $g_0 = 0$, approximately corresponding to the energy gap of the levels at that coupling. For this case, though, small changes in the driving amplitude do not shift the position of the resonance significantly, since the energy spacing in the vicinity of this value of $g$ does not change substantially [see Fig. 1(b)]. However, the most important difference between the two cases of different initial coupling strength is the fact that the case $g_0 = 0$ easily leads to multiple excitation of higher states, since the energy gaps are all close to the same value, corresponding to the same resonant frequency, while for larger $g_0$ the energy spacing is not equidistant and therefore a complete controllable transfer to a certain state is possible.
A comment on the robustness of the initial state preparation is in order here. Apart from being easily excitable to different instantaneous states, the initially non-interacting ensemble is in general more sensitive to the driving of the interaction. Even far from resonance the evolution of the initial state with $g_0 = 0$ leads to an appreciable change in energy, while for stronger and in particular intermediate interactions, as well as close to the fermionization regime, it is ten orders of magnitude lower. This is understandable if one inspects the slopes of the energy curves in Fig. 1(a), which are much larger for small values of $g$. This could serve as a signature for the detection of highly correlated ensembles like the Tonks-Girardeau gas, i.e., by studying their response to changes of the interaction strength.
Figure 3: Occupation of the adiabatic eigenstates of the Hamiltonian close to (a) the second resonance and (b) the first resonance.
Not only an excitation to the first excited state of the relative motion is possible, but also to other excited states, if the resonant frequency is chosen correspondingly, as we can see in Fig. 3(a). As expected, though, the resonance width decreases for transitions to higher states. The effect of the initial coupling and of the amplitude is similar to the previous case.
For larger amplitudes the controllability of the excitation process is reduced, as many states are subsequently excited and simultaneously take part in a complex time evolution. A typical example is presented in Fig. 3(b). Still, the frequency plays the dominant role, and only close to resonances does the evolution lead to highly excited states of the spectrum. Mechanisms of acceleration appear then, which we will discuss in the following section. We would like to note here that a large amplitude $g_A$ offers the possibility of 'multi-photon excitations': in this case even low frequencies which are an integer fraction of the principal resonant frequency render excitations possible.
V Acceleration via multiple excitation and Floquet analysis
We will now more thoroughly examine the case of strong driving, which makes it possible, as we will show, to accelerate the particles, i.e., to increase the mean value of the energy with time. The process of multiple excitations, as we have seen, is possible close to resonances, since the spectrum is approximately equidistant. Especially a larger value of $g_A$ leads to a covering of wider areas of the energy gaps, and therefore the comparatively small differences between the gaps effectively drain away. Through this multiple excitation process, the system never returns completely to the ground state, and instead gradually occupies increasingly higher lying states. This excitation process induces an increase of the energy to very high values as long as the gaps to higher excited states are in the resonance window.
Figure 4: Time evolution of the expectation value of the energy for strong driving and several frequencies close to resonance; acceleration occurs close to the resonant frequency.
We present in Fig. 4 the time evolution of the expectation value of the energy close to the first resonance for strong driving. For values of the frequency sufficiently far from resonance, a repopulation of the ground state in the course of time can be observed, while a multi-mode behavior is encountered due to an excitation of several states. Approaching the resonance, the instantaneous ground state is never repopulated significantly; on the contrary, higher states of the spectrum are subsequently populated in the same manner as shown in Fig. 3(b). This leads to an acceleration, i.e., an energy gain of the particles. Our finite-time simulations indicate that this energy gain approaches saturation at very high values of the energy for strong but finite driving amplitudes.
Let us analyze this resonant mechanism from the perspective of Floquet theory, which has been developed for time-periodic Hamiltonians. The Floquet Ansatz for the time-dependent wave packet reads:

\Psi(r,t) = \sum_{\alpha} c_{\alpha}\, e^{-i\epsilon_{\alpha} t}\, \Phi_{\alpha}(r,t), \qquad \Phi_{\alpha}(r,t) = \sum_{n=-\infty}^{\infty} e^{-i n \omega t}\, \phi_{\alpha,n}(r),

where the $\Phi_{\alpha}$ are the so-called Floquet eigenstates or quasi-energy states, time-periodic functions expanded here into a Fourier series, and the $\epsilon_{\alpha}$ are the quasienergies. Introducing this Ansatz into the Schrödinger equation for the Hamiltonian in Eq. (4) we find:

\left(H_{rel}(t) - i\,\frac{\partial}{\partial t}\right)\Phi_{\alpha} = \epsilon_{\alpha}\,\Phi_{\alpha}.

The operator on the l.h.s. of this equation is the Floquet Hamiltonian, which has diagonal terms, i.e., proportional to $n\omega$, and off-diagonal terms coupling the different Fourier modes $\phi_{\alpha,n}$, $\phi_{\alpha,n'}$. To find the quasi-energy states and eigenvalues one should solve this eigenvalue problem by diagonalizing the corresponding Hamiltonian. The harmonic oscillator basis is very convenient to express the Floquet Hamiltonian in our particular problem, not only because the harmonic part of the potential is diagonal with the harmonic oscillator eigenvalues as matrix elements, but also because the matrix elements of the delta barrier are of the simple form $\phi_m(0)\,\phi_n(0)$. An alternative method, equally well-suited for our problem, is to write the Floquet eigenvectors in terms of parabolic cylinder functions abramowitz (), which are the solutions of the underlying stationary problem busch (). In this representation the values and derivatives of the parabolic cylinder functions at the origin obey simple recursion relations; integrating the Floquet eigenvalue equation and using these relations yields a recursive system of algebraic equations for the expansion coefficients. Demanding that this system possesses solutions we obtain the values of the quasienergies. We have used both methods to derive the eigenspectrum of the Floquet Hamiltonian for our problem and confirmed their agreement, crosschecking the results and thereby the numerical convergence.
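A minimal sketch of the first route (illustrative truncations; it assumes the driving law of Eq. (3), whose Fourier components are the constant $g_0 + g_A/2$ and $-g_A/4$ at $e^{\pm i\omega t}$):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

M, K = 40, 12                 # oscillator states and Fourier modes n = -K..K
g0, gA, w = 0.0, 0.5, 2.0     # illustrative driving parameters

def phi_at_zero(n):
    """Value of the n-th harmonic oscillator eigenfunction at the origin."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(0.0, c) / sqrt(2.0**n * factorial(n)) / pi**0.25

v0 = np.array([phi_at_zero(n) for n in range(M)])   # vanishes for odd n
V = np.outer(v0, v0) / sqrt(2.0)      # delta barrier, strength 1/sqrt(2) per unit g
H0 = np.diag(np.arange(M) + 0.5)      # harmonic oscillator energies

Havg = H0 + (g0 + gA/2.0) * V         # time-averaged Hamiltonian
Vdrive = -(gA/4.0) * V                # coefficient of the e^{+-i w t} terms

D = 2*K + 1
F = np.zeros((D*M, D*M))
for j, n in enumerate(range(-K, K + 1)):
    F[j*M:(j+1)*M, j*M:(j+1)*M] = Havg + n*w*np.eye(M)
    if j + 1 < D:                     # coupling between neighboring Fourier modes
        F[j*M:(j+1)*M, (j+1)*M:(j+2)*M] = Vdrive
        F[(j+1)*M:(j+2)*M, j*M:(j+1)*M] = Vdrive

quasi = np.linalg.eigvalsh(F)
# Fold the quasienergies into one 'Brillouin zone' [-w/2, w/2)
print(np.sort(((quasi + w/2) % w) - w/2)[:10])
```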
Figure 5: Floquet eigenspectrum for (a) increasing driving frequency at fixed driving parameters and (b) increasing initial interaction strength at the resonant frequency.
We show in Fig. 5(a) the eigenspectrum of the Floquet Hamiltonian, i.e., the Floquet quasi-energies, with increasing driving frequency for a typical driving amplitude. In the dense spectrum of quasi-energies we encounter points of accumulation when the frequency is at resonant values. At these points, many Floquet quasi-energies take a value close to the harmonic oscillator eigenenergies and form Floquet bands. The resonances at even frequencies correspond to accumulation points separated by the even-level energy gap, i.e., at quasi-energy values close to the even harmonic oscillator levels.
Let us now show how these results obtained for the Floquet Hamiltonian are connected with the quantum acceleration processes at quantum resonances. In dynamical systems a pure point Floquet spectrum is associated with localized behavior, and the energy remains bounded at all times, while singularly continuous components are responsible for diffusive behavior and growth of the energy (see Ref. gardiner and references therein). At the resonant driving frequencies, as we have seen above, the quasi-energies accumulate and approach particular values, forming close-to-continuous areas. This property of the spectrum leads to the acceleration and energy gain.
Moreover, in Fig. 5(b) we see that with increasing initial interaction strength, at the resonant frequency, the eigenenergies of the Floquet spectrum deviate from each other, and only come close again, shifted to the next upper level, as we approach the fermionization limit. This deviation from the accumulation points makes the evolution less diffusive for intermediate interactions, and corresponds to the picture of the instantaneous spectrum, where the energy gaps for intermediate values of $g$ become less equidistant, prohibiting multiple excitation.
VI Higher atom numbers and finite-size effects on collective excitations (breathing mode)
Figure 6: The lowest eigenenergies with increasing interaction strength for (a) three and (b) four bosons.
The extensive analysis of the two-body case above is very useful for an understanding of the effects occurring for higher atom numbers. The main reason for this are the properties of the many-body spectrum of the harmonic oscillator including the delta-type interaction. We present in Fig. 6 the energetically low-lying part of the spectrum for three and four particles with increasing interaction strength [see also Ref. sascha ()]. We observe that all states show a quite similar evolution, and thus the energy gaps between them do not deviate significantly (crossings or anti-crossings do not occur). Also important is that most of the states correspond to excitations of the center of mass, which are not relevant to our study. For example, the first excited state and one of the two states of the second excited band, which behave exactly like the ground state, correspond to the ground state of the relative motion combined with the first and the second excitation of the center of mass motion, respectively.
The time-dependent variation of the interaction strength, which affects only the relative motion, offers the possibility of a controllable excitation to specific states also for higher atom numbers, since states like those mentioned above, which correspond to excitations of the center of mass, do not contribute to the time evolution and are therefore avoided. We demonstrate this in Fig. 7(a) for the case of three particles and an initial interaction strength in the intermediate regime, where an excitation predominantly to the lower state of the second excited band of eigenstates (corresponding to an excitation of the relative motion) is performed [see Fig. 6(a)]. We denote here by numbers the energetically ordered states which correspond to a respective excitation of the many-body relative motion. The resonant frequency for the excitation from state 0 to state 2 will be referred to as the principal resonance in the following.
Figure 7: (a) Population of instantaneous eigenstates as a function of time for three particles driven close to resonance. (b) Population of the instantaneous ground state for different numbers of particles in a non-resonant case.
The above signifies that higher numbers of particles allow for a similar control of their dynamics as in the case of two particles, which is mainly due to the similarities of the underlying energy spectra. One important difference here is that for equal parameters, systems with larger atom numbers experience a stronger impact of the driving than those with a lower atom number. This can be seen from the maximum loss of population of the instantaneous ground state with time in Fig. 7(b) for a non-resonant case, which gets larger with an increasing number of particles. An explanation for this atom-number-related effect is the response of the ground state energy to the variation of $g$:

\left.\frac{\partial E_0}{\partial g}\right|_{g=0} = \frac{N(N-1)}{2}\int |\phi_0(x)|^4\, dx = \frac{N(N-1)}{2\sqrt{2\pi}},

where we see that the slope of the total energy at $g = 0$ increases quadratically with the particle number. Therefore a variation of $g$ possesses a greater impact for higher atom numbers.
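This pair-counting argument is quick to verify numerically. A sketch (first-order perturbation theory in our oscillator units; the integral value 1/sqrt(2 pi) follows from the Gaussian ground state):

```python
import numpy as np
from scipy.integrate import quad

# phi_0(x): harmonic oscillator ground state (hbar = m = omega = 1)
phi0 = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2.0)

# overlap integral entering first-order perturbation theory for the contact term
I, _ = quad(lambda x: phi0(x)**4, -10, 10)
print(I, 1/np.sqrt(2*np.pi))          # both ~0.39894

# slope of the ground-state energy at g = 0 grows with the number of pairs
for N in range(2, 6):
    print(N, N*(N-1)/2 * I)
```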
Another important observation concerning the size of the system is the position of the principal resonance in the regime of intermediate interactions. We have seen already that for intermediate initial coupling the resonant frequency is lower than for very weakly or strongly interacting initial states. This frequency becomes even lower as the number of particles increases, which is based on the decreasing energetical spacing in the corresponding many-body spectra with increasing particle number. Consequently, a lower frequency is needed at larger atom numbers for the corresponding resonant excitation via driving of the interaction strength.
The above observation is important for modes of collective oscillation of the wave function, in analogy to the macroscopic collective oscillations pethick_smith (). Many measurements in experiments are based on exciting collective modes, which are usually analyzed within effective mean-field descriptions stringari (). The frequencies of these oscillations can be obtained with a high accuracy from the experimental data, usually by observing the size or the mean position of the condensate moritz (); haller (), and represent a very important measure for identifying different interaction regimes. For example, in one dimension the Thomas-Fermi limit, or the Tonks-Girardeau and super-Tonks-Girardeau gas, possess a very characteristic ratio between the so-called dipole mode, which is an oscillation of the center of mass of the condensate with the trap frequency $\omega_{\parallel}$, and the breathing or first compression mode, where essentially the size of the condensate oscillates with a frequency $\omega_B$. For the effectively non-interacting limits (ideal and Tonks-Girardeau gas) this ratio is $\omega_B/\omega_D = 2$, while for the Thomas-Fermi limit it is $\sqrt{3}$ stringari (). These important theoretical results have been confirmed in the corresponding experiments moritz (); haller ().
Figure 8: One-body density for the case of three particles as in Fig. 7 for several snapshots in time.
From the few-body perspective which we examine here, the mode of oscillation that we excite by varying the scattering length is of a compressional, breathing type. We demonstrate this in Fig. 8, where we show several snapshots of the one-body density at characteristic time moments. The excitation to the second state of the spectrum, which corresponds to an excitation of the relative motion (and therefore to a broader wave function), possesses therefore the characteristics of a breathing mode. The characteristic frequency of such a mode in the intermediate interaction regime can be compared with the mean-field result for the Thomas-Fermi regime. For non-interacting cases we confirm the ratio $\omega_B/\omega_D = 2$. The Thomas-Fermi regime applies to large ensembles of particles, and the analogy with the finite system examined here can be at most indicative. Nevertheless, in one-dimensional systems we can take the minimum value of the energy gap, which appears at intermediate coupling, as corresponding to the Thomas-Fermi regime (see also corresponding arguments in yiannis ()). In doing so, we can reveal finite-size corrections to the breathing-mode ratio of the Thomas-Fermi limit: the ratio decreases from two particles privdis () towards five particles, with a further tendency to decrease with increasing particle number. In the macroscopic limit we should approach the mean-field estimate $\sqrt{3}$ stringari ().
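In a grid simulation, the few-body breathing frequency can be read off from the oscillation of the cloud width after a small quench of the coupling. A sketch for the two-body relative motion (illustrative quench values; it reuses the grid Hamiltonian of the earlier sketch):

```python
import numpy as np

N, L = 256, 20.0
r = np.linspace(-L/2, L/2, N); dr = r[1] - r[0]
sigma = 2 * dr
delta = np.exp(-r**2 / (2*sigma**2)) / (np.sqrt(2*np.pi) * sigma)

def H(g):
    T = -0.5 * (np.diag(np.ones(N-1), 1) - 2*np.eye(N)
                + np.diag(np.ones(N-1), -1)) / dr**2
    return T + np.diag(0.5 * r**2 + (g/np.sqrt(2)) * delta)

g_i, g_f = 1.0, 1.2                    # small quench of the coupling (illustrative)
_, U = np.linalg.eigh(H(g_i))
psi0 = U[:, 0].astype(complex)

E, V = np.linalg.eigh(H(g_f))          # propagate exactly in the g_f eigenbasis
c = V.conj().T @ psi0
dt, steps = 0.05, 4096
r2 = np.array([np.real(np.sum(r**2 * np.abs(V @ (c * np.exp(-1j*E*k*dt)))**2))
               for k in range(steps)])

freqs = 2*np.pi * np.fft.rfftfreq(steps, dt)
spec = np.abs(np.fft.rfft(r2 - np.mean(r2)))
print("breathing frequency ~", freqs[np.argmax(spec[1:]) + 1])
# close to 2 in the limits g -> 0 and g -> infinity, with a dip at intermediate g,
# mirroring the even-level gap of Fig. 1(b)
```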
This interrelation of the collective mode frequencies and their finite size deviations within the few-body spectrum are probably valuable for further studies on connecting, checking and reinterpreting mean field results in the light of the exact many-body spectrum and behaviour.
VII Conclusion
We examined the effects of a periodically driven interaction strength on ultracold bosonic systems in a one-dimensional harmonic trap, which can be realized by a time modulation of magnetic fields utilizing Feshbach resonances or by periodically changing the transversal confinement length in a waveguide via laser fields. We have shown that the feature of near-equidistant energy levels for arbitrary interaction strength, for the relative motion of two particles in a harmonic trap but also for the corresponding many-body spectrum of larger atom numbers, has important consequences for the excitation dynamics. In particular, the energetical spacing yields resonant driving frequencies which one can employ to excite particular states of the bosonic relative motion. We underline that, unlike other driven many-body systems, the variation of the interaction strength offers the possibility to design excitations of the relative motion of a certain species exclusively, with a very high degree of controllability for certain regimes of the driving parameters. Approaching the resonant frequencies, the atoms are excited to the corresponding excited level of the relative motion, which has been demonstrated here by calculating the occupation of instantaneous eigenstates. For strong driving amplitudes in the vicinity of the resonances the energy reaches out to very high values, with several successive excitations to energetically higher-lying states of the spectrum. This multi-excitation process of acceleration is also analyzed via the properties of the Floquet spectrum. The initial interaction strength distorts the energy spectrum, shifting the position of the resonances, while highly correlated initial states are quite insensitive with respect to changes of the repulsive interaction. We have shown via our exact numerical calculations that for any number of particles there is a similar, and for larger ensembles even more sensitive, response to the driving of the interaction strength, leading to higher excitation amplitudes. Effects due to the finite size of the system are also analyzed from the perspective of collective oscillation modes, and especially the analogue of the macroscopic breathing mode is established, thereby discussing similarities to and deviations from mean-field approaches. Especially the two-body problem discussed here represents a case of interest for the experiments on distinguishable fermions in a harmonic trap friedhelm (); selim (). Interesting outlooks are the exploration of different driving laws, including the possibility to alternate between repulsive and attractive interactions, or the use of different potential landscapes.
The authors acknowledge many fruitful discussions with F.K. Diakonos concerning the Floquet analysis and thank Hans-Dieter Meyer for helpful discussions and comments. Financial support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
Appendix A Computational Method
Treating time-dependent Hamiltonians with many interacting degrees of freedom is a computationally very demanding problem. In this work all numerical calculations rely on the Multi-Configurational Time-Dependent Hartree (MCTDH) method mctdhbook (); meyer90 (); beck00 (), primarily a wave-packet dynamical tool known for its outstanding efficiency in high-dimensional applications. The underlying idea of MCTDH is to solve the time-dependent Schrödinger equation

$i\hbar\,\partial_t \Psi(x_1,\ldots,x_f,t) = H\,\Psi(x_1,\ldots,x_f,t)$

as an initial-value problem by an expansion in terms of direct (or Hartree) products:

$\Psi(x_1,\ldots,x_f,t) = \sum_J A_J(t)\,\Phi_J(x_1,\ldots,x_f,t), \qquad \Phi_J = \prod_{\kappa=1}^{f} \varphi^{(\kappa)}_{j_\kappa}(x_\kappa,t),$

using a convenient multi-index notation for the configurations, $J = (j_1,\ldots,j_f)$, where $f$ denotes the number of degrees of freedom and $j_\kappa = 1,\ldots,n_\kappa$. The single-particle functions $\varphi^{(\kappa)}_{j}$ are in turn represented in a fixed, primitive basis implemented on a grid. For indistinguishable particles, as in our case, the single-particle functions for each degree of freedom are of course identical in both type and number ($\varphi^{(\kappa)}_j = \varphi_j$, with $n_\kappa = n$).
In the above expansion, both the coefficients $A_J$ and the Hartree products $\Phi_J$ are time-dependent. Using the Dirac-Frenkel variational principle, one can derive equations of motion for both. This conceptual complication offers an enormous advantage: the basis $\{\Phi_J(t)\}$ is variationally optimal at each time $t$, allowing us to keep it fairly small. Exactly for this reason MCTDH is ideally suited to time-dependent Hamiltonians such as the driven system we tackle here. The permutation symmetry can be enforced by symmetrizing the coefficients $A_J$, but the ground state is automatically of bosonic character.
In addition, the Heidelberg MCTDH package mctdh:package () incorporates the so-called relaxation method, which provides a way to obtain the lowest eigenstates of the system by propagating some wave function $\Psi(0)$ with the non-unitary operator $e^{-H\tau}$ (propagation in imaginary time $\tau = it$). As $\tau \to \infty$, this automatically damps out any contribution but that stemming from the true ground state $\phi_0$,

$e^{-H\tau}\,|\Psi(0)\rangle = \sum_n e^{-E_n \tau}\,|\phi_n\rangle\langle\phi_n|\Psi(0)\rangle \;\to\; e^{-E_0 \tau}\,|\phi_0\rangle\langle\phi_0|\Psi(0)\rangle.$
In practice, one relies on a more sophisticated scheme termed improved relaxation meyer03 (). Here the energy expectation $\langle\Psi|H|\Psi\rangle$ is minimised with respect to both the coefficients $A_J$ and the configurations $\Phi_J$. The equations of motion are solved iteratively, first for the coefficients (by diagonalisation of the Hamiltonian matrix represented in the basis of the momentarily fixed configurations) and then propagating the single-particle functions in imaginary time over a short period. The cycle is then repeated. This method is used in the present work to compute the eigenstates and the spectrum of our few-boson systems.
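As a rough illustration of the relaxation idea (not of MCTDH itself, which propagates a multi-configurational ansatz), the following minimal Python sketch applies split-step imaginary-time propagation to a single particle in a 1D harmonic trap; the grid and step parameters are arbitrary choices for the sketch:

    import numpy as np

    # Imaginary-time relaxation for one particle in a 1D harmonic trap
    # (hbar = m = omega = 1). Repeated application of exp(-H*dtau) filters
    # out everything but the ground state; psi must be renormalized because
    # the step is non-unitary.
    n, L = 256, 20.0
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)
    V = 0.5*x**2
    dtau = 0.01

    psi = np.exp(-(x - 1.0)**2)        # arbitrary start: displaced Gaussian
    expV = np.exp(-0.5*dtau*V)         # half-step potential factor
    expT = np.exp(-dtau*0.5*k**2)      # full-step kinetic factor (Fourier space)

    for _ in range(5000):
        psi = expV*psi
        psi = np.fft.ifft(expT*np.fft.fft(psi))
        psi = expV*psi
        psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)   # renormalize

    # Energy estimate <H>; should approach the exact value 0.5
    T = np.real(np.sum(np.conj(psi)*np.fft.ifft(0.5*k**2*np.fft.fft(psi)))*dx)
    U = np.real(np.sum(V*np.abs(psi)**2)*dx)
    print("E0 ~", T + U)   # ~0.5 in oscillator units

Each non-unitary step shrinks an excited-state component by a factor $e^{-(E_n - E_0)\Delta\tau}$ relative to the ground state, which is why the renormalized wave function converges to the lowest eigenstate.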
As it stands, the effort of this method scales exponentially with the number of degrees of freedom $f$: using $n$ orbitals per degree of freedom requires $n^f$ configurations. This restricts our analysis in the current setup to small particle numbers, depending on how decisive correlation effects are. We are therefore limited, in terms of numerical convergence, in the number of particles we can treat for large variations of the interaction strength. The number of orbitals needed for a time evolution of our driven system also depends on how prominent the excitations are, i.e., how close to resonance the driving frequency is. The latter fact sets limitations on examining the long-time dynamics, where the behaviour involves many excitations rather than simple oscillations. By contrast, the dependence on the primitive basis, and thus on the grid points, is not as severe. In our case, the grid spacing should of course be small enough to sample the interaction potential. The length of the grid should also be chosen sufficiently large, especially in cases where the driving frequency approaches resonances and the time-dependent wave function covers a very wide coordinate range.
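A quick back-of-the-envelope illustration of this exponential growth in the number of configurations (the orbital counts and particle numbers below are illustrative, not those of the present calculations):

    # Configurations needed: n orbitals per degree of freedom,
    # f degrees of freedom -> n**f configurations in total.
    for f in (2, 5, 8):
        for n_orb in (4, 8, 12):
            print(f"f={f}, n={n_orb}: {n_orb**f:>12d} configurations")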
• (1) C. J. Pethick and H. Smith, Bose-Einstein condensation in dilute gases (Cambridge University Press, Cambridge, 2008); L. Pitaevskii and S. Stringari, Bose-Einstein Condensation (Oxford University Press, Oxford, 2003).
• (3) C. Chin et al., Rev. Mod. Phys. 82, 1225 (2010).
• (4) T. Kinoshita, T. Wenger, and D. S. Weiss, Science 305, 1125 (2004);
• (5) B. Paredes et al., Nature 429, 277 (2004).
• (6) M. Girardeau, J. Math. Phys. 1, 516 (1960).
• (7) E. Haller et al., Science 325, 1224 (2009).
• (8) M. Olshanii, Phys. Rev. Lett. 81, 938 (1998).
• (9) H. Lignier, C. Sias, D. Ciampini, Y. Singh, A. Zenesini, O. Morsch, E. Arimondo, Phys. Rev. Lett. 99, 220403 (2007)
• (10) A. Zenesini, C. Sias, H. Lignier, Y. P. Singh, D. Ciampini, O. Morsch, R. Manella, E. Arimondo, A. Tomadin, S. Wimberger, New J. Phys. 10, 053038 (2008)
• (11) A. Eckardt, M. Holthaus, H. Lignier, A. Zenesini, D. Ciampini, O. Morsch, E. Arimondo Phys. Rev. A 79, 013611 (2009)
• (12) C. Sias, H. Lignier, Y. P. Singh, A. Zenesini, D. Ciampini, O. Morsch, E. Arimondo, Phys. Rev. Lett. 100, 040404 (2008)
• (13) H. Moritz, T. Stöferle, M. Köhl, T. Esslinger, Phys. Rev. Lett. 91, 250402 (2003).
• (14) P. G. Kevrekidis, G. Theocharis, D. J. Frantzeskakis, and Boris A. Malomed, Phys. Rev. Lett. 90, 230401 (2003)
• (15) P. G. Kevrekidis, V. V. Konotop, A. Rodrigues and D. J. Frantzeskakis, J. Phys. B 38, 1173 (2005)
• (16) S. K. Adhikari, Phys. Rev. A 66, 013611 (2002)
• (17) E. A. Donley, N. R. Claussen, S. T. Thompson and C. E. Wieman, Nature 417, 529 (2002)
• (18) S.E. Pollack et al., Phys. Rev. A 81, 053627 (2010)
• (19) S. Stringari, Phys. Rev. Lett. 77, 2360 (1996).
• (20) J. Gong, L. Morales-Molina and P. Hänggi, Phys. Rev. Lett. 103, 133002 (2009).
• (21) J. F. Bertelsen and K. Mølmer, Phys. Rev. A 73, 013811 (2006).
• (22) B. Borca, D. Blume and C. H. Greene, New J. Phys. 5 111 (2003).
• (23) M. Colome-Tatche and D. S. Petrov, Phys. Rev. Lett. 106, 125302 (2011).
• (24) T. Busch et al., Found. Phys. 28, 549 (1998).
• (25) F. Serwane, G. Zürn, T. Lompe, T. B. Ottenstein, A. N. Wenz and S. Jochim, Science 332, 6027 (2011);
• (26) G. Zürn, F. Serwane, T. Lompe, A. N. Wenz, M. G. Ries, J. E. Bohn and S. Jochim, arXiv:1111.2727
• (27) M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions (Dover, New York, 1972).
• (28) T. P. Billam and S. A. Gardiner, Phys. Rev. A 80, 023414 (2009).
• (29) S. Zöllner, H. D. Meyer and P. Schmelcher, Phys. Rev. A 75, 043608 (2007)
• (30) I. Brouzos and P. Schmelcher, arXiv:1108.1354
• (31) Private communication: The experiment in Heidelberg friedhelm (); selim () has measured up to for the excitation of the two-body relative motion.
• (32) H. D. Meyer, U. Manthe, and L. S. Cederbaum, Chem. Phys. Lett. 165, 73 (1990).
• (33) H.-D. Meyer, G. A. Worth, and F. Gatti, Multidimensional Quantum Dynamics: MCTDH Theory and Applications (Wiley-VCH, Weinheim, 2009).
• (34) M. H. Beck, A. Jäckle, G. A. Worth, and H. D. Meyer, Phys. Rep. 324, 1 (2000).
• (35) G. A. Worth, M. H. Beck, A. Jäckle, and H.-D. Meyer, The MCTDH Package, Version 8.4 (2007).
• (36) H.-D. Meyer and G. A. Worth, Theor. Chem. Acc. 109, 251 (2003).
|
091f27e9d4f93e8c |
Atomic Physics
What is Atomic Physics?
Context and scope
Atomic physics and allied subjects.
Atomic Physics is the branch of science that deals with the structure of the electron cloud within atoms. It regards the nucleus of the atom as a point charge of a certain mass, without making any assumptions about its structure, which is the subject of Nuclear Physics. At the other limit of its characteristic length scale, atomic physics is neighbour to Molecular Physics and, in a wider sense, Chemistry, i.e. the interaction of atoms with each other and the structure of the resulting molecular electron systems.
All of these subjects draw heavily on Quantum Mechanics for their theoretical foundation because the states electrons can occupy in an atom or molecule are different solutions of the Schrödinger equation. The transitions between these states can be exploited to measure the electronic structure (and therefore for chemical analysis) using a variety of experimental techniques belonging to the field of Spectroscopy. The related field of Optical Physics deals with techniques to modify the populations of various electronic states, resulting in applications such as lasers. Both are based on the interaction between electromagnetic radiation and electron systems.
The scope of atomic physics.
There are three different aspects to atomic physics: the electronic structure of atoms, spectroscopic techniques to measure electronic transitions between different states, and the variety of chemical elements represented by the periodic table and their properties, which are governed by their electronic structure.
By solving the Schrödinger equation for a quantum mechanical system such as an atom, we can determine which states an electron can occupy, and what the total energy of each state is. This gives rise to the shell structure of the atom, which had already been anticipated in the earlier Bohr model of the atom. Given distinct states with different energies, it follows that the excitation of an electron from one state to another requires a certain amount of energy, which can be supplied by a photon of the appropriate energy, $E=h\nu$. By sweeping through the electromagnetic spectrum, all possible transitions between energy levels can be mapped, including small splittings between levels of almost identical energy, i.e. the fine structure of the spectrum.
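As a small worked example of the relation $E = h\nu$ (the helper name below is hypothetical; the constants are the standard CODATA values), one can convert a transition energy into the wavelength of the emitted or absorbed photon:

    # Photon energy-wavelength conversion: E = h*nu = h*c/lambda
    h = 6.62607015e-34   # J s
    c = 2.99792458e8     # m/s
    eV = 1.602176634e-19 # J per eV

    def wavelength_nm(delta_E_eV):
        """Wavelength of the photon matching a transition energy in eV."""
        return h*c/(delta_E_eV*eV)*1e9

    # Example: the hydrogen n=3 -> n=2 (H-alpha) transition, ~1.89 eV
    print(wavelength_nm(1.89))   # ~656 nm, in the red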
While the states of the hydrogen atom can be calculated analytically, the complexities of other atoms containing more than one electron mean that approximations are required. In practice, layers of complexity are built up gradually, from hydrogen via helium and the alkali metals to the rest of the periodic table, using perturbation theory. This approach uses the known states from the simpler system and adds a perturbation to adapt them (and their energies) to the more complex situation.
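The following toy sketch illustrates the idea behind perturbation theory on a small matrix Hamiltonian (an assumed three-level example, not a real atom): to first order, the correction to each energy is just the diagonal element of the perturbation in the unperturbed basis.

    import numpy as np

    # Toy perturbation theory: H = H0 + lam*V on a small basis.
    rng = np.random.default_rng(0)
    H0 = np.diag([1.0, 2.0, 4.0])                  # known, solvable system
    V = rng.normal(size=(3, 3)); V = (V + V.T)/2   # Hermitian perturbation
    lam = 0.05

    E_exact = np.linalg.eigvalsh(H0 + lam*V)
    E_pert1 = np.diag(H0) + lam*np.diag(V)         # E_n ~ E_n(0) + lam*<n|V|n>
    print(np.sort(E_pert1))
    print(E_exact)                                 # close for small lam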
Probing atoms (and molecules)
Experimental techniques: imaging, scattering, diffraction.
Much of experimental physics consists of various interactions between light and matter. Broadly, three different types of interaction can be distinguished, with a host of experimental techniques within each of these groups:
Imaging is a measurement in real space (units: metres), where the capacity of a material to absorb light is utilised. The material absorbs some of the incident photons, resulting in a shadow representing the shape and internal structure of the object. Microscopy is one flavour of this technique, and so is tomography, where images are taken from different angles to reconstruct a 3D model of the structure.
Scattering is a measurement in reciprocal space (units: $\mathrm{m^{-1};}$ in practice usually $\mathrm{nm^{-1}}$ or $\mathrm{{\AA}^{-1}}$), where the deflection of rays due to refraction at internal surfaces is probed. This changes the direction of propagation of the wave front, i.e. the wave vector, $\vec{k}$, changes direction. Scattering data are usually plotted in terms of either scattering angle or momentum transfer from the incoming wave to the scattering object. Both are in the reciprocal space domain, but sometimes scattering data are analysed by Fourier transformation from reciprocal into direct (i.e. real) space.
Spectroscopy is a measurement in the energy domain (units: J; in practice often eV) or, equivalently, the wavelength or frequency domain. The absorption or emission of a sample is measured either by sweeping the wavelength of the source through an extended range, or by using a white (polychromatic) source and analysing the radiation leaving the sample, after the interaction has taken place, in terms of its energy. Absorption occurs when some of the incident photons are used to supply the energy for transitions between levels in the electronic structure of the material.
Of course the underlying physical processes exploited by these techniques all occur simultaneously whenever light and matter interact; it is merely a matter of choice of apparatus which type of interaction we choose to monitor and analyse.
The experimental side of atomic physics - atomic spectroscopy. |
162dd1ea06254639 | Centre for Advanced Study
at the Norwegian Academy of Science and Letters
Molecules in Extreme Environments
Active 2017/2018 Natural Sciences - Medicine - Mathematics
Chemistry is the science about matter — its stability, reactivity, transformations, interactions with external electromagnetic fields and radiation. However, we are mostly familiar with chemistry under conditions that can be realized on Earth. Under other conditions, chemistry changes in ways that cannot easily be predicted or understood from our experience with Earth-like chemistry.
For example, in the atmospheres of certain stellar objects such as rotating white dwarfs and neutron stars, extreme magnetic fields exist that cannot be generated on Earth. Knowledge about chemistry under such conditions can only be gained by performing advanced quantum-mechanical simulations, solving the Schrödinger equation for the electrons and nuclei that constitute matter.
Such calculations reveal an exotic, unfamiliar chemistry — molecules become squeezed and twisted, behaving in unexpected, fascinating ways. Even chemical bonding is affected — in a strong magnetic field, atoms are bound to one another by the rotation of the electrons, giving rise to molecules that do not exist on Earth.
In the project Molecules in Extreme Environments, we aim not only to understand chemistry under extreme conditions such as strong magnetic fields, extreme pressure, and intense laser pulses — we also aim to guide experimental work. Our work on magnetic bonding has already triggered experimental investigations on impurities in semiconductors, where similar effects may occur and lead to the design of materials with new properties. Likewise, a recent collaboration with astrophysicists aims to detect molecules in the atmospheres of white dwarfs, guided by quantum-mechanical simulations.
|
8df49b236766b64a | Schedule Jan 13, 2012
Quantum transport on carbon nanotori in nanodevices and metamaterials - from effective models to non-equilibrium Green's function methods.
Mark A. Jack (FAMU), Mario Encinosa (FAMU), John Williamson (FAMU), Adam Byrd (FAMU), Leon W. Durivage (Winona State U)
Graphene-based allotropes such as carbon nanorings hold the promise of completely new nanodevice and metamaterials applications due to the effects of magnetic flux and curvature on quantum transport on a nanoscale toroidal surface and the coherence of the resulting electromagnetic moments. Unique electronic and optical characteristics will emerge due to the compactification of the honeycomb lattice structure of a flat graphene sheet to a two-dimensional manifold with toroidal geometry. Additional modular symmetries are predicted to significantly impact the energy band structure and transport properties of physically distinct nanotori with different chiralities and dimensions, and thus to drastically reduce the number of spectrally distinct ring geometries. In addition to persistent-current and Aharonov-Bohm effects under magnetic flux, new electromagnetic field distributions such as a new toroidal moment will be generated by the ring currents. In a metamaterial consisting of a regular two- or three-dimensional lattice of these aligned nanoconstituents, a significant enhancement of these quantum signatures may be expected due to the coherence of the individual electromagnetic responses. In an effective model, the Hamiltonian for a single charge constrained to motion near a toroidal helix with loops of arbitrary eccentricity is developed, and the resulting three-dimensional Schrödinger equation is reduced to an effective one-dimensional form inclusive of curvature effects, in the form of two resulting effective curvature potentials. The magnitude of the toroidal moment generated by the current depends strongly on the component of the magnetic field normal to the toroidal plane. A strong dependence on coil eccentricity is also observed. In a theoretical sense, the curvature potential terms are necessary to preserve the hermiticity of the minimal-prescription Hamiltonian. This effective model may also elucidate how a surface current may be driven by a properly polarized incoming electromagnetic wave front to generate a specific multipole response. Alternatively, electron transport on the carbon nanotorus is calculated in a tight-binding model for armchair and zigzag carbon nanotori between metallic leads using a recursive non-equilibrium Green's function method. Density of states, transmission function and source-drain current are calculated for realistic system sizes of 10,000 carbon atoms and more. An object-oriented C++ code was developed using parallel sparse-matrix software libraries such as PETSc (Portable, Extensible Toolkit for Scientific Computation) with additional MPI parallelism to evaluate the transport Green's function at different energies. This fast and numerically precise tool on a multi-core architecture can incorporate additional effects such as electron-phonon coupling due to low-energy phonon modes, exciton transport, or electron-plasmon coupling terms in second- or third-nearest-neighbor type calculations.
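The full recursive NEGF machinery for tori of 10,000+ atoms is far beyond a snippet, but the structure of such a transmission calculation can be sketched for the simplest possible case, a 1D tight-binding chain between two semi-infinite leads (all parameters and helper names below are illustrative, not taken from the abstract's code):

    import numpy as np

    # Minimal NEGF transmission for a 1D tight-binding chain (hopping t,
    # onsite 0) between two semi-infinite leads of the same material.
    # T(E) = Tr[Gamma_L G Gamma_R G^dagger]; a perfect chain gives T = 1
    # inside the band |E| < 2t.
    t = 1.0
    N = 20                                   # device sites
    H = -t*(np.eye(N, k=1) + np.eye(N, k=-1))
    eta = 1e-9j                              # small imaginary part (retarded GF)

    def surface_g(E):
        # Retarded surface Green's function of a semi-infinite chain, |E| < 2t
        return (E - 1j*np.sqrt(4*t**2 - E**2 + 0j))/(2*t**2)

    for E in np.linspace(-1.5, 1.5, 7):
        sigma = t**2*surface_g(E)            # lead self-energy (same both sides)
        SL = np.zeros((N, N), complex); SL[0, 0] = sigma
        SR = np.zeros((N, N), complex); SR[-1, -1] = sigma
        G = np.linalg.inv((E + eta)*np.eye(N) - H - SL - SR)
        GL = 1j*(SL - SL.conj().T); GR = 1j*(SR - SR.conj().T)
        T = np.trace(GL @ G @ GR @ G.conj().T).real
        print(f"E={E:+.2f}  T={T:.3f}")      # ~1.000 inside the band

The recursive variant used for large systems computes the same quantities without ever inverting the full device matrix, which is what makes 10,000-atom calculations tractable.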
|
d63b6502161457db | This is documentation for Mathematica 5, based on an earlier version of the Wolfram Language.
3.2.10 Special Functions
Mathematica includes all the common special functions of mathematical physics found in standard handbooks. We will discuss each of the various classes of functions in turn.
One point you should realize is that in the technical literature there are often several conflicting definitions of any particular special function. When you use a special function in Mathematica, therefore, you should be sure to look at the definition given here to confirm that it is exactly what you want.
Mathematica gives exact results for some values of special functions.
In[1]:= Gamma[15/2]
No exact result is known here.
In[2]:= Gamma[15/7]
A numerical result, to arbitrary precision, can nevertheless be found.
In[3]:= N[%, 40]
You can give complex arguments to special functions.
In[4]:= Gamma[3 + 4I] //N
Special functions automatically get applied to each element in a list.
In[5]:= Gamma[{3/2, 5/2, 7/2}]
Mathematica knows analytical properties of special functions, such as derivatives.
In[6]:= D[Gamma[x], {x, 2}]
You can use FindRoot to find roots of special functions.
In[7]:= FindRoot[ BesselJ[0, x], {x, 1} ]
Special functions in Mathematica can usually be evaluated for arbitrary complex values of their arguments. Often, however, the defining relations given below apply only for some special choices of arguments. In these cases, the full function corresponds to a suitable extension or "analytic continuation" of these defining relations. Thus, for example, integral representations of functions are valid only when the integral exists, but the functions themselves can usually be defined elsewhere by analytic continuation.
As a simple example of how the domain of a function can be extended, consider the function represented by the sum $\sum_{n=0}^{\infty} z^n$. This sum converges only when $|z| < 1$. Nevertheless, it is easy to show analytically that for any $z$, the complete function is equal to $1/(1-z)$. Using this form, you can easily find a value of the function for any $z$, at least so long as $z \neq 1$.
Gamma and Related Functions
Gamma and related functions.
The Euler gamma function Gamma[z] is defined by the integral $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt$. For positive integer $n$, $\Gamma(n) = (n-1)!$. $\Gamma(z)$ can be viewed as a generalization of the factorial function, valid for complex arguments $z$.
There are some computations, particularly in number theory, where the logarithm of the gamma function often appears. For positive real arguments, you can evaluate this simply as Log[Gamma[z]]. For complex arguments, however, this form yields spurious discontinuities. Mathematica therefore includes the separate function LogGamma[z], which yields the logarithm of the gamma function with a single branch cut along the negative real axis.
The Euler beta function Beta[a, b] is $B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$.
The Pochhammer symbol or rising factorial Pochhammer[a, n] is $(a)_n = \Gamma(a+n)/\Gamma(a) = a(a+1)\cdots(a+n-1)$. It often appears in series expansions for hypergeometric functions. Note that the Pochhammer symbol has a definite value even when the gamma functions which appear in its definition are infinite.
The incomplete gamma function Gamma[a, z] is defined by the integral $\Gamma(a, z) = \int_z^\infty t^{a-1} e^{-t}\,dt$. Mathematica includes a generalized incomplete gamma function Gamma[a, $z_0$, $z_1$] defined as $\int_{z_0}^{z_1} t^{a-1} e^{-t}\,dt$.
The alternative incomplete gamma function $\gamma(a, z)$ can therefore be obtained in Mathematica as Gamma[a, 0, z].
The incomplete beta function Beta[z, a, b] is given by $B(z; a, b) = \int_0^z t^{a-1} (1-t)^{b-1}\,dt$. Notice that in the incomplete beta function, the parameter $z$ is an upper limit of integration, and appears as the first argument of the function. In the incomplete gamma function, on the other hand, $z$ is a lower limit of integration, and appears as the second argument of the function.
In certain cases, it is convenient not to compute the incomplete beta and gamma functions on their own, but instead to compute regularized forms in which these functions are divided by complete beta and gamma functions. Mathematica includes the regularized incomplete beta function BetaRegularized[z, a, b] defined for most arguments by $I(z; a, b) = B(z; a, b)/B(a, b)$, but taking into account singular cases. Mathematica also includes the regularized incomplete gamma function GammaRegularized[a, z] defined by $Q(a, z) = \Gamma(a, z)/\Gamma(a)$, with singular cases taken into account.
The incomplete beta and gamma functions, and their inverses, are common in statistics. The inverse beta function InverseBetaRegularized[s, a, b] is the solution for $z$ in $s = I(z; a, b)$. The inverse gamma function InverseGammaRegularized[a, s] is similarly the solution for $z$ in $s = Q(a, z)$.
Derivatives of the gamma function often appear in summing rational series. The digamma function PolyGamma[z] is the logarithmic derivative of the gamma function, given by $\psi(z) = \Gamma'(z)/\Gamma(z)$. For integer arguments, the digamma function satisfies the relation $\psi(n) = -\gamma + H_{n-1}$, where $\gamma$ is Euler's constant (EulerGamma in Mathematica) and $H_n$ are the harmonic numbers.
The polygamma functions PolyGamma[n, z] are given by $\psi^{(n)}(z) = d^n \psi(z)/dz^n$. Notice that the digamma function corresponds to $n = 0$. The general form $\psi^{(n)}(z)$ is the $(n+1)$-th, not the $n$-th, logarithmic derivative of the gamma function. The polygamma functions satisfy the relation $\psi^{(n)}(z+1) = \psi^{(n)}(z) + (-1)^n\,n!\,z^{-(n+1)}$.
Many exact results for gamma and polygamma functions are built into Mathematica.
In[1]:= PolyGamma[6]
Here is a contour plot of the gamma function in the complex plane.
In[2]:= ContourPlot[ Abs[Gamma[x + I y]], {x, -3, 3},
{y, -2, 2}, PlotPoints->50 ]
Zeta and Related Functions
Zeta and related functions.
The Riemann zeta function Zeta[s] is defined by the relation $\zeta(s) = \sum_{k=1}^{\infty} k^{-s}$ (for $\operatorname{Re} s > 1$). Zeta functions with integer arguments arise in evaluating various sums and integrals. Mathematica gives exact results when possible for zeta functions with integer arguments.
There is an analytic continuation of $\zeta(s)$ for arbitrary complex $s \neq 1$. The zeta function for complex arguments is central to number-theoretical studies of the distribution of primes. Of particular importance are the values on the critical line $\operatorname{Re} s = \frac{1}{2}$.
In studying $\zeta(\frac{1}{2} + it)$, it is often convenient to define the two analytic Riemann-Siegel functions RiemannSiegelZ[t] and RiemannSiegelTheta[t] according to $Z(t) = e^{i\vartheta(t)}\,\zeta(\frac{1}{2} + it)$ and $\vartheta(t) = \operatorname{Im}\log\Gamma(\frac{1}{4} + \frac{it}{2}) - \frac{t}{2}\log\pi$ (for $t$ real). Note that the Riemann-Siegel functions are both real as long as $t$ is real.
The Stieltjes constants StieltjesGamma[n] are generalizations of Euler's constant which appear in the series expansion of $\zeta(s)$ around its pole at $s = 1$; the coefficient of $(1-s)^n$ is $\gamma_n/n!$. Euler's constant is $\gamma_0$.
The generalized Riemann zeta function or Hurwitz zeta function Zeta[s, a] is given by $\zeta(s, a) = \sum_{k=0}^{\infty} (k+a)^{-s}$, where any term with $k + a = 0$ is excluded.
Mathematica gives exact results for .
In[1]:= Zeta[6]
Here is a three-dimensional picture of the Riemann zeta function in the complex plane.
In[2]:= Plot3D[ Abs[ Zeta[x + I y] ], {x, -3, 3}, {y, 2, 35}]
This is a plot of the absolute value of the Riemann zeta function on the critical line . You can see the first few zeros of the zeta function.
In[3]:= Plot[ Abs[ Zeta[ 1/2 + I y ] ], {y, 0, 40} ]
The polylogarithm functions PolyLog[n, z] are given by $\operatorname{Li}_n(z) = \sum_{k=1}^{\infty} z^k/k^n$. The polylogarithm function is sometimes known as Jonquière's function. The dilogarithm PolyLog[2, z] satisfies $\operatorname{Li}_2(z) = \int_z^0 \frac{\log(1-t)}{t}\,dt$. Sometimes $\operatorname{Li}_2(1-z)$ is known as Spence's integral. The Nielsen generalized polylogarithm functions or hyperlogarithms PolyLog[n, p, z] are given by $S_{n,p}(z) = \frac{(-1)^{n+p-1}}{(n-1)!\,p!} \int_0^1 \frac{(\log t)^{n-1} \left(\log(1 - zt)\right)^p}{t}\,dt$. Polylogarithm functions appear in Feynman diagram integrals in elementary particle physics, as well as in algebraic K-theory.
The Lerch transcendent LerchPhi[z, s, a] is a generalization of the zeta and polylogarithm functions, given by $\Phi(z, s, a) = \sum_{k=0}^{\infty} z^k/(a+k)^s$, where any term with $a + k = 0$ is excluded. Many sums of reciprocal powers can be expressed in terms of the Lerch transcendent. For example, the Catalan beta function $\beta(s) = \sum_{k=0}^{\infty} (-1)^k (2k+1)^{-s}$ can be obtained as $2^{-s}\,\Phi(-1, s, \frac{1}{2})$.
The Lerch transcendent is related to integrals of the Fermi-Dirac distribution in statistical mechanics by $\int_0^\infty \frac{k^s}{e^{k-\mu} + 1}\,dk = e^{\mu}\,\Gamma(s+1)\,\Phi(-e^{\mu}, s+1, 1)$.
The Lerch transcendent can also be used to evaluate Dirichlet L-series which appear in number theory. The basic -series has the form , where the "character" is an integer function with period . -series of this kind can be written as sums of Lerch functions with a power of .
LerchPhi[z, s, a, DoublyInfinite->True] gives the doubly infinite sum $\sum_{k=-\infty}^{\infty} z^k/\left((k+a)^2\right)^{s/2}$.
Exponential Integral and Related Functions
Exponential integral and related functions.
Mathematica has two forms of exponential integral: ExpIntegralE and ExpIntegralEi.
The exponential integral function ExpIntegralE[n, z] is defined by $E_n(z) = \int_1^\infty \frac{e^{-zt}}{t^n}\,dt$.
The second exponential integral function ExpIntegralEi[z] is defined by $\operatorname{Ei}(z) = -\int_{-z}^{\infty} \frac{e^{-t}}{t}\,dt$ (for $z > 0$), where the principal value of the integral is taken.
The logarithmic integral function LogIntegral[z] is given by $\operatorname{li}(z) = \int_0^z \frac{dt}{\log t}$ (for $z > 1$), where the principal value of the integral is taken. $\operatorname{li}(z)$ is central to the study of the distribution of primes in number theory. The logarithmic integral function is sometimes also denoted by $\operatorname{Li}(z)$. In some number-theoretical applications, $\operatorname{Li}(z)$ is defined as $\int_2^z dt/\log t$, with no principal value taken. This differs from the definition used in Mathematica by the constant $\operatorname{li}(2)$.
The sine and cosine integral functions SinIntegral[z] and CosIntegral[z] are defined by $\operatorname{Si}(z) = \int_0^z \frac{\sin t}{t}\,dt$ and $\operatorname{Ci}(z) = \gamma + \log z + \int_0^z \frac{\cos t - 1}{t}\,dt$. The hyperbolic sine and cosine integral functions SinhIntegral[z] and CoshIntegral[z] are defined by $\operatorname{Shi}(z) = \int_0^z \frac{\sinh t}{t}\,dt$ and $\operatorname{Chi}(z) = \gamma + \log z + \int_0^z \frac{\cosh t - 1}{t}\,dt$.
Error Function and Related Functions
Error function and related functions.
The error function Erf[z] is the integral of the Gaussian distribution, given by $\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt$. The complementary error function Erfc[z] is given simply by $\operatorname{erfc}(z) = 1 - \operatorname{erf}(z)$. The imaginary error function Erfi[z] is given by $\operatorname{erfi}(z) = \operatorname{erf}(iz)/i$. The generalized error function Erf[$z_0$, $z_1$] is defined by the integral $\frac{2}{\sqrt{\pi}} \int_{z_0}^{z_1} e^{-t^2}\,dt$. The error function is central to many calculations in statistics.
The inverse error function InverseErf[s] is defined as the solution for $z$ in the equation $s = \operatorname{erf}(z)$. The inverse error function appears in computing confidence intervals in statistics as well as in some algorithms for generating Gaussian random numbers.
Closely related to the error function are the Fresnel integrals FresnelC[z] defined by $C(z) = \int_0^z \cos(\pi t^2/2)\,dt$ and FresnelS[z] defined by $S(z) = \int_0^z \sin(\pi t^2/2)\,dt$. Fresnel integrals occur in diffraction theory.
Bessel and Related Functions
Bessel and related functions.
The Bessel functions BesselJ[n, z] and BesselY[n, z] are linearly independent solutions to the differential equation $z^2 y'' + z y' + (z^2 - n^2) y = 0$. For integer $n$, the $J_n(z)$ are regular at $z = 0$, while the $Y_n(z)$ have a logarithmic divergence at $z = 0$.
Bessel functions arise in solving differential equations for systems with cylindrical symmetry.
is often called the Bessel function of the first kind, or simply the Bessel function. is referred to as the Bessel function of the second kind, the Weber function, or the Neumann function (denoted ).
The Hankel functions (or Bessel functions of the third kind) give an alternative pair of solutions to the Bessel differential equation.
In studying systems with spherical symmetry, spherical Bessel functions arise, defined by , where and can be and , and , or and . For integer , Mathematica gives exact algebraic formulas for spherical Bessel functions.
The modified Bessel functions BesselI[n, z] and BesselK[n, z] are solutions to the differential equation $z^2 y'' + z y' - (z^2 + n^2) y = 0$. For integer $n$, $I_n(z)$ is regular at $z = 0$; $K_n(z)$ always has a logarithmic divergence at $z = 0$. The $I_n(z)$ are sometimes known as hyperbolic Bessel functions.
Particularly in electrical engineering, one often defines the Kelvin functions, according to , .
The Airy functions AiryAi[z] and AiryBi[z] are the two independent solutions $\operatorname{Ai}(z)$ and $\operatorname{Bi}(z)$ to the differential equation $y'' - z y = 0$. $\operatorname{Ai}(z)$ tends to zero for large positive $z$, while $\operatorname{Bi}(z)$ increases unboundedly. The Airy functions are related to modified Bessel functions with one-third-integer orders. The Airy functions often appear as the solutions to boundary value problems in electromagnetic theory and quantum mechanics. In many cases the derivatives of the Airy functions AiryAiPrime[z] and AiryBiPrime[z] also appear.
The Struve function StruveH[n, z] appears in the solution of the inhomogeneous Bessel equation which for integer has the form ; the general solution to this equation consists of a linear combination of Bessel functions with the Struve function added. The modified Struve function StruveL[n, z] is given in terms of the ordinary Struve function by . Struve functions appear particularly in electromagnetic theory.
Here is a plot of . This is a curve that an idealized chain hanging from one end can form when you wiggle it.
In[1]:= Plot[ BesselJ[0, Sqrt[x]], {x, 0, 50} ]
Mathematica generates explicit formulas for half-integer-order Bessel functions.
In[2]:= BesselK[3/2, x]
The Airy function plotted here gives the quantum-mechanical amplitude for a particle in a potential that increases linearly from left to right. The amplitude is exponentially damped in the classically inaccessible region on the right.
In[3]:= Plot[ AiryAi[x], {x, -10, 10} ]
Legendre and Related Functions
Legendre and related functions.
The Legendre functions and associated Legendre functions satisfy the differential equation $(1 - z^2) y'' - 2 z y' + \left[n(n+1) - \frac{m^2}{1 - z^2}\right] y = 0$. The Legendre functions of the first kind, LegendreP[n, z] and LegendreP[n, m, z], reduce to Legendre polynomials when $n$ and $m$ are integers. The Legendre functions of the second kind LegendreQ[n, z] and LegendreQ[n, m, z] give the second linearly independent solution to the differential equation. For integer $n$ they have logarithmic singularities at $z = \pm 1$. The $P_n(z)$ and $Q_n(z)$ solve the differential equation with $m = 0$.
Legendre functions arise in studies of quantum-mechanical scattering processes.
Types of Legendre functions. Analogous types exist for LegendreQ.
Legendre functions of type 1 are defined only when lies inside the unit circle in the complex plane. Legendre functions of type 2 have the same numerical values as type 1 inside the unit circle, but are also defined outside. The type 2 functions have branch cuts from to and from to . Legendre functions of type 3, sometimes denoted and , have a single branch cut from to .
Toroidal functions or ring functions, which arise in studying systems with toroidal symmetry, can be expressed in terms of the Legendre functions and .
Conical functions can be expressed in terms of and .
When you use the function LegendreP[n, x] with an integer , you get a Legendre polynomial. If you take to be an arbitrary complex number, you get, in general, a Legendre function.
In the same way, you can use the functions GegenbauerC and so on with arbitrary complex indices to get Gegenbauer functions, Chebyshev functions, Hermite functions, Jacobi functions and Laguerre functions. Unlike for associated Legendre functions, however, there is no need to distinguish different types in such cases.
Confluent Hypergeometric Functions
Confluent hypergeometric functions.
Many of the special functions that we have discussed so far can be viewed as special cases of the confluent hypergeometric function Hypergeometric1F1[a, b, z].
The confluent hypergeometric function can be obtained from the series expansion ${}_1F_1(a; b; z) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k} \frac{z^k}{k!}$. Some special results are obtained when $a$ and $b$ are both integers. If $a < 0$, and either $b > 0$ or $b < a$, the series yields a polynomial with a finite number of terms.
If $b$ is zero or a negative integer, then ${}_1F_1(a; b; z)$ itself is infinite. But the regularized confluent hypergeometric function Hypergeometric1F1Regularized[a, b, z] given by ${}_1F_1(a; b; z)/\Gamma(b)$ has a finite value in all cases.
Among the functions that can be obtained from are the Bessel functions, error function, incomplete gamma function, and Hermite and Laguerre polynomials.
The function is sometimes denoted or . It is often known as the Kummer function.
The function ${}_1F_1(a; b; z)$ can be written in the integral representation ${}_1F_1(a; b; z) = \frac{\Gamma(b)}{\Gamma(a)\,\Gamma(b-a)} \int_0^1 e^{zt}\, t^{a-1} (1-t)^{b-a-1}\,dt$.
The confluent hypergeometric function is a solution to Kummer's differential equation $z y'' + (b - z) y' - a y = 0$, with the boundary conditions $y(0) = 1$ and $y'(0) = a/b$.
The function HypergeometricU[a, b, z] gives a second linearly independent solution to Kummer's equation. For this function behaves like for small . It has a branch cut along the negative real axis in the complex plane.
The function has the integral representation .
, like , is sometimes known as the Kummer function. The function is sometimes denoted by .
The Whittaker functions give an alternative pair of solutions to Kummer's differential equation. The Whittaker function is related to by . The second Whittaker function obeys the same relation, with replaced by .
The parabolic cylinder functions are related to Whittaker functions by . For integer , the parabolic cylinder functions reduce to Hermite polynomials.
The Coulomb wave functions are also special cases of the confluent hypergeometric function. Coulomb wave functions give solutions to the radial Schrödinger equation in the Coulomb potential of a point nucleus. The regular Coulomb wave function is given by , where .
Other special cases of the confluent hypergeometric function include the Toronto functions , Poisson-Charlier polynomials , Cunningham functions and Bateman functions .
A limiting form of the confluent hypergeometric function which often appears is Hypergeometric0F1[a, z]. This function is obtained as the limit ${}_0F_1(; a; z) = \lim_{q \to \infty} {}_1F_1(q; a; z/q)$.
The ${}_0F_1$ function has the series expansion ${}_0F_1(; a; z) = \sum_{k=0}^{\infty} \frac{1}{(a)_k} \frac{z^k}{k!}$ and satisfies the differential equation $z y'' + a y' - y = 0$.
Bessel functions of the first kind can be expressed in terms of the function.
Hypergeometric Functions and Generalizations
Hypergeometric functions and generalizations.
The hypergeometric function Hypergeometric2F1[a, b, c, z] has series expansion ${}_2F_1(a, b; c; z) = \sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!}$. The function is a solution of the hypergeometric differential equation $z(1-z) y'' + [c - (a+b+1) z] y' - a b\, y = 0$.
The hypergeometric function can also be written as an integral: ${}_2F_1(a, b; c; z) = \frac{\Gamma(c)}{\Gamma(b)\,\Gamma(c-b)} \int_0^1 t^{b-1} (1-t)^{c-b-1} (1 - zt)^{-a}\,dt$.
The hypergeometric function is also sometimes denoted by , and is known as the Gauss series or the Kummer series.
The Legendre functions, and the functions which give generalizations of other orthogonal polynomials, can be expressed in terms of the hypergeometric function. Complete elliptic integrals can also be expressed in terms of the function.
The Riemann P function, which gives solutions to Riemann's differential equation, is also a function.
The generalized hypergeometric function or Barnes extended hypergeometric function HypergeometricPFQ[{$a_1$, ..., $a_p$}, {$b_1$, ..., $b_q$}, z] has series expansion ${}_pF_q(a_1, \ldots, a_p; b_1, \ldots, b_q; z) = \sum_{k=0}^{\infty} \frac{(a_1)_k \cdots (a_p)_k}{(b_1)_k \cdots (b_q)_k} \frac{z^k}{k!}$.
The Meijer G function MeijerG[,...,, ,...,, ,...,, ,...,, z] is defined by the contour integral representation , where the contour of integration is set up to lie between the poles of and the poles of . MeijerG is a very general function whose special cases cover most of the functions discussed in the past few sections.
The Appell hypergeometric function of two variables AppellF1[a, $b_1$, $b_2$, c, x, y] has series expansion $F_1(a; b_1, b_2; c; x, y) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{(a)_{m+n} (b_1)_m (b_2)_n}{(c)_{m+n}\, m!\, n!}\, x^m y^n$. This function appears for example in integrating cubic polynomials to arbitrary powers.
The Product Log Function
The product log function.
The product log function gives the solution for $w$ in $z = w\,e^w$. The function can be viewed as a generalization of a logarithm. It can be used to represent solutions to a variety of transcendental equations. The tree generating function for counting distinct oriented trees is related to the product log by $T(z) = -W(-z)$.
089a1d9c30754823 | Dismiss Notice
Join Physics Forums Today!
Why adj( U(t)) * U(t) = I where U(t) is a propagator in QM?
1. Jul 19, 2007 #1
This is probably really obvious, but can someone explain to me why adjoint( U(t) ) * U(t) = I, where U(t) is a propagator in QM and I is the identity.
3. Jul 20, 2007 #2
Isn't that basically the definition of U?
4. Jul 20, 2007 #3
Look at what happens if you time-reverse the Schrödinger equation.
5. Jul 20, 2007 #4
This is not obvious, but can be proven. The time evolution operator U(t) [also known as propagator] must preserve probabilities. E. P. Wigner proved (I think it was in 1931) that this implies that U(t) is a unitary operator (formally, U(t) can be also antiunitary, but this possibility can be discarded on the basis of continuity of U(t)). This result is called "Wigner theorem". The condition you wrote is equivalent to saying that U(t) is a unitary operator.
6. Jul 20, 2007 #5
in the language of operator theory, i believe another proof is via Stone's theorem.
it is always a bit startling to realize that many of the properties of QM follow very naturally from the mathematical properties of the Hilbert space. for example, many people are (for some reason) surprised when i tell them that the resolution of the identity, or complete set of states, [tex]\sum_i |i><i| = 1[/tex] is merely a trivial result of vector calculus, e.g. [tex]\vec{v} = \sum_i \vec{e_i} (\vec{e_i} \cdot \vec{v}) [/tex] with an arbitrary basis
|
70cec5cf15b49a2f | From Wikipedia, the free encyclopedia
[Figure: Lithium atom model. In the middle is the nucleus, which in this case has four neutrons (blue) and three protons (red); orbiting it are its three electrons (black).]
Smallest recognised division of a chemical element. Mass: 1.66 × 10^−27 to 4.52 × 10^−25 kg. Electric charge: zero.
An atom is the basic unit that makes up all matter. There are many different types of atoms, each with its own name, mass and size. These different atoms are called chemical elements. The chemical elements are organized on the periodic table. Examples of elements are hydrogen and gold.
Atoms are very small, but the exact size changes depending on the element. Atoms range from 0.1 to 0.5 nanometers in width.[1] One nanometer is about 100,000 times smaller than the width of a human hair.[2] This makes atoms impossible to see without special tools. Equations must be used to see the way they work and how they interact with other atoms.
Atoms join together to make molecules: for example, two hydrogen atoms and one oxygen atom combine to make a water molecule. When atoms join together it is called a chemical reaction.
Every atom is made up of three kinds of smaller particles, called protons (which are positively charged), neutrons (which have no charge) and electrons (which are negatively charged). The protons and neutrons are heavier, and stay in the middle of the atom. They are called the nucleus. They are surrounded by a cloud of electrons which are very light in weight and are attracted to the positive charge of the nucleus. This attraction is called electromagnetic force.
The number of protons, neutrons and electrons an atom has tells us what element it is. Hydrogen, for example, has one proton, no neutrons and one electron; the element sulfur has 16 protons, 16 neutrons and 16 electrons. The number of protons is the atomic number. The number of protons and neutrons together is the mass number, which determines the atomic weight.
Atoms move faster when they are in gas form (because they are free to move) than they do in liquid form and solid matter. In solid materials, the atoms are tightly packed next to each other so they vibrate, but are not able to move (there is no room) as atoms in liquids do.
History
The word "atom" comes from the Greek (ἀτόμος) "atomos", indivisible, from (ἀ)-, not, and τόμος, a cut. The first historical mention of the word atom came from works by the Greek philosopher Democritus, around 400 BC.[3] Atomic theory stayed as a mostly philosophical subject, with not much actual scientific investigation or study, until the development of chemistry in the 1650s.
In 1777 French chemist Antoine Lavoisier defined the term element for the first time. He said that an element was any basic substance that could not be broken down into other substances by the methods of chemistry. Any substance that could be broken down was a compound.[4]
In 1803, English philosopher John Dalton suggested that elements were tiny, solid balls made of atoms. Dalton believed that all atoms of the same element have the same mass. He said that compounds are formed when atoms of more than one element combine. According to Dalton, in a certain compound, the atoms of the compound's elements always combine the same way.
In 1827, British scientist Robert Brown looked at pollen grains in water under his microscope. The pollen grains appeared to be jiggling. Brown used Dalton's atomic theory to describe patterns in the way they moved. This was called brownian motion. In 1905 Albert Einstein used mathematics to prove that the seemingly random movements were caused by collisions with atoms and molecules, and by doing this he conclusively proved the existence of the atom.[5]

In 1869 scientist Dmitri Mendeleev published the first version of the periodic table. The periodic table groups elements by their atomic number (how many protons they have; this is usually the same as the number of electrons). Elements in the same column, or group, usually have similar properties. For example, helium, neon, argon, krypton and xenon are all in the same column and have very similar properties. All these elements are gases that have no colour and no smell. Also, they are unable to combine with other atoms to form compounds. Together they are known as the noble gases.[4]
The physicist J.J. Thomson was the first person to discover electrons. This happened while he was working with cathode rays in 1897. He realized they had a negative charge, unlike protons (positive) and neutrons (no charge). Thomson created the plum pudding model, which stated that an atom was like plum pudding: the dried fruit (electrons) were stuck in a mass of pudding (positive charge). In 1909, a scientist named Ernest Rutherford used the Geiger–Marsden experiment to prove that most of an atom is in a very small space called the atomic nucleus. Rutherford took a photo plate, covered it with gold foil, and then shot alpha particles (made of two protons and two neutrons stuck together) at it. Many of the particles went through the gold foil, which proved that atoms are mostly empty space. Electrons are so light that they make up far less than 1% of an atom's mass.[6]
Ernest Rutherford
In 1913, Niels Bohr introduced the Bohr model. This model showed that electrons travel around the nucleus in fixed circular orbits. This was more accurate than the Rutherford model. However, it was still not completely right. Improvements to the Bohr model have been made since it was first introduced.
In 1925, chemist Frederick Soddy found that some elements in the periodic table had more than one kind of atom.[7] For example, any atom with 2 protons should be a helium atom. Usually, a helium nucleus also contains two neutrons. However, some helium atoms have only one neutron. This means they truly are helium, because an element is defined by the number of protons, but they are not normal helium, either. Soddy called an atom like this, with a different number of neutrons, an isotope. To get the name of the isotope we look at how many protons and neutrons it has in its nucleus and add this to the name of the element. So a helium atom with two protons and one neutron is called helium-3, and a carbon atom with six protons and six neutrons is called carbon-12. However, when he developed his theory Soddy could not be certain neutrons actually existed. To prove they were real, physicist James Chadwick and a team of others created the mass spectrometer.[8] The mass spectrometer actually measures the mass and weight of individual atoms. By doing this Chadwick proved that to account for all the weight of the atom, neutrons must exist.
In 1938, German chemist Otto Hahn became the first person to create nuclear fission in a laboratory. He discovered this by chance when he was shooting neutrons at a uranium atom, hoping to create a new isotope. However, he noticed that instead of a new isotope the uranium simply changed into a barium atom, a smaller atom than uranium. Apparently, Hahn had "broken" the uranium atom. This was the world's first recorded nuclear fission reaction. This discovery eventually led to the creation of the atomic bomb.
Further into the 20th century, physicists went deeper into the mysteries of the atom. Using particle accelerators they discovered that protons and neutrons were actually made of other particles, called quarks.
The most accurate model so far comes from the Schrödinger equation. Schrödinger realized that the electrons exist in a cloud around the nucleus, called the electron cloud. In the electron cloud, it is impossible to know exactly where electrons are. The Schrödinger equation is used to find out where an electron is likely to be. This area is called the electron's orbital.
Structure and parts
Parts
A helium atom, with the nucleus shown in red (and enlarged), embedded in a cloud of electrons.
The complex atom is made up of three main particles: the proton, the neutron and the electron. The hydrogen isotope hydrogen-1 has no neutrons, just one proton and one electron. A positive hydrogen ion (of hydrogen-1) has no electrons, just the one proton. These two examples are the only known exceptions to the rule that all other atoms have at least one proton, one neutron and one electron each.
Electrons are by far the smallest of the three atomic particles; their size is too small to be measured using current technology, and their mass is tiny compared with that of the other two.[10] They have a negative charge. Protons and neutrons are of similar size and weight to each other;[10] protons are positively charged and neutrons have no charge. Most atoms have a neutral charge; because the number of protons (positive) and electrons (negative) are the same, the charges balance out to zero. However, in ions (different number of electrons) this is not always the case, and they can have a positive or a negative charge. Protons and neutrons are made out of quarks, of two types: up quarks and down quarks. A proton is made of two up quarks and one down quark, and a neutron is made of two down quarks and one up quark.
Nucleus
The nucleus is in the middle of an atom. It is made up of protons and neutrons. Usually in nature, two things with the same charge repel or shoot away from each other. So for a long time it was a mystery to scientists how the positively charged protons in the nucleus stayed together. They solved this by finding a particle called a gluon. Its name comes from the word glue as gluons act like atomic glue, sticking the protons together using the strong nuclear force. It is this force which also holds the quarks together that make up the protons and neutrons.
A diagram showing the main difficulty in nuclear fusion, the fact that protons, which have positive charges, repel each other when forced together.
The number of neutrons in relation to protons defines whether the nucleus is stable or goes through radioactive decay. When there are too many neutrons or protons, the atom tries to make the numbers the same by getting rid of the extra particles. It does this by emitting radiation in the form of alpha, beta or gamma decay.[11] Nuclei can change through other means too. Nuclear fission is when the nucleus splits into two smaller nuclei, releasing a lot of stored energy. This release of energy is what makes nuclear fission useful for making bombs and electricity, in the form of nuclear power. The other way nuclei can change is through nuclear fusion, when two nuclei join together, or fuse, to make a heavier nucleus. This process requires extreme amounts of energy in order to overcome the electrostatic repulsion between the protons, as they have the same charge. Such high energies are most common in stars like our Sun, which fuses hydrogen for fuel.
Electrons
Electrons orbit, or travel around, the nucleus. They are called the atom's electron cloud. They are attracted towards the nucleus because of the electromagnetic force. Electrons have a negative charge and the nucleus always has a positive charge, so they attract each other. Around the nucleus, some electrons are further out than others, in different layers. These are called electron shells. In most atoms the first shell has two electrons, and all after that have eight. Exceptions are rare, but they do happen and are difficult to predict.[12] The further away the electron is from the nucleus, the weaker the pull of the nucleus on it. This is why bigger atoms, with more electrons, react more easily with other atoms. The electromagnetism of the nucleus is not strong enough to hold onto their electrons and atoms lose electrons to the strong attraction of smaller atoms.[13]
Radioactive decay
Some elements, and many isotopes, have what is called an unstable nucleus. This means the nucleus is either too big to hold itself together[14] or has too many protons or neutrons. When this happens the nucleus has to get rid of the excess mass or particles. It does this through radiation. An atom that does this can be called radioactive. Unstable atoms continue to be radioactive until they lose enough mass/particles that they become stable. All atoms above atomic number 82 (82 protons, lead) are radioactive.[14]
There are three main types of radioactive decay; alpha, beta and gamma.[15]
• Alpha decay is when the atom shoots out a particle having two protons and two neutrons. This is essentially a helium nucleus. The result is an element with atomic number two less than before. So for example if a beryllium atom (atomic number 4) went through alpha decay it would become helium (atomic number 2). Alpha decay happens when an atom is too big and needs to get rid of some mass.
• Beta decay is when a neutron turns into a proton or a proton turns into a neutron. In the first case the atom shoots out an electron. In the second case it is a positron (like an electron but with a positive charge). The end result is an element with one higher or one lower atomic number than before. Beta decay happens when an atom has either too many protons, or too many neutrons.
• Gamma decay is when an atom shoots out a gamma ray, or wave. It happens when there is a change in the energy of the nucleus. This is usually after a nucleus has already gone through alpha or beta decay. There is no change in the mass or atomic number of the atom, only in the stored energy inside the nucleus.
Every radioactive element or isotope has what is named a half-life. This is how long it takes half of any sample of atoms of that type to decay until they become a different stable isotope or element.[16] Large atoms, or isotopes with a big difference between the number of protons and neutrons, will therefore have a long half-life, because they must lose more neutrons to become stable.
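As a small worked example of the half-life rule (the helper name below is hypothetical; carbon-14's half-life of about 5,730 years is the standard textbook value):

    # Radioactive decay: N(t) = N0 * (1/2)**(t / half_life)
    def remaining(n0, t, half_life):
        return n0 * 0.5**(t / half_life)

    # Carbon-14 after two half-lives (11,460 years): one quarter is left
    print(remaining(1000, 11460, 5730))   # 250.0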
Marie Curie was one of the first scientists to study radioactivity. She discovered the element radium. She was also the first female recipient of the Nobel Prize.
Frederick Soddy conducted an experiment to observe what happens as radium decays. He placed a sample in a light bulb and waited for it to decay. Suddenly, helium (containing 2 protons and 2 neutrons) appeared in the bulb, and from this experiment he discovered this type of radiation has a positive charge.
James Chadwick discovered the neutron, by observing decay products of different types of radioactive isotopes. Chadwick noticed that the atomic number of the elements was lower than the total atomic mass of the atom. He concluded that electrons could not be the cause of the extra mass because they barely have mass.
Enrico Fermi used neutrons to bombard uranium. He discovered that uranium decayed a lot faster than usual and produced a lot of alpha and beta particles. He also believed that uranium got changed into a new element he named hesperium.

Otto Hahn and Fritz Strassmann repeated Fermi's experiment to see if the new element hesperium was actually created. They discovered two new things Fermi did not observe. By using a lot of neutrons the nucleus of the atom would split, producing a lot of heat energy. Also, the fission products of uranium turned out to be elements that were already known: thorium, palladium, radium, radon and lead.

Fermi then noticed that the fission of one uranium atom shot off more neutrons, which then split other atoms, creating chain reactions. He realised that this process, called nuclear fission, could create huge amounts of heat energy.

That very discovery of Fermi's led to the development of the first nuclear bomb, tested under the code name 'Trinity'.
References
Other websites |
7fc64af67b151bd7 |
1. Rapid Review of Early Quantum Mechanics
Michael Fowler, UVa
Note: This is stuff you -- hopefully -- already know well from an undergraduate Modern Physics course. We’re going through it quickly just to remind you.
The next three lectures online are a more detailed account of this introductory material: lecture 2 on the birth of Quantum Mechanics includes historical stuff you don't really need, but I think it's worth reminding yourself how the physics evolved. Lectures 3, 4 and 5 cover undergraduate material you absolutely need to have down.
Concerning the course as a whole, our text is Shankar, but a substantial fraction of the material covered in the first semester can be found in a good undergraduate quantum text, such as Griffiths.
What was Wrong with Classical Mechanics?
Basically, classical statistical mechanics wasn’t making sense...
Maxwell and Boltzmann evolved the equipartition theorem: a physical system can have many states (gas with particles having different velocities, or springs in different states of compression).
At nonzero temperature, energy will flow around in the system, it will constantly move from one state to another. So, what is the probability that at any instant it is in a particular state with energy E ?
M&B proved it to be proportional to $e^{-E/kT}$. This proportionality factor is also correct for any subsystem of the system: for example, a single molecule.
Notice this means if a system is a set of oscillators, different masses on different strength springs, for example, then in thermal equilibrium each oscillator has on average the same energy as all the others. For three-dimensional oscillators in thermal equilibrium, the average energy of each oscillator is 3kT, where k is Boltzmann’s constant.
Black Body Radiation
Now put this together with Maxwell's discovery that light is an electromagnetic wave: inside a hot oven, Maxwell's equations can be solved yielding standing wave solutions, and the set of allowed standing waves of different wavelengths amounts to an infinite series of oscillators, with no upper limit on the frequencies, going far into the ultraviolet. Therefore, from the classical equipartition theorem, an oven at thermal equilibrium at a definite temperature should contain an infinite amount of energy of order kT in each of an infinite number of modes, and if you let radiation out through a tiny hole in the side, you should see radiation of all frequencies.
This is not, of course, what is observed: as an oven is warmed, it emits infrared, then red, then yellow light, etc. This means that the higher frequency oscillators (blue, etc.) are in fact not excited at low temperatures: equipartition isn’t true.
Planck showed that the experimentally observed intensity/frequency curve was exactly reproduced if it was assumed that the radiation was quantized: light of frequency $f$ could only be emitted in quanta (now called photons) having energy $hf$, $h$ being Planck's constant. This was the beginning of quantum mechanics.
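The contrast can be made concrete with a short calculation (standard formulas, rounded constants): the classical Rayleigh-Jeans energy density grows without bound at high frequency, while Planck's quantized form is exponentially suppressed once $hf \gg kT$.

    import numpy as np

    # Planck vs classical (Rayleigh-Jeans) spectral energy density at T = 2000 K.
    h, c, kB = 6.626e-34, 3.0e8, 1.381e-23
    T = 2000.0

    def planck(f):
        return 8*np.pi*h*f**3/c**3/np.expm1(h*f/(kB*T))

    def rayleigh_jeans(f):
        return 8*np.pi*f**2*kB*T/c**3

    for f in (1e13, 1e14, 1e15):
        print(f"{f:.0e} Hz  Planck={planck(f):.3e}  RJ={rayleigh_jeans(f):.3e}")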
The Photoelectric Effect
Einstein showed the same quantization of electromagnetic radiation explained the photoelectric effect: a photon of energy \(hf\) knocks an electron out of a metal; it takes a certain work \(W\) to get it out, and the rest of the photon energy goes to the kinetic energy of the electron, for the fastest electrons emitted (those that come right from the surface, so encountering no further resistance). Plotting the maximum electron kinetic energy as a function of incident light frequency confirms the hypothesis, giving the same value for \(h\) as that needed to explain radiation from an oven. (It had previously been assumed that more intense light would increase the average kinetic energy of each emitted electron; this turned out not to be the case.)
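A small worked example (illustrative numbers, not from the lecture; the sodium work function of roughly 2.28 eV is a typical quoted value):

```python
# Maximum photoelectron kinetic energy K_max = h*f - W for a sodium surface.
h_eV = 4.136e-15   # Planck's constant in eV s

def k_max(f_hz, work_function_eV):
    """Einstein's photoelectric relation; a negative result means no emission."""
    return h_eV * f_hz - work_function_eV

W_sodium = 2.28                       # work function in eV (typical value)
for f in (4.0e14, 6.0e14, 7.5e14):    # red, blue-green, near-ultraviolet light
    k = k_max(f, W_sodium)
    print(f"f = {f:.1e} Hz -> K_max = {k:+.2f} eV "
          f"({'emission' if k > 0 else 'no emission'})")
```

Note that brighter red light still ejects nothing: intensity raises the number of photons, not the energy per photon.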
The Bohr Atom
Bohr put together this quantization of light energy with Rutherford's discovery that the atom had a nucleus, with electrons somehow orbiting around it: for the hydrogen atom, light emitted when the atom is thermally excited has a particular pattern; the observed emitted wavelengths are given by
\[ \frac{1}{\lambda} = R_H \left( \frac{1}{4} - \frac{1}{n^2} \right), \qquad n = 3, 4, 5, \ldots \]
( \(R_H\) is now called the Rydberg constant.) Bohr realized these were photons having energy equal to the energy difference between two allowed orbits of the electron circling the nucleus (the proton), \(E_n - E_m = hf\), leading to the conclusion that the allowed levels must be \(E_n = -hc R_H / n^2\).
How could the quantum \(hf\) restricting allowed radiation energies also restrict the allowed electron orbits? Bohr realized there must be a connection, because \(h\) has the dimensions of angular momentum! What if the electron were only allowed to be in circular orbits of angular momentum \(nKh\), with \(n\) an integer? Bohr did the math for orbits under an inverse square law, and found that the observed spectra were in fact correctly accounted for by taking \(K = 1/2\pi\).
But then he realized he didn't even need the experimental results to find \(K\): quantum mechanics must agree with classical mechanics in the regime where we know experimentally that classical mechanics (including Maxwell's equations) is correct, that is, for systems of macroscopic size. Consider a negative charge orbiting around a fixed positive charge at a radius of 10 cm, the charges being such that the speed is of order meters per second (we don't want relativistic effects making things more complicated). Then from classical E&M, the charge will radiate at the orbital frequency. Now imagine this is actually a hydrogen atom, in a perfect vacuum, in a high state of excitation. It must be radiating at this same frequency. But Bohr's theory can't just be right for small orbits, so the radiation must satisfy \(E_n - E_m = hf\). The spacing between adjacent levels will vary slowly for these large orbits, so \(h\) times the orbital frequency must be the energy difference between adjacent levels. Now, that energy difference depends on the allowed angular momentum step between the adjacent levels: that is, on \(K\). Reconciling these two expressions for the radiation frequency gives \(K = 1/2\pi\).
This classical limit argument, then, predicts the Rydberg constant in terms of already known quantities:
\[ R_H = \left( \frac{1}{4\pi \varepsilon_0} \right)^2 \frac{2 \pi^2 m e^4}{c h^3}. \]
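Plugging in rounded SI values (a quick numerical check, not in the original notes) reproduces the measured Rydberg constant:

```python
# Evaluate Bohr's expression for R_H; the measured value is ~1.097e7 m^-1.
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck's constant, J s

R_H = (1 / (4 * math.pi * eps0))**2 * 2 * math.pi**2 * m_e * e**4 / (c * h**3)
print(f"R_H = {R_H:.4e} m^-1")   # prints ~1.097e7 m^-1
```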
What’s right about the Bohr atom?
1. It gives the Balmer series spectra.
2. The first orbit size is close to the observed size of the atom: and remember there are no adjustable parameters, the classical limit argument determines the spectra and the size.
What’s wrong with the Bohr atom?
1. No explanation for why angular momentum should be quantized.
This was solved by de Broglie a little later.
2. Why don’t the circling electrons radiate, as predicted classically?
Well, the fact that radiation is quantized means the classical picture of an accelerating charge smoothly emitting radiation cannot work if the energies involved are of order h times the frequencies involved.
3. The lowest state has nonzero angular momentum.
This is a defect of the model, corrected in the truly quantum model (Schrödinger’s equation).
4. In an inverse square field, orbits are in general elliptical.
This was at first a puzzle: why should there be only circular orbits allowed? In fact, the model does allow elliptical orbits, and they don’t show up in the Balmer series because, as proved by Sommerfeld, if the allowed elliptical orbits have the same allowed angular momenta as Bohr’s orbits, they have the same set of energies. This is a special property of the inverse square force.
De Broglie Waves
The first explanation of why only certain angular momenta are allowed for the circling electron was given by de Broglie: just as photons act like particles (definite energy and momentum) but are undoubtedly wavelike, being light, so particles like electrons perhaps have wavelike properties. For photons, the relationship between wavelength and momentum is \(p = h/\lambda\). Assuming this is also true of electrons, and that the allowed circular orbits are standing waves, Bohr's angular momentum quantization follows.
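To spell out that last step (a one-line derivation, not written out in the original): requiring a whole number of wavelengths to fit around a circular orbit of radius \(r\) gives

\[ 2\pi r = n\lambda = \frac{nh}{p} \quad \Longrightarrow \quad L = pr = n\,\frac{h}{2\pi}, \]

which is exactly Bohr's quantization with \(K = 1/2\pi\).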
Schrödinger’s Wave Equation
De Broglie's idea was clearly on the right track, but waves in space are three-dimensional: thinking of the circular orbit as a string under tension can't be right, even if the answer is.
Photon waves (electromagnetic waves) obey the equation
\[ \nabla^2 \vec{E} - \frac{1}{c^2} \frac{\partial^2 \vec{E}}{\partial t^2} = 0. \]
A solution of definite momentum is the plane wave:
\[ \left( \frac{\partial^2}{\partial x^2} - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) E_0 e^{i(kx - \omega t)} = \left( -k^2 + \frac{\omega^2}{c^2} \right) E_0 e^{i(kx - \omega t)} = 0. \]
Notice that the last equality is essentially just \(\omega = ck\): for a plane wave solution, the energy and momentum of the photon are translated into differential operators with respect to time and space respectively, to give a differential equation for the wave.
Schrödinger's wave equation comes from equivalently taking the (nonrelativistic) energy-momentum relation \(E = p^2/2m\) and using the same recipe to translate it into a differential equation:
\[ i\hbar \frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x^2}. \]
Making the natural extension to three dimensions, and assuming we can add a potential term in the most naïve way possible, that is, going from \(E = p^2/2m\) to \(E = p^2/2m + V(x,y,z)\), we get
\[ i\hbar \frac{\partial \psi(x,y,z,t)}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \psi(x,y,z,t) + V(x,y,z)\, \psi(x,y,z,t). \]
This is the equation Schrödinger wrote down and solved. The solutions gave the same set of energies as the Bohr model, but now the ground state had zero angular momentum, and many of the details of the solutions were borne out by experiment, as we shall discuss further later.
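As a concrete illustration (a minimal numerical sketch, not from the lecture), one can discretize the time-independent equation on a grid and diagonalize it. For a harmonic oscillator potential in units where \(\hbar = m = \omega = 1\), the exact levels are \(n + \tfrac{1}{2}\):

```python
# Finite-difference solution of -(1/2) psi'' + (x^2/2) psi = E psi.
import numpy as np

N, box = 2000, 20.0                  # grid points, total box width
x = np.linspace(-box / 2, box / 2, N)
dx = x[1] - x[0]

# Three-point stencil for -(1/2) d^2/dx^2, plus the potential on the diagonal.
diag = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(H)[:4])     # ~ [0.5, 1.5, 2.5, 3.5]
```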
A Conserved Current
Schrödinger also showed that a conserved current could be defined in terms of the wave function \(\psi\):
\[ \frac{\partial \rho}{\partial t} + \operatorname{div} \vec{j} = 0, \qquad \text{where } \rho = \psi^* \psi = |\psi|^2 \text{ and } \vec{j} = \frac{\hbar}{2mi} \left( \psi^* \nabla \psi - \psi \nabla \psi^* \right). \]
Schrödinger's interpretation of his equation was that the electron was simply a wave, not a particle, and \(\rho\) was the wave intensity. But thinking of electromagnetic waves in this way gave no clue to the quantum photon behavior, so this couldn't be the whole story.
Interpreting the Wave Function
The correct interpretation of the wave function (due to Born) follows from analogy to the electromagnetic case. Let's review that briefly. The basic example is the two-slit diffraction pattern, as built up by sending through one photon at a time to a bank of photon detectors. The pattern gradually emerges: solve the wave equation; then the predicted local energy density (proportional to \(|\vec{E}(x,y,z,t)|^2\, dx\, dy\, dz\)) gives the probability of one photon going through the system landing at that spot.
Born suggested that similarly \(|\psi|^2\) at any point was proportional to the probability of detecting the electron at that point. This has turned out to be correct.
Localizing the Electron
Despite its wavelike properties, we know that an electron can behave like a particle: specifically, it can move as a fairly localized entity from one place to another. What’s the wave representation of that? It’s called a wave packet: a localized wave excitation. To see how this can come about, first remember that the Schrödinger equation is a linear equation, the sum of any two or more solutions is itself a solution. If we add together two plane waves close in wavelength, we get beats, which can be regarded as a string of wave packets. To get a single wave packet, we must add together a continuous range of wavelengths.
The standard example is the Gaussian wave packet, \[ \psi(x, t=0) = A\, e^{i k_0 x} e^{-x^2 / 2\Delta^2}, \qquad \text{where } p_0 = \hbar k_0. \]
Using the standard result
\[ \int_{-\infty}^{+\infty} e^{-a x^2}\, dx = \sqrt{\frac{\pi}{a}}, \]
we find \(|A|^2 = (\pi \Delta^2)^{-1/2}\), so
\[ \psi(x, t=0) = \frac{1}{(\pi \Delta^2)^{1/4}}\, e^{i k_0 x} e^{-x^2 / 2\Delta^2}. \]
But how do we construct this particular wavepacket by superposing plane waves? That is to say, we need a representation of the form:
\[ \psi(x) = \int_{-\infty}^{+\infty} \frac{dk}{2\pi}\, e^{ikx}\, \phi(k). \]
The function \(\phi(k)\) represents the weighting of plane waves in the neighborhood of wavenumber \(k\). This is a particular example of a Fourier transform; we will be discussing the general case in detail a little later in the course. Note that if \(\phi(k)\) is a bounded function, any particular \(k\) value gives a vanishingly small contribution: the plane-wave contribution to \(\psi(x)\) from a range \(dk\) is \(\phi(k)\, dk/2\pi\). In fact, \(\phi(k)\) is given in terms of \(\psi(x)\) by
\[ \phi(k) = \int_{-\infty}^{+\infty} dx\, e^{-ikx}\, \psi(x). \]
It is perhaps worth mentioning at this point that this can be understood qualitatively by observing that the plane wave prefactor \(e^{-ikx}\) will interfere destructively with all plane wave components of \(\psi(x)\) except that of wavenumber \(k\). It may at first appear that the surviving contribution is infinite, but recall that, as stated above, any particular \(k\) component has a vanishingly small weight; in fact, this is the right answer, as we shall show in more convincing fashion later.
In the present case, the above handwaving argument is unnecessary, because both integrals can be carried out exactly, using the standard result
\[ \int_{-\infty}^{+\infty} e^{-a x^2 + bx}\, dx = e^{b^2/4a} \sqrt{\frac{\pi}{a}}, \]
which gives \[ \phi(k) = (4\pi \Delta^2)^{1/4}\, e^{-\Delta^2 (k - k_0)^2 / 2}. \]
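As a sanity check (a numerical sketch, not part of the notes), one can verify this result by direct quadrature, using the \(dk/2\pi\) convention adopted above:

```python
# Compare the numerically computed phi(k) with the analytic Gaussian result.
import numpy as np

Delta, k0 = 1.0, 5.0
x = np.linspace(-40, 40, 8001)
dx = x[1] - x[0]
psi = (np.pi * Delta**2)**-0.25 * np.exp(1j * k0 * x - x**2 / (2 * Delta**2))

def phi_numeric(k):
    # phi(k) = integral of exp(-i k x) psi(x) dx, by a simple Riemann sum
    return np.sum(np.exp(-1j * k * x) * psi) * dx

def phi_exact(k):
    return (4 * np.pi * Delta**2)**0.25 * np.exp(-Delta**2 * (k - k0)**2 / 2)

for k in (3.0, 5.0, 7.0):
    print(k, abs(phi_numeric(k)), phi_exact(k))   # the two columns agree
```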
The Uncertainty Principle
Note that the spreads in \(x\)-space and \(p\)-space are inversely related: \(\Delta x\) is of order \(\Delta\), while \(\Delta p = \hbar\, \Delta k\) is of order \(\hbar/\Delta\). This is of course the Uncertainty Principle: localization in \(x\)-space requires a large spread in contributing momentum states.
It's worth reviewing the undergraduate exercises on applications of the uncertainty principle. They help sharpen one's appreciation of the wave/particle nature of quantum objects.
There's a limit to how well the position of an electron can be determined: it is detected by bouncing a photon off of it, and the photon wavelength sets the limit on \(\Delta x\). But if the photon has enough energy to create an electron-positron pair out of the vacuum, you can't be sure which electron you're seeing. This limits \(\Delta x \gtrsim \hbar/mc\) at best. (This length is called the Compton wavelength, written \(\lambda_C\); it appears in Compton scattering.) How much smaller than a hydrogen atom ground state wave function is this? \(\lambda_C / a_0 = e^2/\hbar c\) (CGS) \(= e^2/4\pi \varepsilon_0 \hbar c\) (SI) \(= 1/137\), known as the fine structure constant. This is also the ratio of the electron speed in the first Bohr orbit to the speed of light, and so is an indication of the importance of relativistic corrections to the energies of electron states; the (relativistic) differences in energy between circular and elliptical orbits that have the same energy when calculated nonrelativistically lead to fine structure in the atomic spectra.
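A quick numerical check of these ratios (rounded SI constants; \(\lambda_C\) here is the reduced Compton wavelength \(\hbar/mc\)):

```python
# Fine structure constant and the Compton-to-Bohr length ratio, in SI units.
import math

hbar = 1.055e-34   # J s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
c = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12   # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
lambda_C = hbar / (m_e * c)                          # reduced Compton wavelength
a_0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)    # Bohr radius

print(f"alpha = {alpha:.5f} ~ 1/{1 / alpha:.1f}")    # ~ 1/137
print(f"lambda_C / a_0 = {lambda_C / a_0:.5f}")      # the same number
```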
|
0459402914c85847 | Public Release:
Quantum leap: computational approach launches new paradigm in electronic structure theory
Michigan State University
EAST LANSING, Mich. -- A group of Michigan State University researchers specializing in quantum calculations has proposed a radically new computational approach to solving the complex many-particle Schrödinger equation, which holds the key to explaining the motion of electrons in atoms and molecules.
The work, led by Piotr Piecuch, university distinguished professor in the Department of Chemistry and adjunct professor in the Department of Physics and Astronomy in the College of Natural Science, was published recently in Physical Review Letters. Also involved in the work are fourth-year graduate student J. Emiliano Deustua and senior postdoctoral associate Jun Shen. The group provides details for a new way of obtaining highly accurate electronic energies by merging the deterministic coupled-cluster and stochastic (randomly determined) Quantum Monte Carlo approaches.
"Instead of insisting on a single philosophy when solving the electronic Schrödinger equation, which has historically been either deterministic or stochastic, we have chosen a third way," Piecuch said. "As one of the reviewers noted, the essence of it is remarkably simple: use the stochastic approach to determine what is important and the deterministic approach to determine the important, while correcting for the information missed by stochastic sampling."
The new idea is to use the stochastic methods to identify the leading wave function components and the deterministic coupled-cluster computations, combined with suitable energy corrections, to provide the missing information. The merging of deterministic and stochastic approaches as a general method of solving the many-particle Schrödinger equation may also impact other areas, such as nuclear physics.
"In the case of nuclei, instead of being concerned with electrons, one would use our new approach to solve the Schrödinger equation for protons and neutrons," Piecuch said. "The mathematical and computational issues are similar. Just like chemists want to understand the electronic structure of a molecule, nuclear physicists want to unravel the structure of the atomic nucleus. Once again, solving the many-particle Schrödinger equation holds the key."
|
06f9e23e2166ec0d | 1. 31
2. 12
Reading Tarn's interviews always makes me want to become a video game developer.
1. 9
The flip side of this is that once you see Dwarf Fortress for graph traversal and topological sort, it loses a lot of its magic.
1. 9
Physics story time!
In quantum mechanics, there's this thing called the Schrödinger equation. As an extremely oversimplified description, it says that you can describe an entire quantum system in terms of the "Hamiltonian" operator. It's a partial differential equation, so really messy to work with, but hypothetically you can reduce everything in quantum mechanics, classical mechanics, chemistry, biology, weather patterns, etc to solving the Hamiltonian. That doesn't mean, though, that it's easy. Here's roughly where we are in terms of complete solutions.
• Proton: Trivial.
• Proton + 1 electron: Tricky, but we solved this almost a century ago.
• Proton + 2 electrons: Holy shit what the fuck is going on
Even with a single unified equation, you very quickly hit systems where you’re pretty much stuck. And that’s just three particles! Once you give up analytic solutions, you’re now in a world of emergent phenomena, where small quantum rules avalanche through a system and lead to bizarre macro-level properties. For example, if you model a metal as a free sea of electrons and add a slight force coming from the ions in the lattice, you suddenly get “forbidden zones” of electron energy, aka band gaps. Then that cascades to make insulators and semiconductors possible, which cascades into transistors, which cascades into, well, computers. So a very slight change in the electron model gives you a universe where I can ramble about my undergrad classes to a complete stranger who may or may not be on the other side of the world.
Dwarf Fortress might just be graph traversal and topological sort. Glass is just a bunch of harmonic springs. Weather is just Newton’s equations spread over a lot of particles. Doesn’t mean that we understand it, can predict it, or don’t find it mysterious and full of wonder.
1. 1
Funny that the same 1-2-3 pattern holds for Newtonian gravity and orbits.
Single object in empty space: trivial.
Two objects: Kepler’s laws hold precisely.
Star-planet-moon: Well, up to some approximation…
Three stars of comparable masses: oh no not this.
2. 8
But does having a simple structure underneath weaken or strengthen magic-ness (especially if the details in the next level are carefully thought out)? After all, a digital clock is less magic than a digital clock running on Conway’s Life.
That’s probably a matter of perspective.
1. 6
Is that different from seeing human relations as applied decision theory?
Which immediately suggests that Tarn should add in irrationality and biases to dwarf logic… assuming he hasn’t already.
Losing my blood probably will hurt a bit immediately and may have serious long-term impacts, but those are quite a bit more difficult to measure so let’s assign that negative value at 1/10th its actual cost.
1. 4
To be fair, we don’t know that the Universe we’re currently in isn’t much more than graph traversal and topological sorts.
1. 1
What is the source for that, if I may ask? Not that I doubt you, but I’d be interested in explanations of how DF works under the hood. |
5607cdb700aa4efc | Physics LibreTexts
5.6: The Covariant Derivative
In the preceding section we were able to estimate a nontrivial general relativistic effect, the geodetic precession of the gyroscopes aboard Gravity Probe B, up to a unitless constant 3\(\pi\). Let's think about what additional machinery would be needed in order to carry out the calculation in detail, including the 3\(\pi\).
First we would need to know the Einstein field equation, but in a vacuum this is fairly straightforward:
\[R_{ab} = 0.\]
Einstein posited this equation based essentially on the considerations laid out in Section 5.1.
But just knowing that a certain tensor vanishes identically in the space surrounding the earth clearly doesn’t tell us anything explicit about the structure of the spacetime in that region. We want to know the metric. As suggested at the beginning of the chapter, we expect that the first derivatives of the metric will give a quantity analogous to the gravitational field of Newtonian mechanics, but this quantity will not be directly observable, and will not be a tensor. The second derivatives of the metric are the ones that we expect to relate to the Ricci tensor \(R_{ab}\).
The Covariant Derivative in Electromagnetism
We're talking blithely about derivatives, but it's not obvious how to define a derivative in the context of general relativity in such a way that taking a derivative results in a well-behaved tensor.
To see how this issue arises, let’s retreat to the more familiar terrain of electromagnetism. In quantum mechanics, the phase of a charged particle’s wavefunction is unobservable, so that for example the transformation \(\Psi \rightarrow − \Psi\) does not change the results of experiments. As a less trivial example, we can redefine the ground of our electrical potential, \(\Phi \rightarrow \Phi + \delta \Phi\), and this will add a constant onto the energy of every electron in the universe, causing their phases to oscillate at a greater rate due to the quantum-mechanical relation
\[E = hf.\]
There are no observable consequences, however, because what is observable is the phase of one electron relative to another, as in a double-slit interference experiment. Since every electron has been made to oscillate faster, the effect is simply like letting the conductor of an orchestra wave her baton more quickly; every musician is still in step with every other musician. The rate of change of the wavefunction, i.e., its derivative, has some built-in ambiguity.
Figure \(\PageIndex{1}\) - A double-slit experiment with electrons. If we add an arbitrary constant to the potential, no observable changes result. The wavelength is shortened, but the relative phase of the two parts of the waves stays the same.
For simplicity, let’s now restrict ourselves to spin-zero particles, since details of electrons’ polarization clearly won’t tell us anything useful when we make the analogy with relativity. For a spin-zero particle, the wavefunction is simply a complex number, and there are no observable consequences arising from the transformation
\[\Psi \rightarrow \Psi' = e^{i \alpha} \Psi\]
where \(\alpha\) is a constant. The transformation \(\Phi \rightarrow \Phi + \delta \Phi\) is also allowed, and it gives \(\alpha (t) = (\frac{q \delta \Phi}{\hbar})t\), so that the phase factor \(e^{i \alpha (t)}\) is a function of time \(t\). Now from the point of view of electromagnetism in the age of Maxwell, with the electric and magnetic fields imagined as playing their roles against a background of Euclidean space and absolute time, the form of this time-dependent phase factor is very special and symmetrical; it depends only on the absolute time variable. But to a relativist, there is nothing very nice about this function at all, because there is nothing special about a time coordinate. If we're going to allow a function of this form, then based on the coordinate-invariance of relativity, it seems that we should probably allow \(\alpha\) to be any function at all of the spacetime coordinates. The proper generalization of \(\Phi \rightarrow \Phi + \delta \Phi\) is now \(A_b \rightarrow A_b - \partial_{b} \alpha\), where \(A_b\) is the electromagnetic potential four-vector (section 4.2).
Figure \(\PageIndex{2}\) - Two wavefunctions with constant wavelengths, and a third with a varying wavelength. None of these are physically distinguishable, provided that the same variation in wavelength is applied to all electrons in the universe at any given point in spacetime. There is not even any unambiguous way to pick out the third one as the one with a varying wavelength. We could choose a different gauge in which the third wave was the only one with a constant wavelength.
Exercise \(\PageIndex{1}\)
Self-check: Suppose we said we would allow \(\alpha\) to be a function of t, but forbid it to depend on the spatial coordinates. Prove that this would violate Lorentz invariance.
The transformation has no effect on the electromagnetic fields, which are the direct observables. We can also verify that the change of gauge will have no effect on observable behavior of charged particles. This is because the phase of a wavefunction can only be determined relative to the phase of another particle’s wavefunction, when they occupy the same point in space and, for example, interfere. Since the phase shift depends only on the location in spacetime, there is no change in the relative phase.
But bad things will happen if we don’t make a corresponding adjustment to the derivatives appearing in the Schrödinger equation. These derivatives are essentially the momentum operators, and they give different results when applied to \(\Psi'\) than when applied to \(\Psi\):
$$\begin{split} \partial_{b} \Psi &\rightarrow \partial_{b} (e^{i \alpha} \Psi) \\ &= e^{i \alpha} \partial_{b} \Psi + i \partial_{b} \alpha (e^{i \alpha} \Psi) \\ &= (\partial_{b} + A'_{b} - A_{b}) \Psi' \end{split}$$
To avoid getting incorrect results, we have to do the substitution \(\partial_{b} \rightarrow \partial_{b} + ieA_{b}\), where the correction term compensates for the change of gauge. We call the operator \(\nabla\) defined as
$$\nabla_{b} = \partial_{b} + ieA_{b}$$
the covariant derivative. It gives the right answer regardless of a change of gauge.
The Covariant Derivative in General Relativity
Now consider how all of this plays out in the context of general relativity. The gauge transformations of general relativity are arbitrary smooth changes of coordinates. One of the most basic properties we could require of a derivative operator is that it must give zero on a constant function. A constant scalar function remains constant when expressed in a new coordinate system, but the same is not true for a constant vector function, or for any tensor of higher rank. This is because the change of coordinates changes the units in which the vector is measured, and if the change of coordinates is nonlinear, the units vary from point to point.
Figure \(\PageIndex{3}\): These three rulers represent three choices of coordinates. As in Figure \(\PageIndex{2}\) switching from one set of coordinates to another has no effect on any experimental observables. It is merely a choice of gauge.
Consider the one-dimensional case, in which a vector \(v^a\) has only one component, and the metric is also a single number, so that we can omit the indices and simply write v and g. (We just have to remember that v is really a contravariant vector, even though we're leaving out the upper index.) If v is constant, its derivative \(\frac{dv}{dx}\), computed in the ordinary way without any correction term, is zero. If we further assume that the coordinate x is a normal coordinate, so that the metric is simply the constant g = 1, then zero is not just the answer but the right answer. (The existence of a preferred, global set of normal coordinates is a special feature of a one-dimensional space, because there is no curvature in one dimension. In more than one dimension, there will typically be no possible set of coordinates in which the metric is constant, and normal coordinates only give a metric that is approximately constant in the neighborhood around a certain point. See Figure 5.3.7 for an example of normal coordinates on a sphere, which do not have a constant metric.)
Now suppose we transform into a new coordinate system X, which is not normal. The metric G, expressed in this coordinate system, is not constant. Applying the tensor transformation law, we have \(V = v \frac{dX}{dx}\), and differentiation with respect to X will not give zero, because the factor \(\frac{dX}{dx}\) isn’t constant. This is the wrong answer: V isn’t really varying, it just appears to vary because G does.
We want to add a correction term onto the derivative operator \(\frac{d}{dX}\), forming a covariant derivative operator \(\nabla_{X}\) that gives the right answer. This correction term is easy to find if we consider what the result ought to be when differentiating the metric itself. In general, if a tensor appears to vary, it could vary either because it really does vary or because the metric varies. If the metric itself varies, it could be either because the metric really does vary or . . . because the metric varies. In other words, there is no sensible way to assign a nonzero covariant derivative to the metric itself, so we must have \(\nabla_{X}\)G = 0. The required correction therefore consists of replacing \(\frac{d}{dX}\) with
$$\nabla_{X} = \frac{d}{dX} - G^{-1} \frac{dG}{dX} \ldotp$$
Applying this to G gives zero. G is a second-rank covariant tensor. If we apply the same correction to the derivatives of other second-rank covariant tensors, we will get nonzero results, and they will be the right nonzero results. For example, the covariant derivative of the stress-energy tensor T (assuming such a thing could have some physical significance in one dimension!) will be \(\nabla_{X} T = \frac{dT}{dX} - G^{-1} (\frac{dG}{dX})T\).
Physically, the correction term is a derivative of the metric, and we've already seen that the derivatives of the metric (1) are the closest thing we get in general relativity to the gravitational field, and (2) are not tensors. In 1+1 dimensions, suppose we observe that a free-falling rock has \(\frac{dV}{dT}\) = 9.8 m/s². This acceleration cannot be a tensor, because we could make it vanish by changing from Earth-fixed coordinates X to free-falling (normal, locally Lorentzian) coordinates x, and a tensor cannot be made to vanish by a change of coordinates. According to a free-falling observer, the vector v isn't changing at all; it is only the variation in the Earth-fixed observer's metric G that makes it appear to change.
Mathematically, the form of the derivative is \((\frac{1}{y}) \frac{dy}{dx}\), which is known as a logarithmic derivative, since it equals \(\frac{d(\ln y)}{dx}\). It measures the multiplicative rate of change of y. For example, if y scales up by a factor of k when x increases by 1 unit, then the logarithmic derivative of y is ln k. The logarithmic derivative of ecx is c. The logarithmic nature of the correction term to \(\nabla_{X}\) is a good thing, because it lets us take changes of scale, which are multiplicative changes, and convert them to additive corrections to the derivative operator. The additivity of the corrections is necessary if the result of a covariant derivative is to be a tensor, since tensors are additive creatures.
What about quantities that are not second-rank covariant tensors? Under a rescaling of contravariant coordinates by a factor of k, covariant vectors scale by k−1, and second-rank covariant tensors by k−2. The correction term should therefore be half as much for covariant vectors,
$$\nabla_{X} = \frac{d}{dX} - \frac{1}{2} G^{-1} \frac{dG}{dX} \ldotp$$
and should have an opposite sign for contravariant vectors.
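Here is a minimal symbolic check of the one-dimensional story (a sketch using sympy, not part of the text; the change of coordinates \(X = e^x\) is an arbitrary choice): starting from a normal coordinate x with g = 1 and a constant vector v, the ordinary derivative of the transformed component fails to vanish, while the corrected derivative does.

```python
# Verify the 1D covariant derivative of a constant contravariant vector is zero.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)            # arbitrary invertible change of coordinates X = f(x)
v = sp.Integer(3)        # constant vector component in the normal coordinate x

dXdx = sp.diff(f, x)
V = v * dXdx             # contravariant transformation law: V = v dX/dx
G = 1 / dXdx**2          # metric in the new coordinates: G = g (dx/dX)^2

# d/dX = (dx/dX) d/dx = (1/f') d/dx
dV_dX = sp.diff(V, x) / dXdx
dG_dX = sp.diff(G, x) / dXdx

# For a contravariant vector the correction has the opposite sign:
# nabla_X V = dV/dX + (1/2) G^-1 (dG/dX) V
cov_deriv = dV_dX + sp.Rational(1, 2) * (1 / G) * dG_dX * V

print(sp.simplify(dV_dX))      # nonzero: V only *appears* to vary
print(sp.simplify(cov_deriv))  # 0
```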
Generalizing the correction term to derivatives of vectors in more than one dimension, we should have something of this form:
$$\begin{split} \nabla_{a} v^{b} &= \partial_{a} v^{b} + \Gamma^{b}_{ac} v^{c} \\ \nabla_{a} v_{b} &= \partial_{a} v_{b} - \Gamma^{c}_{ba} v_{c}, \end{split}$$
where \(\Gamma^{b}_{ac}\), called the Christoffel symbol, does not transform like a tensor, and involves derivatives of the metric. (“Christoffel” is pronounced “Krist-AWful,” with the accent on the middle syllable.) The explicit computation of the Christoffel symbols from the metric is deferred until section 5.9, but the intervening sections 5.7 and 5.8 can be omitted on a first reading without loss of continuity.
An important gotcha is that when we evaluate a particular component of a covariant derivative such as \(\nabla_{2} v^{3}\), it is possible for the result to be nonzero even if the component v3 vanishes identically. This can be seen in example 5 and example 21.
Example 9: Christoffel symbols on the globe
As a qualitative example, consider the geodesic airplane trajectory shown in Figure 5.6.4, from London to Mexico City. In physics it is customary to work with the colatitude, \(\theta\), measured down from the north pole, rather than the latitude, measured from the equator. At P, over the North Atlantic, the plane's colatitude has a minimum. (We can see, without having to take it on faith from the figure, that such a minimum must occur. The easiest way to convince oneself of this is to consider a path that goes directly over the pole, at \(\theta\) = 0.)
Figure \(\PageIndex{4}\)
At P, the plane's velocity vector points directly west. At Q, over New England, its velocity has a large component to the south. Since the path is a geodesic and the plane has constant speed, the velocity vector is simply being parallel-transported; the vector's covariant derivative is zero. Since we have \(v^{\theta} = 0\) at P, the only way to explain the nonzero and positive value of \(\partial_{\phi} v^{\theta}\) is that we have a nonzero and negative value of \(\Gamma^{\theta}_{\phi \phi}\).
By symmetry, we can infer that \(\Gamma^{\theta}_{\phi \phi}\) must have a positive value in the southern hemisphere, and must vanish at the equator.
\(\Gamma^{\theta}_{\phi \phi}\) is computed in example 10.
Symmetry also requires that this Christoffel symbol be independent of \(\phi\), and it must also be independent of the radius of the sphere.
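As a cross-check (a symbolic sketch, not part of the text; example 10 does the computation by hand), applying the standard formula \(\Gamma^{a}_{bc} = \frac{1}{2} g^{ad} (\partial_{b} g_{dc} + \partial_{c} g_{db} - \partial_{d} g_{bc})\) to the sphere metric \(ds^2 = d\theta^2 + \sin^2 \theta \, d\phi^2\) gives \(\Gamma^{\theta}_{\phi \phi} = -\sin \theta \cos \theta\): negative in the northern hemisphere, positive in the southern, zero at the equator, and independent of \(\phi\) and of the radius, just as argued above.

```python
# Christoffel symbols on the unit sphere (the radius drops out of Gamma).
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.diag(1, sp.sin(theta)**2)   # metric in (theta, phi) coordinates
g_inv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
    return sp.Rational(1, 2) * sum(
        g_inv[a, d] * (sp.diff(g[d, c], coords[b])
                       + sp.diff(g[d, b], coords[c])
                       - sp.diff(g[b, c], coords[d]))
        for d in range(2))

print(sp.simplify(christoffel(0, 1, 1)))   # -> -sin(theta)*cos(theta)
```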
Example 9 is in two spatial dimensions. In spacetime, \(\Gamma\) is essentially the gravitational field (see problem 7), and early papers in relativity essentially refer to it that way.9 This may feel like a joyous reunion with our old friend from freshman mechanics, g = 9.8 m/s². But our old friend has changed. In Newtonian mechanics, accelerations like g are frame-invariant (considering only inertial frames, which are the only legitimate ones in that theory). In general relativity they are frame-dependent, and as we saw earlier, the acceleration of gravity can be made to equal anything we like, based on our choice of a frame of reference.
To compute the covariant derivative of a higher-rank tensor, we just add more correction terms, e.g.,
$$\nabla_{a} U_{bc} = \partial_{a} U_{bc} - \Gamma^{d}_{ba} U_{dc} - \Gamma^{d}_{ca} U_{bd}$$
$$\nabla_{a} U_{b}^{c} = \partial_{a} U_{b}^{c} - \Gamma^{d}_{ba} U_{d}^{c} + \Gamma^{c}_{ad} U_{b}^{d} \ldotp$$
With the partial derivative \(\partial_{\mu}\), it does not make sense to use the metric to raise the index and form \(\partial^{\mu}\). It does make sense to do so with covariant derivatives, so \(\nabla^{a} = g^{ab} \nabla_{b}\) is a correct identity.
Comma, Semicolon, and Birdtracks Notation
Some authors use superscripts with commas and semicolons to indicate partial and covariant derivatives. The following equations give equivalent notations for the same derivatives:
$$\partial_{\mu} X_{\nu} = X_{\nu,\; \mu}$$
$$\nabla_{a} X_{b} = X_{b;a}$$
$$\nabla^{a} X_{b} = X_{b}^{;a}$$
Figure 5.6.5 shows two examples of the corresponding birdtracks notation. Because birdtracks are meant to be manifestly coordinate-independent, they do not have a way of expressing non-covariant derivatives. We no longer want to use the circle as a notation for a non-covariant gradient as we did when we first introduced it in section 2.1.
9 “On the gravitational field of a point mass according to Einstein’s theory,” Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften 1 (1916) 189, translated in |
550490149ae93a2d | Discrete Exterior Calculus
So far we've been exploring exterior calculus purely in the smooth setting. Unfortunately this theory was developed by some old-timers who did not know anything about computers, hence it cannot be used directly by machines that store only a finite amount of information. For instance, if we have a smooth vector field or a smooth 1-form we can't possibly store the direction of every little "arrow" at each point — there are far too many of them! Instead, we need to keep track of a discrete (or really, finite) number of pieces of information that capture the essential behavior of the objects we're working with; we call this scheme discrete exterior calculus (or DEC for short). The big secret about DEC is that it's literally nothing more than good old-fashioned (continuous) exterior calculus, except that we integrate differential forms over elements of our mesh.
Discrete Differential Forms
One way to encode a 1-form might be to store a finite collection of “arrows” associated with some subset of points. Instead, we’re going to do something a bit different: we’re going to integrate our 1-form over each edge of a mesh, and store the resulting numbers (remember that the integral of an \(n\)-form always spits out a single number) on the corresponding edges. In other words, if \(\alpha\) is a 1-form and \(e\) is an edge, then we’ll associate the number
\[ \hat{\alpha}_e := \int_e \alpha \]
with \(e\), where the use of the hat (\(\ \hat{}\ \)) is supposed to suggest a discrete quantity (not to be confused with a unit-length vector).
Does this procedure seem a bit abstract to you? It shouldn’t! Think about what this integral represents: it tells us how strongly the 1-form \(\alpha\) “flows along” the edge \(e\) on average. More specifically, remember how integration of a 1-form works: at each point along the edge we take the vector tangent to the edge, stick it into the 1-form \(\alpha\), and sum up the resulting values — each value tells us something about how well \(\alpha\) “lines up” with the direction of the edge. For instance, we could approximate the integral via the sum
\[ \int_e \alpha \approx |e|\left(\frac{1}{N} \sum_{i=1}^N \alpha_{p_i}(\hat{e})\right), \]
where \(|e|\) denotes the length of the edge, \(\{p_i\}\) is a sequence of points along the edge, and \(\hat{e} := e/|e|\) is a unit vector tangent to the edge:
Of course, this quantity tells us absolutely nothing about the strength of the “flow” orthogonal to the edge: it could be zero, it could be enormous! We don’t really know, because we didn’t take any measurements along the orthogonal direction. However, the hope is that some of this information will still be captured by nearby edges (which are most likely not parallel to \(e\)).
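In code, the averaging formula above might look something like the following sketch (the function name and arguments are hypothetical, not from any particular mesh library):

```python
# Integrate a continuous 1-form over an oriented edge by midpoint sampling.
import numpy as np

def integrate_1form(alpha, p0, p1, N=16):
    """Approximate the integral of alpha over the oriented edge (p0, p1).

    alpha(p) returns the 1-form at p as a vector acting by dot product.
    """
    e = p1 - p0
    length = np.linalg.norm(e)
    t_hat = e / length                               # unit tangent
    samples = [p0 + (i + 0.5) / N * e for i in range(N)]
    return length * np.mean([alpha(p) @ t_hat for p in samples])

# Example: alpha = df for f(x, y) = x*y, i.e., df = y dx + x dy. The edge
# integral should then equal f(p1) - f(p0) by the fundamental theorem.
alpha = lambda p: np.array([p[1], p[0]])
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
print(integrate_1form(alpha, p0, p1))                # ~ 2.0 = f(p1) - f(p0)
```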
More generally, a \(k\)-form that has been integrated over each \(k\)-dimensional cell (edges in 1D, faces in 2D, etc.) is called a discrete differential \(k\)-form. (If you ever find the distinction confusing, you might find it helpful to substitute the word “integrated” for the word “discrete.”) In practice, however, not every discrete differential form has to originate from a continuous one — for instance, a bunch of arbitrary values assigned to each edge of a mesh is a perfectly good discrete 1-form.
One thing you may have noticed in all of our illustrations so far is that each edge is marked with a little arrow. Why? Well, one thing to remember is that direction matters when you integrate. For instance, the fundamental theorem of calculus (and common sense) tells us that the total change as you go from \(a\) to \(b\) is the opposite of the total change as you go from \(b\) to \(a\):
\[ \int_a^b \frac{\partial\phi}{\partial x} dx = \phi(b)-\phi(a) = -(\phi(a)-\phi(b)) = -\int_b^a \frac{\partial\phi}{\partial x} dx. \]
Said in a much less fancy way: the elevation gain as you go from Pasadena to Altadena is 151 meters, so the elevation "gain" in the other direction must be -151 meters! Just keeping track of the number 151 does you little good — you have to say what that quantity represents.
Therefore, when we store a discrete differential form it’s not enough to just store a number: we also have to specify a canonical orientation for each element of our mesh, corresponding to the orientation we used during integration. For an edge we’ve already seen that we can think about orientation as a little arrow pointing from one vertex to another — we could also just think of an edge as an ordered pair \((i,j)\), meaning that we always integrate from \(i\) to \(j\).
More generally, suppose that each element of our mesh is an oriented \(k\)-simplex \(\sigma\), i.e., a collection of \(k+1\) vertices \(p_i \in \mathbb{R}^n\) given in some fixed order \((p_1, \ldots, p_{k+1})\). The geometry associated with \(\sigma\) is the convex combination of these points:
\[ \left\{ \sum_{i=1}^{k+1} \lambda_i p_i \left| \sum_{i=1}^{k+1} \lambda_i = 1 \right. \right\} \subset \mathbb{R}^n \]
(Convince yourself that a 0-simplex is a vertex, a 1-simplex is an edge, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron.)
Two oriented \(k\)-simplices have the same orientation if and only if the vertices of one are an even permutation of the vertices of another. For instance, the triangles \((p_1, p_2, p_3)\) and \((p_2, p_3, p_1)\) have the same orientation; \((p_1, p_2, p_3)\) and \((p_2, p_1, p_3)\) have opposite orientation.
If a simplex \(\sigma_1\) is a (not necessarily proper) subset of another simplex \(\sigma_2\), then we say that \(\sigma_1\) is a face of \(\sigma_2\). For instance, every vertex, edge, and triangle of a tetrahedron \(\sigma\) is a face of \(\sigma\); as is \(\sigma\) itself! Moreover, the orientation of a simplex agrees with the orientation of one of its faces as long as we see an even permutation on the shared vertices. For instance, the orientations of the edge \((p_2,p_1)\) and the triangle \((p_1,p_3,p_2)\) agree. Geometrically all we're saying is that the two "point" in the same direction (as depicted above). To keep yourself sane while working with meshes, the most important thing is to pick an orientation and stick with it!
So in general, how do we integrate a \(k\)-form over an oriented \(k\)-simplex? Remember that a \(k\)-form is going to "eat" \(k\) vectors at each point and spit out a number — a good canonical choice is to take the ordered collection of edge vectors \((p_2 - p_1, \ldots, p_{k+1}-p_1)\) and orthogonalize them (using, say, the Gram-Schmidt algorithm) to get vectors \((u_1, \ldots, u_k)\). This way the sign of the integrand changes whenever the orientation changes. Numerically, we can then approximate the integral via a sum
\[ \int_\sigma \alpha \approx \frac{|\sigma|}{N} \sum_{i=1}^N \alpha_{p_i}(u_1, \ldots, u_k) \]
where \(\{p_i\}\) is a (usually carefully-chosen) collection of sample points. (Can you see why the orientation of \(\sigma\) affects the sign of the integrand?) Sounds like a lot of work, but in practice one rarely constructs discrete differential forms via integration: more often, discrete forms are constructed via input data that is already discrete (e.g., vertex positions in a triangle mesh).
By the way, what’s a discrete 0-form? Give up? Well, it must be a 0-form (i.e., a function) that’s been integrated over every 0-simplex (i.e., vertex) of a mesh:
\[ \hat{\phi}_i = \int_{v_i} \phi \]
By convention, the integral of a function over a zero-dimensional set is simply the value of the function at that point: \(\hat{\phi}_i = \phi(v_i)\). In other words, in the case of 0-forms there is no difference between storing point samples and storing integrated quantities: the two coincide.
It’s also important to remember that differential forms don’t have to be real-valued. For instance, we can think of a map \(f: M \rightarrow \mathbb{R}^3\) that encodes the geometry of a surface as an \(\mathbb{R}^3\)-valued 0-form; its differential \(df\) is then an \(\mathbb{R}^3\)-valued 1-form, etc. Likewise, when we say that a discrete differential form is a number stored on every mesh element, the word “number” is used in a fairly loose sense: a number could be a real value, a vector, a complex number, a quaternion, etc. For instance, the collection of \((x,y,z)\) vertex coordinates of a mesh can be viewed as an \(\mathbb{R}^3\)-valued discrete 0-form (namely, one that discretizes the map \(f\)). The only requirement, of course, is that we store the same type of number on each mesh element.
The Discrete Exterior Derivative
One of the main advantages of working with integrated (i.e., “discrete”) differential forms instead of point samples is that we can easily take advantage of Stokes’ theorem. Remember that Stokes’ theorem says
\[ \int_\Omega d\alpha = \int_{\partial\Omega} \alpha, \]
for any \(k\)-form \(\alpha\) and \(k+1\)-dimensional domain \(\Omega\). In other words, we can integrate the derivative of a differential form as long as we know its integral along the boundary. But that’s exactly the kind of information encoded by a discrete differential form! For instance, if \(\hat{\alpha}\) is a discrete 1-form stored on the three edges of a triangle \(\sigma\), then we have
\[ \int_\sigma d\alpha = \int_{\partial\sigma} \alpha = \sum_{i=1}^3 \int_{e_i} \alpha = \sum_{i=1}^3 \hat{\alpha}_i. \]
In other words, we can exactly evaluate the integral on the left by just adding up three numbers. Pretty cool! In fact, the thing on the left is also a discrete differential form: it’s the 2-form \(d\alpha\) integrated over the only triangle in our mesh. So for convenience, we’ll call this guy “\(\hat{d}\hat{\alpha}\)”, and we’ll call the operation \(\hat{d}\) the discrete exterior derivative. (In the future we will drop the hats from our notation when the meaning is clear from context.) In other words, the discrete exterior derivative takes a \(k\)-form that has already been integrated over each \(k\)-simplex and applies Stokes’ theorem to get the integral of the derivative over each \(k+1\)-simplex.
In practice (i.e., in code) you can see how this operation might be implemented by simply taking local sums over the appropriate mesh elements. However, in the example above we made life particularly easy on ourselves by giving each edge an orientation that agrees with the orientation of the triangle. Unfortunately assigning a consistent orientation to every simplex is not always possible, and in general we need to be more careful about sign when adding up our piecewise integrals. For instance, in the example below we’d have
\[ (\hat{d}\hat{\alpha})_1 = \hat{\alpha}_1 + \hat{\alpha}_2 + \hat{\alpha}_3 \]
\[ (\hat{d}\hat{\alpha})_2 = \hat{\alpha}_4 + \hat{\alpha}_5 - \hat{\alpha}_2. \]
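In code, the discrete exterior derivative on 1-forms is just a signed incidence matrix: the entry for (face, edge) is +1 if the edge's orientation agrees with the face's, and -1 if it disagrees. Here is a minimal sketch with made-up mesh data mirroring the two-triangle example above (0-based indices):

```python
# Build d_1 as a signed face-edge incidence matrix and apply it to a 1-form.
import numpy as np

# Each face lists (edge index, sign); sign = -1 when the edge's own
# orientation disagrees with the face's orientation.
faces = [
    [(0, +1), (1, +1), (2, +1)],   # first triangle: all edges agree
    [(3, +1), (4, +1), (1, -1)],   # second triangle: shared edge 1 disagrees
]

num_edges = 5
d1 = np.zeros((len(faces), num_edges))
for f, edges in enumerate(faces):
    for e, sign in edges:
        d1[f, e] = sign

alpha_hat = np.array([0.1, 0.2, 0.3, 0.4, 0.5])   # a discrete 1-form
print(d1 @ alpha_hat)   # d alpha on each triangle: [0.6, 0.7]
```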
Discrete Hodge Star
As hinted at above, a discrete \(k\)-form captures the behavior of a continuous \(k\)-form along \(k\) directions, but not along the remaining \(n-k\) directions — for instance, a discrete 1-form in 2D captures the flow along edges but not in the orthogonal direction. If you paid attention to our discussion of Hodge duality, this story starts to sound familiar! To capture Hodge duality in the discrete setting, we’ll need to define a dual mesh. In general, the dual of an \(n\)-dimensional simplicial mesh identifies every \(k\)-simplex in the primal (i.e., original) mesh with a unique \((n-k)\)-cell in the dual mesh. In a two-dimensional simplicial mesh, for instance, primal vertices are identified with dual faces, primal edges are identified with dual edges, and primal faces are identified with dual vertices. Note, however, that the dual cells are not always simplices! (See above.)
So how do we talk about Hodge duality in discrete exterior calculus? Quite simply, the discrete Hodge dual of a (discrete) \(k\)-form on the primal mesh is an \((n-k)\)-form on the dual mesh. Similarly, the Hodge dual of an \(k\)-form on the dual mesh is a \(k\)-form on the primal mesh. Discrete forms on the primal mesh are called primal forms and discrete forms on the dual mesh are called dual forms. Given a discrete form \(\hat{\alpha}\) (whether primal or dual), its Hodge dual is typically written as \(\hat{\star} \hat{\alpha}\).
Unlike continuous forms, discrete primal and dual forms live in different places (so for instance, discrete primal \(k\)-forms and dual \(k\)-forms cannot be added to each other). In fact, primal and dual forms often have different physical interpretations. For instance, a primal 1-form might represent the total circulation along edges of the primal mesh, whereas in the same context a dual 1-form might represent the total flux through the corresponding dual edges (see illustration above).
Of course, these two quantities (flux and circulation) are closely related, and naturally leads into one definition for a discrete Hodge star called the diagonal Hodge star. Consider a primal \(k\)-form \(\alpha\). If \(\hat{\alpha}_i\) is the value of \(\hat{\alpha}\) on the \(k\)-simplex \(\sigma_i\), then the diagonal Hodge star is defined by
\[ \hat{\star} \hat{\alpha}_i = \frac{|\sigma_i^\star|}{|\sigma_i|} \hat{\alpha}_i \]
for all \(i\), where \(|\sigma|\) indicates the (unsigned) volume of \(\sigma\) (which by convention equals one for a vertex!) and \(|\sigma^\star|\) is the volume of the corresponding dual cell. In other words, to compute the dual form we simply multiply the scalar value stored on each cell by the ratio of corresponding dual and primal volumes.
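In code the diagonal Hodge star really is just a diagonal matrix of volume ratios (a minimal sketch with made-up edge lengths):

```python
# Diagonal Hodge star on primal 1-forms of a 2D mesh.
import numpy as np

primal_lengths = np.array([1.0, 2.0, 1.5])   # |sigma_i|, primal edge lengths
dual_lengths = np.array([0.8, 0.6, 1.2])     # |sigma_i^star|, dual edge lengths

star1 = np.diag(dual_lengths / primal_lengths)

alpha_hat = np.array([3.0, 1.0, -2.0])   # circulation along primal edges
flux_hat = star1 @ alpha_hat             # flux through the dual edges
print(flux_hat)

# The dual Hodge star inverts the primal one, recovering alpha_hat:
print(np.linalg.inv(star1) @ flux_hat)
```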
If we remember that a discrete form can be thought of as a continuous form integrated over each cell, this definition for the Hodge star makes perfect sense: the primal and dual quantities should have the same density, but we need to account for the fact that they are integrated over cells of different volume. We therefore normalize by a ratio of volumes when mapping between primal and dual. This particular Hodge star is called diagonal since the \(i\)th element of the dual differential form depends only on the \(i\)th element of the primal differential form. It’s not hard to see, then, that Hodge star taking dual forms to primal forms (the dual Hodge star) is the inverse of the one that takes primal to dual (the primal Hodge star).
That’s All, Folks!
Hey, wait a minute, what about our other operations, like the wedge product (\(\wedge\))? These operations can certainly be defined in the discrete setting, but we won’t go into detail here — the basic recipe is to integrate, integrate, integrate. Actually, even in continuous exterior calculus we omitted a couple operations like the Lie derivative (\(\mathcal{L}_X\)) and the interior product (\(i_\alpha\)). Coming up with a complete discrete calculus where the whole cast of characters \(d\), \(\wedge\), \(\star\), \(\mathcal{L}_X\), \(i_\alpha\), etc., plays well together is an active and ongoing area of research, which may be of interest to aspiring young researchers like you (yes, you)!
A Quick and Dirty Introduction to Exterior Calculus — Part V: Integration and Stokes’ Theorem
In the last set of notes we talked about how to differentiate \(k\)-forms using the exterior derivative \(d\). We’d also like some way to integrate forms. Actually, there’s surprisingly little to say about integration given the setup we already have. Suppose we want to compute the total area \(A_\Omega\) of a region \(\Omega\) in the plane:
If you remember back to calculus class, the basic idea was to break up the domain into a bunch of little pieces that are easy to measure (like squares) and add up their areas:
\[ A_\Omega \approx \sum_i A_i. \]
As these squares get smaller and smaller we get a better and better approximation, ultimately achieving the true area
\[ A_\Omega = \int_\Omega dA. \]
Alternatively, we could write the individual areas using differential forms — in particular, \(A_i = dx^1 \wedge dx^2(u,v)\). Therefore, the area element \(dA\) is really nothing more than the standard volume form \(dx^1 \wedge dx^2\) on \(\mathbb{R}^2\). (Not too surprising, since the whole point of \(k\)-forms was to measure volume!)
To make things more interesting, let’s say that the contribution of each little square is weighted by some scalar function \(\phi\). In this case we get the quantity
\[ \int_\Omega \phi\ dA = \int_\Omega \phi\ dx^1 \wedge dx^2. \]
Again the integrand \(\phi\ dx^1 \wedge dx^2\) can be thought of as a 2-form. In other words, you’ve been working with differential forms your whole life, even if you didn’t realize it! More generally, integrands on an \(n\)-dimensional space are always \(n\)-forms, since we need to “plug in” \(n\) orthogonal vectors representing the local volume. For now, however, looking at surfaces (i.e., 2-manifolds) will give us all the intuition we need.
Integration on Surfaces
If you think back to our discussion of the Hodge star, you’ll remember the volume form
\[ \omega = \sqrt{\mathrm{det}(g)} dx^1 \wedge dx^2, \]
which measures the area of little parallelograms on our surface. The factor \(\sqrt{\mathrm{det}(g)}\) reminds us that we can't simply measure the volume in the domain \(M\) — we also have to take into account any "stretching" induced by the map \(f: M \rightarrow \mathbb{R}^3\). Of course, when we integrate a function on a surface, we should also take this stretching into account. For instance, to integrate a function \(\phi: M \rightarrow \mathbb{R}\), we would write
\[ \int_\Omega \phi \omega = \int_\Omega \phi \sqrt{\mathrm{det}(g)}\ dx^1 \wedge dx^2. \]
In the case of a conformal parameterization things become even simpler — since \(\sqrt{\mathrm{det}(g)} = a\) we have just
\[ \int_\Omega \phi a\ dx^1 \wedge dx^2, \]
where \(a: M \rightarrow \mathbb{R}\) is the scaling factor. In other words, we scale the value of \(\phi\) up or down depending on the amount by which the surface locally “inflates” or “deflates.” In fact, this whole story gives a nice geometric interpretation to good old-fashioned integrals: you can imagine that \(\int_\Omega \phi\ dA\) represents the area of some suitably deformed version of the initially planar region \(\Omega\).
Stokes’ Theorem
The main reason for studying integration on manifolds is to take advantage of the world's most powerful tool: Stokes' theorem. Without further ado, Stokes' theorem says that
\[ \int_\Omega d\alpha = \int_{\partial\Omega} \alpha, \]
where \(\alpha\) is any \(n-1\)-form on an \(n\)-dimensional domain \(\Omega\). In other words, integrating a differential form over the boundary of a manifold is the same as integrating its derivative over the entire domain.
If this trick sounds familiar to you, it’s probably because you’ve seen it time and again in different contexts and under different names: the divergence theorem, Green’s theorem, the fundamental theorem of calculus, Cauchy’s integral formula, etc. Picking apart these special cases will really help us understand the more general meaning of Stokes’ theorem.
Divergence Theorem
Let’s start with the divergence theorem from vector calculus, which says that
\[ \int_\Omega \nabla \cdot X dA = \int_{\partial\Omega} N \cdot X d\ell, \]
where \(X\) is a vector field on \(\Omega\) and \(N\) represents the (outward-pointing) unit normals on the boundary of \(\Omega\). A better name for this theorem might have been the “what goes in must come out theorem”, because if you think about \(X\) as the flow of water throughout the domain \(\Omega\) then it’s clear that the amount of water being pumped into \(\Omega\) (via pipes in the ground) must be the same as the amount flowing out of its boundary at any moment in time:
Let's try writing this theorem using exterior calculus. First, remember that we can write the divergence of \(X\) as \(\nabla \cdot X = \star d \star X^\flat\). It's a bit harder to see how to write the right-hand side of the divergence theorem, but think about what integration does here: it takes tangents to the boundary and sticks them into a 1-form. For instance, \(\int_{\partial\Omega} X^\flat\) adds up the tangential components of \(X\). To get the normal component we could rotate \(X^\flat\) by a quarter turn, which conveniently enough is achieved by hitting it with the Hodge star. Overall we get
\[ \int_\Omega d \star X^\flat = \int_{\partial\Omega} \star X^\flat, \]
which, as promised, is just a special case of Stokes’ theorem. Alternatively, we can use Stokes’ theorem to provide a more geometric interpretation of the divergence operator itself: when integrated over any region \(\Omega\) — no matter how small — the divergence operator gives the total flux through the region boundary. In the discrete case we’ll see that this boundary flux interpretation is the only notion of divergence — in other words, there’s no concept of divergence at a single point.
By the way, why does \(d \star X^\flat\) appear on the left-hand side instead of \(\star d \star X^\flat\)? The reason is that \(\star d \star X^\flat\) is a 0-form, so we have to hit it with another Hodge star to turn it into an object that measures areas (i.e., a 2-form). Applying this transformation is no different from appending \(dA\) to \(\nabla \cdot X\) — we’re specifying how volume should be measured on our domain.
Fundamental Theorem of Calculus
The fundamental theorem of calculus is in fact so fundamental that you may not even remember what it is. It basically says that for a real-valued function \(\phi: \mathbb{R} \rightarrow \mathbb{R}\) on the real line,
\[ \int_a^b \frac{\partial\phi}{\partial x}\, dx = \phi(b) - \phi(a). \]
In other words, the total change over an interval \([a,b]\) is (as you might expect) how much you end up with minus how much you started with. But soft, behold! All we've done is written Stokes' theorem once again:
\[ \int_{[a,b]} d\phi = \int_{\partial[a,b]} \phi, \]
since the boundary of the interval \([a,b]\) consists only of the two endpoints \(a\) and \(b\).
Hopefully these two examples give you a good feel for what Stokes’ theorem says. In the end, it reads almost like a Zen kōan: what happens on the outside is purely a function of the change within. (Perhaps it is Stokes’ that deserves the name, “fundamental theorem of calculus!”)
October 31, 2012 | Posted in: Notes | Comments Closed
A Quick and Dirty Introduction to Exterior Calculus — Part IV: Differential Operators
Originally we set out to develop exterior calculus. The objects we’ve looked at so far — \(k\)-forms, the wedge product \(\wedge\) and the Hodge star \(\star\) — actually describe a more general structure called an exterior algebra. To turn our algebra into a calculus, we also need to know how quantities change, as well as how to measure quantities. In other words, we need some tools for differentiation and integration. Let’s start with differentiation.
In our discussion of surfaces we briefly looked at the differential \(df\) of a surface \(f: M \rightarrow \mathbb{R}^3\), which tells us something about the way tangent vectors get “stretched out” as we move from the domain \(M\) to a curved surface sitting in \(\mathbb{R}^3\). More generally \(d\) is called the exterior derivative and is responsible for building up many of the differential operators in exterior calculus. The basic idea is that \(d\) tells us how quickly a \(k\)-form changes along every possible direction. But how exactly is it defined? So far we’ve seen only a high-level geometric description.
Div, Grad, and Curl
Before jumping into the exterior derivative, it’s worth reviewing what the basic vector derivatives \(\mathrm{div}\), \(\mathrm{grad}\), and \(\mathrm{curl}\) do, and more importantly, what they look like. The key player here is the operator \(\nabla\) (pronounced “nabla”) which can be expressed in coordinates as the vector of all partial derivatives:
\[ \nabla := \left( \frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^n} \right). \]
For instance, applying \(\nabla\) to a scalar function \(\phi: \mathbb{R}^n \rightarrow \mathbb{R}\) yields the gradient
\[ \nabla\phi = \left( \frac{\partial \phi}{\partial x^1}, \ldots, \frac{\partial \phi}{\partial x^n} \right), \]
which can be visualized as the direction of steepest ascent on some terrain:
We can also apply \(\nabla\) to a vector field \(X\) in two different ways. The dot product gives us the divergence
\[ \nabla \cdot X = \frac{\partial X^1}{\partial x^1} + \cdots + \frac{\partial X^n}{\partial x^n} \]
which measures how quickly the vector field is “spreading out”, and on \(\mathbb{R}^3\) the cross product gives us the curl
\[ \nabla \times X = \left( \frac{\partial X_3}{\partial x^2} - \frac{\partial X_2}{\partial x^3}, \frac{\partial X_1}{\partial x^3} - \frac{\partial X_3}{\partial x^1}, \frac{\partial X_2}{\partial x^1} - \frac{\partial X_1}{\partial x^2} \right), \]
which indicates how much a vector field is “spinning around.” For instance, here’s a pair of vector fields with a lot of divergence and a lot of curl, respectively:
(Note that in this case one field is just a 90-degree rotation of the other!) On a typical day it’s a lot more useful to think of \(\mathrm{div}\), \(\mathrm{grad}\) and \(\mathrm{curl}\) in terms of these kinds of pictures rather than the ugly expressions above.
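If you'd like to poke at these operators concretely, sympy's vector module will happily grind out the coordinate expressions; the fields below are our own made-up examples:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl

# Example fields of our own choosing, just to see the operators in action.
C = CoordSys3D('C')                 # Cartesian coordinates, base vectors i, j, k

phi = C.x**2 * C.y + C.z            # a scalar "terrain"
X = -C.y * C.i + C.x * C.j          # a "spinning" planar field

print(gradient(phi))                # 2*C.x*C.y*C.i + C.x**2*C.j + C.k
print(divergence(X))                # 0: pure rotation doesn't spread out
print(curl(X))                      # 2*C.k: constant spin about the z-axis
```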
Think Differential
Not surprisingly, we can express similar notions using exterior calculus. However, these notions will be a bit easier to generalize (for instance, what does “curl” mean for a vector field in \(\mathbb{R}^4\), where no cross product is defined?). Let’s first take a look at the exterior derivative of 0-forms (i.e., functions), which is often just called the differential. To keep things simple, we’ll start with real-valued functions \(\phi: \mathbb{R}^n \rightarrow \mathbb{R}\). In coordinates, the differential is defined as
\[ d\phi := \frac{\partial \phi}{\partial x^1} dx^1 + \cdots + \frac{\partial \phi}{\partial x^n} dx^n. \]
It’s important to note that the terms \(\frac{\partial \phi}{\partial x^i}\) actually correspond to partial derivatives of our function \(\phi\), whereas the terms \(dx^i\) simply denote an orthonormal basis for \(\mathbb{R}^n\). In other words, you can think of \(d\phi\) as just a list of all the partial derivatives of \(\phi\). Of course, this object looks a lot like the gradient \(\nabla \phi\) we saw just a moment ago. And indeed the two are closely related, except for the fact that \(\nabla \phi\) is a vector field and \(d\phi\) is a 1-form. More precisely,
\[ \nabla \phi = (d\phi)^\sharp. \]
Directional Derivatives
Another way to investigate the behavior of the exterior derivative is to see what happens when we stick a vector \(u\) into the 1-form \(df\). In coordinates we get something that looks like a dot product between the gradient of \(f\) and the vector \(u\):
\[ df(u) = \frac{\partial f}{\partial x^1} u^1 + \cdots + \frac{\partial f}{\partial x^n} u^n. \]
For instance, in \(\mathbb{R}^2\) we could stick in the unit vector \(u = (1,0)\) to get the partial derivative \(\frac{\partial f}{\partial x^1}\) along the first coordinate axis:
(Compare this picture to the picture of the gradient we saw above.) In general, \(df(u)\) represents the directional derivative of \(f\) along the direction \(u\). In other words, it tells us how quickly \(f\) changes if we take a short walk in the direction \(u\). Returning again to vector calculus notation, we have
\[ df(u) = u \cdot \nabla f. \]
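Here's a tiny numeric illustration of that statement; the function \(f\), the point \(p\), and the direction \(u\) are all assumed examples:

```python
import numpy as np

# Numeric illustration: df(u) = u . grad(f) matches a finite difference
# along u.
f = lambda p: p[0]**2 * p[1]                     # f(x, y) = x^2 y
grad_f = lambda p: np.array([2*p[0]*p[1], p[0]**2])

p = np.array([1.0, 2.0])
u = np.array([1.0, 1.0]) / np.sqrt(2.0)          # a unit direction
h = 1e-6

df_u = grad_f(p) @ u                             # df(u) = u . grad f
walk = (f(p + h*u) - f(p)) / h                   # short walk in direction u

print(df_u, walk)   # both approximately 3.5355
```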
Properties of the Exterior Derivative
How do derivatives of arbitrary \(k\)-forms behave? For one thing, we expect \(d\) to be linear — after all, a derivative is just the limit of a difference, and differences are certainly linear! What about the derivative of a wedge of two forms? Harkening back to good old-fashioned calculus, here’s a picture that explains the typical product rule \(\frac{\partial}{\partial x}(f(x)g(x)) = f'(x)g(x) + f(x)g'(x) \):
The dark region represents the value of \(fg\) at \(x\); the light blue region represents the change in this value as we move \(x\) some small distance \(h\). As \(h\) gets smaller and smaller, the contribution of the upper-right quadrant becomes negligible and we can write the derivative as the change in \(f\) times \(g\) plus the change in \(g\) times \(f\). (Can you make this argument more rigorous?) Since a \(k\)-form also measures a (signed) volume, this intuition also carries over to the exterior derivative of a wedge product. In particular, if \(\alpha\) is a \(k\)-form then \(d\) obeys the rule
\[ d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^{k}\alpha \wedge d\beta, \]
which says that the rate of change of the overall volume can be expressed in terms of changes in the constituent volumes, exactly as in the picture above.
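As a concrete (hand-picked) example on \(\mathbb{R}^3\), take the 1-forms \(\alpha = x^1\, dx^2\) and \(\beta = x^2\, dx^3\); both sides of the rule agree:

\[ \begin{array}{rcl}
d(\alpha \wedge \beta) &=& d(x^1 x^2\, dx^2 \wedge dx^3) \;=\; x^2\, dx^1 \wedge dx^2 \wedge dx^3, \\
d\alpha \wedge \beta + (-1)^1 \alpha \wedge d\beta &=& (dx^1 \wedge dx^2) \wedge (x^2\, dx^3) - (x^1\, dx^2) \wedge (dx^2 \wedge dx^3) \\
&=& x^2\, dx^1 \wedge dx^2 \wedge dx^3 - 0.
\end{array} \]

Note that the sign \((-1)^k\) never gets a chance to matter here: \(\alpha \wedge d\beta\) contains a repeated \(dx^2\), so it vanishes outright.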
Exterior Derivative of 1-Forms
To be a little more concrete, let’s see what happens when we differentiate a 1-form on \(\mathbb{R}^3\). Working things out in coordinates turns out to be a total mess, but in the end you may be pleasantly surprised with the simplicity of the outcome! (Later on we’ll see that these ideas can also be expressed quite nicely without coordinates using Stokes’ theorem, which paves the way to differentiation in the discrete setting.) Applying the linearity of \(d\), we have
\[ \begin{array}{rcl}
d\alpha &=& d(\alpha_1 dx^1 + \alpha_2 dx^2 + \alpha_3 dx^3) \\
&=& d(\alpha_1 dx^1) + d(\alpha_2 dx^2) + d(\alpha_3 dx^3).
\end{array} \]
Each term \(\alpha_j dx^j\) can really be thought of as a wedge product \(\alpha_j \wedge dx^j\) between a 0-form \(\alpha_j\) and the corresponding basis 1-form \(dx^j\). Applying the exterior derivative to one of these terms we get
\[ d(\alpha_j \wedge dx^j) = (d\alpha_j) \wedge dx^j + \alpha_j \wedge \underbrace{(ddx^j)}_{=0} = \frac{\partial \alpha_j}{\partial x^i} dx^i \wedge dx^j. \]
To keep things short we used the Einstein summation convention here, but let’s really write out all the terms:
\[ \begin{array}{rcccccc}
d\alpha &=& \frac{\partial \alpha_1}{\partial x^1} dx^1 \wedge dx^1 &+& \frac{\partial \alpha_1}{\partial x^2} dx^2 \wedge dx^1 &+& \frac{\partial \alpha_1}{\partial x^3} dx^3 \wedge dx^1 \\
&+& \frac{\partial \alpha_2}{\partial x^1} dx^1 \wedge dx^2 &+& \frac{\partial \alpha_2}{\partial x^2} dx^2 \wedge dx^2 &+& \frac{\partial \alpha_2}{\partial x^3} dx^3 \wedge dx^2 \\
&+& \frac{\partial \alpha_3}{\partial x^1} dx^1 \wedge dx^3 &+& \frac{\partial \alpha_3}{\partial x^2} dx^2 \wedge dx^3 &+& \frac{\partial \alpha_3}{\partial x^3} dx^3 \wedge dx^3.
\end{array} \]
Using the fact that \(\alpha \wedge \beta = -\beta \wedge \alpha\), we get a much simpler expression
\[ \begin{array}{rcl}
d\alpha &=& \left( \frac{\partial \alpha_3}{\partial x^2} - \frac{\partial \alpha_2}{\partial x^3} \right) dx^2 \wedge dx^3 \\
&+& \left( \frac{\partial \alpha_1}{\partial x^3} - \frac{\partial \alpha_3}{\partial x^1} \right) dx^3 \wedge dx^1 \\
&+& \left( \frac{\partial \alpha_2}{\partial x^1} - \frac{\partial \alpha_1}{\partial x^2} \right) dx^1 \wedge dx^2.
\end{array} \]
Does this expression look familiar? If you look again at our review of vector derivatives, you’ll recognize that \(d\alpha\) basically looks like the curl of \(\alpha^\sharp\), except that it’s expressed as a 2-form instead of a vector field. Also remember (from our discussion of Hodge duality) that a 2-form and a 1-form are not so different here — geometrically they both specify some direction in \(\mathbb{R}^3\). Therefore, we can express the curl of any vector field \(X\) as
\[ \nabla \times X = \left( \star d X^\flat \right)^\sharp. \]
It’s worth stepping through the sequence of operations here to check that everything makes sense: \(\flat\) converts the vector field \(X\) into a 1-form \(X^\flat\); \(d\) computes something that looks like the curl, but expressed as a 2-form \(dX^\flat\); \(\star\) turns this 2-form into a 1-form \(\star d X^\flat\); and finally \(\sharp\) converts this 1-form back into the vector field \(\left( \star d X^\flat \right)^\sharp\). The take-home message here, though, is that the exterior derivative of a 1-form looks like the curl of a vector field.
So far we know how to express the gradient and the curl using \(d\). What about our other favorite vector derivative, the divergence? Instead of grinding through another tedious derivation, let’s make a simple geometric observation: in \(\mathbb{R}^2\) at least, we can determine the divergence of a vector field by rotating it by 90 degrees and computing its curl (consider the example we saw earlier). Moreover, in \(\mathbb{R}^2\) the Hodge star \(\star\) represents a rotation by 90 degrees, since it identifies any line with the direction orthogonal to that line:
Therefore, we might suspect that divergence can be computed by first applying the Hodge star, then applying the exterior derivative:
\[ \nabla \cdot X = \star d \star X^\flat. \]
The leftmost Hodge star accounts for the fact that \(d \star X^\flat\) is an \(n\)-form instead of a 0-form — in vector calculus divergence is viewed as a scalar quantity. Does this definition really work? Let’s give it a try in coordinates on \(\mathbb{R}^3\). First, we have
\[ \begin{array}{rcl}
\star X^\flat &=& \star( X_1 dx^1 + X_2 dx^2 + X_3 dx^3 ) \\
&=& X_1 dx^2 \wedge dx^3 + X_2 dx^3 \wedge dx^1 + X_3 dx^1 \wedge dx^2.
\end{array} \]
Differentiating we get
\[ \begin{array}{rcl}
d \star X^\flat &=& \frac{\partial X_1}{\partial x^1} dx^1 \wedge dx^2 \wedge dx^3 \\
&+& \frac{\partial X_2}{\partial x^2} dx^2 \wedge dx^3 \wedge dx^1 \\
&+& \frac{\partial X_3}{\partial x^3} dx^3 \wedge dx^1 \wedge dx^2,
\end{array} \]
but of course we can reorder these wedge products to get simply
\[ d \star X^\flat = \left( \frac{\partial X_1}{\partial x^1} + \frac{\partial X_2}{\partial x^2} + \frac{\partial X_3}{\partial x^3} \right) dx^1 \wedge dx^2 \wedge dx^3. \]
A final application of the Hodge star gives us the divergence
\[ \star d \star X^\flat = \frac{\partial X_1}{\partial x^1} + \frac{\partial X_2}{\partial x^2} + \frac{\partial X_3}{\partial x^3} \]
as desired.
In summary, for any scalar field \(\phi\) and vector field \(X\) we have
\[ \begin{array}{rcl}
\nabla \phi &=& (d\phi)^\sharp \\
\nabla \times X &=& \left( \star d X^\flat \right)^\sharp \\
\nabla \cdot X &=& \star d \star X^\flat
\end{array} \]
One cute thing to notice here is that (in \(\mathbb{R}^3\)) \(\mathrm{grad}\), \(\mathrm{curl}\), and \(\mathrm{div}\) are more or less just \(d\) applied to a 0-, 1-, and 2-form, respectively.
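If you're the type who trusts a machine more than a derivation, here's a sketch in sympy that re-derives the curl row of this table from nothing but the rule \(d(\alpha_j dx^j) = \frac{\partial \alpha_j}{\partial x^i} dx^i \wedge dx^j\) plus antisymmetry; the example field and helper names are our own inventions:

```python
import sympy as sp
from itertools import product

x = sp.symbols('x1:4')   # coordinates x1, x2, x3 on R^3

def d_of_one_form(a):
    """Coefficients of d(a1 dx1 + a2 dx2 + a3 dx3) in the basis
    (dx2^dx3, dx3^dx1, dx1^dx2)."""
    c = [[sp.S.Zero] * 3 for _ in range(3)]   # c[i][j] multiplies dx^{i+1} ^ dx^{j+1}
    for i, j in product(range(3), repeat=2):
        c[i][j] += sp.diff(a[j], x[i])        # d(a_j dx^j) = da_j ^ dx^j
    # fold using dx^j ^ dx^i = -dx^i ^ dx^j
    return sp.Matrix([c[1][2] - c[2][1],      # dx2 ^ dx3
                      c[2][0] - c[0][2],      # dx3 ^ dx1
                      c[0][1] - c[1][0]])     # dx1 ^ dx2

X = sp.Matrix([x[0]*x[1], sp.sin(x[2]), x[0] + x[2]**2])   # assumed example field

curl_X = sp.Matrix([sp.diff(X[2], x[1]) - sp.diff(X[1], x[2]),
                    sp.diff(X[0], x[2]) - sp.diff(X[2], x[0]),
                    sp.diff(X[1], x[0]) - sp.diff(X[0], x[1])])

# On flat R^3, flat and sharp leave components unchanged and the Hodge star
# just relabels (dx2^dx3, dx3^dx1, dx1^dx2) as (dx1, dx2, dx3), so:
print(sp.simplify(d_of_one_form(X) - curl_X))   # matrix of zeros
```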
The Laplacian
Another key differential operator from vector calculus is the scalar Laplacian which (confusingly!) is often denoted by \(\Delta\) or \(\nabla^2\), and is defined as
\[ \Delta := \nabla \cdot \nabla, \]
i.e., the divergence of the gradient. Although the Laplacian may seem like just yet another in a long list of derivatives, it deserves your utmost respect: the Laplacian is central to the most fundamental of physical laws (any diffusion process and all forms of wave propagation, including the Schrödinger equation); its eigenvalues capture almost everything there is to know about a given piece of geometry (can you hear the shape of a drum?). Heavy tomes and entire lives have been devoted to the Laplacian, and in the discrete setting we’ll see that this one simple operator can be applied to a diverse array of tasks (surface parameterization, surface smoothing, vector field design and decomposition, distance computation, fluid simulation… you name it, we got it!).
Fortunately, now that we know how to write \(\mathrm{div}\), \(\mathrm{grad}\) and \(\mathrm{curl}\) using exterior calculus, expressing the scalar Laplacian is straightforward: \(\Delta = \star d \star d\). More generally, the \(k\)-form Laplacian is given by
\[ \Delta := \star d \star d + d \star d \star. \]
The name “Laplace-Beltrami” is used merely to indicate that the domain may have some amount of curvature (encapsulated by the Hodge star). Some people like to define the operator \(\delta := \star d \star\), called the codifferential, and write the Laplacian as \(\Delta = \delta d + d \delta\).
One question you might ask is: why is the Laplacian for 0-forms different from the general \(k\)-form Laplacian? Actually, it’s not — consider what happens when we apply the term \(d \star d \star\) to a 0-form \(\phi\): \(\star \phi\) is an \(n\)-form, and so \(d \star \phi\) must be an \((n+1)\)-form. But there are no \((n+1)\)-forms on an \(n\)-dimensional space! So this term is often omitted when writing the scalar Laplacian.
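A quick symbolic check that \(\star d \star d\) really is the familiar sum of second derivatives on flat \(\mathbb{R}^3\); the field \(\phi\) below is an assumed example, and on this flat space the two Hodge stars amount to pure relabeling:

```python
import sympy as sp

x = sp.symbols('x1:4')
phi = sp.exp(x[0]) * sp.sin(x[1]) * x[2]   # assumed example field

grad = [sp.diff(phi, xi) for xi in x]      # components of d(phi)
# star relabels (dx1, dx2, dx3) as (dx2^dx3, dx3^dx1, dx1^dx2) with the same
# components; d of that 2-form gives (sum of partials) dx1^dx2^dx3, and the
# outer star strips off the volume form, leaving:
laplacian = sum(sp.diff(g, xi) for g, xi in zip(grad, x))

print(sp.simplify(laplacian - sum(sp.diff(phi, xi, 2) for xi in x)))   # 0
```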
A Quick and Dirty Introduction to Exterior Calculus — Part III: Hodge Duality
Previously we saw that a \(k\)-form measures the (signed) projected volume of a \(k\)-dimensional parallelepiped. For instance, a 2-form measures the area of a parallelogram projected onto some plane, as depicted above. But here’s a nice observation: a plane in \(\mathbb{R}^3\) can be described either by a pair of basis directions \((\alpha,\beta)\), or by a normal direction \(\gamma\). So rather than measuring projected area, we could instead measure how well the normal of a parallelogram \((u,v)\) lines up with the normal of our plane. In other words, we could look for a 1-form \(\gamma\) such that
\[ \gamma(u \times v) = \alpha \wedge \beta(u,v). \]
This observation captures the idea behind Hodge duality: a \(k\)-dimensional volume in an \(n\)-dimensional space can be specified either by \(k\) directions or by a complementary set of \((n-k)\) directions. There should therefore be some kind of natural correspondence between \(k\)-forms and \((n-k)\)-forms.
The Hodge Star
Let’s investigate this idea further by constructing an explicit basis for the space of 0-forms, 1-forms, 2-forms, etc. — to keep things manageable we’ll work with \(\mathbb{R}^3\) and its standard coordinate system \((x^1, x^2, x^3)\). 0-forms are easy: any 0-form can be thought of as some function times the constant 0-form, which we’ll denote “\(1\).” We’ve already seen the 1-form basis \(dx^1, dx^2, dx^3\), which looks like the standard orthonormal basis of a vector space:
What about 2-forms? Well, consider that any 2-form can be expressed as the wedge of two 1-forms:
\[ \alpha \wedge \beta = (\alpha_i dx^i) \wedge (\beta_j dx^j) = \alpha_i \beta_j dx^i \wedge dx^j. \]
In other words, any 2-form looks like some linear combination of the basis 2-forms \(dx^i \wedge dx^j\). How many of these bases are there? Initially it looks like there are a bunch of possibilities:
\[ \begin{array}{ccc}
dx^1 \wedge dx^1 & dx^1 \wedge dx^2 & dx^1 \wedge dx^3 \\
dx^2 \wedge dx^1 & dx^2 \wedge dx^2 & dx^2 \wedge dx^3 \\
dx^3 \wedge dx^1 & dx^3 \wedge dx^2 & dx^3 \wedge dx^3
\end{array} \]
But of course, not all of these guys are distinct: remember that the wedge product is antisymmetric (\(\alpha \wedge \beta = -\beta \wedge \alpha\)), which has the important consequence \(\alpha \wedge \alpha = 0\). So really our table looks more like this:
\[ \begin{array}{ccc}
0 & dx^1 \wedge dx^2 & -dx^3 \wedge dx^1 \\
-dx^1 \wedge dx^2 & 0 & dx^2 \wedge dx^3 \\
dx^3 \wedge dx^1 & -dx^2 \wedge dx^3 & 0
\end{array} \]
and we’re left with only three distinct bases: \(dx^2 \wedge dx^3\), \(dx^3 \wedge dx^1\), and \(dx^1 \wedge dx^2\). Geometrically all we’ve said is that there are three linearly-independent “planes” in \(\mathbb{R}^3\):
How about 3-form bases? We certainly have at least one:
\[ dx^1 \wedge dx^2 \wedge dx^3. \]
Are there any others? Again the antisymmetry of \(\wedge\) comes into play: many potential bases are just permutations of this first one:
\[ dx^2 \wedge dx^3 \wedge dx^1 = -dx^2 \wedge dx^1 \wedge dx^3 = dx^1 \wedge dx^2 \wedge dx^3, \]
and the rest vanish due to the appearance of repeated 1-forms:
\[ dx^2 \wedge dx^1 \wedge dx^2 = -dx^2 \wedge dx^2 \wedge dx^1 = 0 \wedge dx^1 = 0. \]
In general there is only one basis \(n\)-form \(dx^1 \wedge \cdots \wedge dx^n\), which measures the usual Euclidean volume of a parallelepiped:
Finally, what about 4-forms on \(\mathbb{R}^3\)? At this point it’s probably pretty easy to see that there are none, since we’d need to pick four distinct 1-form bases from a collection of only three. Geometrically: there are no four-dimensional volumes contained in \(\mathbb{R}^3\)! (Or volumes of any greater dimension, for that matter.) The complete list of \(k\)-form bases on \(\mathbb{R}^3\) is then
• 0-form bases: 1
• 1-form bases: \(dx^1\), \(dx^2\), \(dx^3\)
• 2-form bases: \(dx^2 \wedge dx^3\), \(dx^3 \wedge dx^1\), \(dx^1 \wedge dx^2\)
• 3-form bases: \(dx^1 \wedge dx^2 \wedge dx^3\),
which means the number of bases is 1, 3, 3, 1. In fact you may see a more general pattern here: the number of \(k\)-form bases on an \(n\)-dimensional space is given by the binomial coefficient
\[ \left( \begin{array}{c} n \\ k \end{array} \right) = \frac{n!}{k!(n-k)!} \]
(i.e., “\(n\) choose \(k\)”), since we want to pick \(k\) distinct 1-form bases and don’t care about the order. An important identity here is
\[ \left( \begin{array}{c} n \\ k \end{array} \right) = \left( \begin{array}{c} n \\ n-k \end{array} \right), \]
which, as anticipated, means that we have a one-to-one relationship between \(k\)-forms and \((n-k)\)-forms. In particular, we can identify any \(k\)-form with its complement. For example, on \(\mathbb{R}^3\) we have
\[ \begin{array}{rcl}
\star\ 1 &=& dx^1 \wedge dx^2 \wedge dx^3 \\
\star\ dx^1 &=& dx^2 \wedge dx^3 \\
\star\ dx^2 &=& dx^3 \wedge dx^1 \\
\star\ dx^3 &=& dx^1 \wedge dx^2 \\
\star\ dx^1 \wedge dx^2 &=& dx^3 \\
\star\ dx^2 \wedge dx^3 &=& dx^1 \\
\star\ dx^3 \wedge dx^1 &=& dx^2 \\
\star\ dx^1 \wedge dx^2 \wedge dx^3 &=& 1.
\end{array} \]
The map \(\star\) (pronounced “star”) is called the Hodge star and captures this idea that planes can be identified with their normals and so forth. More generally, on any flat space we have
\[ \star\ dx^{i_1} \wedge dx^{i_2} \wedge \cdots \wedge dx^{i_k} = dx^{i_{k+1}} \wedge dx^{i_{k+2}} \wedge \cdots \wedge dx^{i_n}, \]
where \((i_1, i_2, \ldots, i_n)\) is any even permutation of \((1, 2, \ldots, n)\).
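If you want to automate this bookkeeping, the sign is just the parity of the permutation; here's a small Python sketch (the helper names are our own) that reproduces the table above, and also the 1, 3, 3, 1 count from before:

```python
from math import comb

def parity(perm):
    """+1 for an even permutation of (1, ..., n), -1 for an odd one."""
    inversions = sum(1 for a in range(len(perm))
                       for b in range(a + 1, len(perm))
                       if perm[a] > perm[b])
    return -1 if inversions % 2 else 1

def hodge_star(indices, n):
    """Basis k-form indices -> (sign, complementary (n-k)-form indices)."""
    rest = tuple(i for i in range(1, n + 1) if i not in indices)
    return parity(indices + rest), rest

print(hodge_star((1,), 3))     # (1, (2, 3)):  star dx1 = dx2 ^ dx3
print(hodge_star((2,), 3))     # (-1, (1, 3)): star dx2 = -dx1 ^ dx3 = dx3 ^ dx1
print(hodge_star((1, 2), 3))   # (1, (3,)):    star dx1 ^ dx2 = dx3

# And the counting argument from before: comb(n, k) basis k-forms on R^n.
print([comb(3, k) for k in range(4)])   # [1, 3, 3, 1]
```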
The Volume Form
So far we’ve been talking about measuring volumes in flat spaces like \(\mathbb{R}^n\). But how do we take measurements in a curved space? Let’s think about our usual example of a surface \(f: \mathbb{R}^2 \supset M \rightarrow \mathbb{R}^3\). If we specify a region on our surface via a pair of unit orthogonal vectors \(u, v \in \mathbb{R}^2\), it’s clear that we don’t want the area \(dx^1 \wedge dx^2(u,v)=1\) since that just gives us the area in the plane. Instead, we want to know what a unit area looks like after it’s been “stretched-out” by the map \(f\). In particular, we said that the length of a vector \(df(u)\) can be expressed in terms of the metric \(g\):
\[ |df(u)| = \sqrt{df(u) \cdot df(u)} = \sqrt{g(u,u)}. \]
So the area we’re really interested in is the product of the lengths \(|df(u)||df(v)| = \sqrt{g(u,u)g(v,v)}\). When \(u\) and \(v\) are orthonormal the quantity \(\det(g) := g(u,u)g(v,v)-g(u,v)^2\) is called the determinant of the metric, and can be used to define a 2-form \(\sqrt{\det(g)} dx^1 \wedge dx^2\) that measures any area on our surface. More generally, the \(n\)-form
\[ \omega := \sqrt{\det(g)} dx^1 \wedge \cdots \wedge dx^n \]
is called the volume form, and will play a key role when we talk about integration.
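For a concrete example, here's sympy computing the induced metric and volume form for a patch of the unit sphere (a parameterization of our own choosing); the answer is the familiar \(\sin\theta\, d\theta \wedge d\phi\) area element:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
f = sp.Matrix([sp.sin(theta) * sp.cos(phi),
               sp.sin(theta) * sp.sin(phi),
               sp.cos(theta)])               # unit-sphere patch

J = f.jacobian([theta, phi])    # the differential df in coordinates
g = sp.simplify(J.T * J)        # metric: g(u, v) = df(u) . df(v)

print(g)                        # Matrix([[1, 0], [0, sin(theta)**2]])
print(sp.sqrt(g.det()))         # sqrt(sin(theta)**2), i.e. sin(theta)
                                # on 0 < theta < pi: the usual area element
```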
On curved spaces, we’d also like the Hodge star to capture the fact that volumes have been stretched out. For instance, it makes a certain amount of sense to identify the constant function \(1\) with the volume form \(\omega\), since \(\omega\) really represents unit volume on the curved space:
\[ \star 1 = \omega \]
The Inner Product on \(k\)-Forms
More generally we’ll ask that any \(n\)-form constructed from a pair of \(k\)-forms \(\alpha\) and \(\beta\) satisfies
\[ \alpha \wedge \star \beta = \langle\langle \alpha, \beta \rangle\rangle \omega, \]
where \(\langle\langle \alpha, \beta \rangle\rangle = \sum_i \alpha_i \beta_i\) is the inner product on \(k\)-forms. In fact, some authors use this relationship as the definition of the wedge product — in other words, they’ll start with something like, “the wedge product is the unique binary operation on \(k\)-forms such that \(\alpha \wedge \star \beta = \langle\langle \alpha, \beta \rangle\rangle \omega\),” and from there derive all the properties we’ve established above. This treatment is a bit abstract, and makes it far too easy to forget that the wedge product has an extraordinarily concrete geometric meaning. (It’s certainly not the way Hermann Grassmann thought about it when he invented exterior algebra!). In practice, however, this identity is quite useful. For instance, if \(u\) and \(v\) are vectors in \(\mathbb{R}^3\), then we can write
\[ u \cdot v = \star\left(u^\flat \wedge \star v^\flat\right), \]
i.e., on a flat space we can express the usual Euclidean inner product via the wedge product. Is it clear geometrically that this identity is true? Think about what it says: the Hodge star turns \(v\) into a plane with \(v\) as a normal. We then build a volume by extruding this plane along the direction \(u\). If \(u\) and \(v\) are nearly parallel the volume will be fairly large; if they’re nearly orthogonal the volume will be quite shallow. (But to be sure we really got it right, you should try verifying this identity in coordinates!) Similarly, we can express the Euclidean cross product as just
\[ u \times v = \star(u^\flat \wedge v^\flat)^\sharp, \]
i.e., we can create a plane with normal \(u \times v\) by wedging together the two 1-forms \(u^\flat\) and \(v^\flat\). (Again, working this fact out in coordinates may help soothe your paranoia.)
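Here's that coordinate check done numerically (our own vectors and bookkeeping; the dot product identity collapses on flat \(\mathbb{R}^3\) because only the "diagonal" wedge terms survive):

```python
import numpy as np

# Numeric check of u x v = (star(u_flat ^ v_flat))^sharp on flat R^3, where
# flat/sharp leave components alone and star relabels the 2-form basis
# (dx2^dx3, dx3^dx1, dx1^dx2) as (dx1, dx2, dx3). Vectors are arbitrary.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)

# Components of the 2-form u_flat ^ v_flat in the basis above:
wedge = np.array([u[1]*v[2] - u[2]*v[1],
                  u[2]*v[0] - u[0]*v[2],
                  u[0]*v[1] - u[1]*v[0]])

print(np.allclose(wedge, np.cross(u, v)))   # True
```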
BITS Pilani
• Courses for M. Sc. (Hons.) programme
CHEM F110 : Chemistry Laboratory
This laboratory course consists of experiments based on fundamental principles and techniques of chemistry, emphasizing physical-chemical measurements, quantitative & qualitative analysis, and preparations.
CHEM F111 : General Chemistry
CHEM F211 : Physical Chemistry I
Kinetic-molecular theory of gases; perfect gas; pressure and temperature; Maxwell distribution; collisions, effusion, mean free path; Boltzmann distribution law and heat capacities; first law of thermodynamics; p-V work, internal energy, enthalpy; Joule-Thomson experiment; second law; heat engines, cycles; entropy; thermodynamic temperature scale; material equilibrium; Gibbs energy; chemical potential; phase equilibrium; reaction equilibrium; standard states, enthalpies; temperature dependence of reaction heats; third law; estimation of thermodynamic properties; perfect gas reaction equilibrium; temperature dependence; one component phase equilibrium, Clapeyron equation; real gases, critical state, corresponding states; solutions, partial molar quantities, ideal and non-ideal solutions, activity coefficients, Debye-Hückel theory; standard state properties of solution components; reaction equilibrium in non-ideal solutions, weak acids-buffers, coupled reactions; multi-component phase equilibrium: colligative properties, two and three component systems, solubility; electrochemical systems: thermodynamics of electrochemical systems and galvanic cells, standard electrode potentials, concentration cells, liquid junction, ion selective electrodes, double layer, dipole moments and polarizations, applications in biology, concept of overvoltage.
CHEM F212 : Organic Chemistry I
Basic terminology and representation of organic reactions; thermodynamics and kinetics of reactions; reactive intermediates (carbocations, carbanions, free radicals, nitrenes, carbenes); aromatic chemistry; properties, preparation and reactions of alkyl halides, alcohols, ethers, amines and nitrocompounds; carbonyl compounds; carboxylic acids and derivatives; carbohydrates.
CHEM F213 : Physical Chemistry II
Origin of quantum theory: black body radiation, line spectra, photoelectric effect; wave particle duality; wave equation: normal modes, superposition; postulates of quantum mechanics, time dependence, Hermitian operators, commutator; Schrödinger equation, operators, observables, solution for particle in a box, normalization, variance, momentum; harmonic oscillator, vibrational spectroscopy; rigid rotor, angular momentum, rotational spectroscopy; hydrogen atom: orbitals, effect of magnetic field; variation method, variation theorem, secular determinants; many electron atoms and molecules; Born-Oppenheimer approximation, VB theory, H2 in VB, Coulomb, exchange, overlap integrals, states of H2; antisymmetric wavefunctions, two electron systems, Slater determinants, HF method; SCF method; term symbols and spectra, configuration, state, Hund’s rules, atomic spectra, spin orbit interaction; basic MO theory, homonuclear diatomics N2, O2, SCF-LCAO-MO, molecular term symbols; HMO theory: π electron approximation, conjugated, cyclic systems.
CHEM F214 : Inorganic Chemistry I
Structure of molecules: VSEPR model; ionic crystal structure, structure of complex solids; concepts of inorganic chemistry: electronegativity, acid-base chemistry, chemistry of aqueous and non-aqueous solvents; descriptive chemistry of some elements: periodicity, chemistry of transition metals, halogens and noble gases; inorganic chains, rings, cages and clusters.
CHEM F223 : Colloid and Surface Chemistry
Surface phenomena; intermolecular forces relevant to colloidal systems; forces in colloidal systems; experimental and theoretical studies of the structure, dynamics and phase transitions in micelles, membranes, monolayers, bilayers, vesicles and related systems; technical applications.
CHEM F241 : Inorganic Chemistry II
Coordination Chemistry: Bonding - Valence Bond, Crystal Field, and Molecular Orbital theories; Complexes - nomenclature, isomerism, coordination numbers, structure, electronic spectra, magnetic properties, chelate effect; Reactions - nucleophilic substitution reactions, kinetics, mechanisms; descriptive chemistry of Lanthanides and Actinides; Organometallic Chemistry: structure and reaction of metal carbonyls, nitrosyls, dinitrogens, alkyls, carbenes, carbynes, carbides, alkenes, alkynes, and metallocenes; catalysis by organometallic compounds; stereochemically non-rigid molecules.
CHEM F242 : Chemical Experimentation I
This course is based on laboratory experiments in the field of organic chemistry. Qualitative organic analysis including preliminary examination, detection of functional groups, preparation and recrystallization of derivatives, separation and identification of two component mixtures using chemical and physical methods; quantitative analysis such as determination of the percentage/number of hydroxyl groups in organic compounds by acetylation method, estimation of amines/phenols using bromate-bromide solution/acetylation method, determination of iodine and saponification values of an oil sample; single step synthesis such as benzaldehyde to cinnamic acid; multistep synthesis such as phthalic anhydride, phthalimide, anthranilic acid; extraction of organic compounds from natural sources: isolation of caffeine from tea leaves, casein from milk, lactose from milk, lycopene from tomatoes, beta-carotene from carrots etc.; demonstration on the use of software such as Chem Draw, Chem-Sketch or ISI-Draw.
CHEM F243 : Organic Chemistry II
Introduction to stereoisomers; symmetry elements; configuration; chirality in molecules devoid of chiral centers (allenes, alkylidenecycloalkanes, spiranes, biphenyl); atropisomerism; stereochemistry of alkenes; conformation of acyclic molecules; conformations of cyclic molecules; reaction mechanisms; asymmetric synthesis; photochemistry and pericyclic reactions.
CHEM F244 : Physical Chemistry III
Symmetry: symmetry operations, point groups, reducible and irreducible representations, character tables, SALC, degeneracy, vibrational modes, IR-Raman activity identification; matrix evaluation of operators; stationary state perturbation theory; time dependent perturbation theory; virial and Hellmann-Feynman theorems; polyatomic molecules: SCF MO treatment, basis sets, population analysis, molecular electrostatic potentials, localized MOs; VB method; configuration interaction, Møller-Plesset perturbation theory; semi-empirical methods (all valence electron methods): CNDO, INDO, NDDO; Density Functional Theory: Hohenberg-Kohn theorems, Kohn-Sham self consistent field approach, exchange correlation functional; molecular mechanics.
CHEM F311 : Organic Chemistry III
Applications of important reagents and reactions in organic synthesis and the disconnection or synthon approach will be emphasized in this course. Basic principles of disconnection, order of events, chemoselectivity, regioselectivity etc. Common organic reagents, organometallic reagents, transition metal catalyzed reactions, introduction to retrosynthetic analysis using one group C-X and C-C disconnections, two group C-X and C-C disconnections, ring synthesis (saturated heterocycles), synthesis of heterocyclic compounds and complex molecules.
CHEM F312 : Physical Chemistry IV
Weak forces; surface chemistry: interphase region, thermodynamics, surface films on liquids, adsorption of gases on solids, colloids, micelles, and reverse micellar structures; transport processes: kinetics, thermal conductivity, viscosity, diffusion, sedimentation; electrical conductivity in metals and in solutions; reaction kinetics, measurement of rates; integrated rate laws; rate laws and equilibrium constants for elementary reactions; reaction mechanisms; temperature dependence of rate constants; rate constants and equilibrium constants; rate law in non ideal systems; uni-, bi- and trimolecular reactions, chain reactions, free-radical polymerizations; fast reactions; reactions in solutions; heterogeneous and enzyme catalysis; introduction to statistical thermodynamic theories of reaction rates; molecular reaction dynamics.
CHEM F313 : Instrumental Methods of Analysis
Principles and practice of modern instrumental methods of chemical analysis. Emphasis on spectroscopic techniques such as UV-Visible, infrared, NMR (1H, 13C and other elements, NOE, correlation spectroscopies), ESR, atomic absorption and emission, photoelectron, Mössbauer, and fluorescence. Other topics will include mass spectrometry, separation techniques, light scattering, electroanalytical methods, thermal analysis, and diffraction methods.
CHEM F323 : Biophysical Chemistry
The principles governing the molecular shapes, structures, structural transitions and dynamics in some important classes of biomolecules and biomolecular aggregates will be discussed. The topics will include: structure, conformational analysis, conformational transitions and equilibria in proteins and nucleic acids; protein folding; lipids - monolayers, bilayers and micelles; lipid-protein interactions in membranes.
CHEM F324 : Numerical Methods in Chemistry
Selected problems in chemistry from diverse areas such as chemical kinetics and dynamics, quantum mechanics, electronic structure of molecules, spectroscopy, molecular mechanics and conformational analysis, thermodynamics, and structure and properties of condensed phases will be discussed. The problems chosen will illustrate the application of various mathematical and numerical methods such as those used in the solution of systems of algebraic equations, differential equations, and minimization of multidimensional functions, Fourier transform and Monte Carlo methods.
CHEM F325 : Polymer Chemistry
Types of polymers; structures of polymers; molecular weight and molecular weight distributions; kinetics and mechanisms of major classes of polymerization reactions such as step growth, radical, ionic, heterogeneous, and copolymerization methods; polymer solutions: solubility, lattice model and the Flory-Huggins theory, solution viscosity; bulk properties: thermal and mechanical properties such as the melting and glass transitions, rubber elasticity, and viscous flow; polymerization reactions used in industry.
CHEM F326 : Solid State Chemistry
X-ray diffraction; point groups, space groups and crystal structure; descriptive crystal chemistry; factors which influence crystal structure; crystal defects and nonstoichiometry; solid solutions; interpretation of the phase diagrams; phase transitions; ionic conductivity and solid electrolytes; electronic properties and band theory; magnetic properties; optical properties; analysis of single crystal XRD data; preparation of solid state materials and the chemistry of device fabrication.
CHEM F327 : Electrochemistry : Fundamentals and Applications
Electrode Processes: Overpotential, Faradaic and non-faradaic processes, the ideal polarized electrode, capacitance and charge of an electrode, electrical double layer; primary and secondary cells, variables in electrochemical cells, factors affecting electrode reaction, cell resistance; Mass transfer: steady-state mass transfer, semiempirical treatment of the transient response, coupled reversible and irreversible reactions, reference electrodes; Kinetics of electrode reactions: Arrhenius equation and potential energy surfaces, equilibrium conditions, Tafel Plots; rate determining electron transfer, Nernstian, quasireversible, and irreversible multistep processes; Marcus Theory; mass transfer by migration and diffusion; basic potential step methods; Ultramicroelectrodes (UME); potential sweep methods; polarography and pulse voltammetry; controlled current techniques; impedance; bulk and flow electrolysis; electrochemical instrumentation; scanning probe techniques, STM, AFM, Scanning Electrochemical Microscopy, approach curves, imaging surface topography and reactivity, potentiometric tips, applications.
CHEM F328 : Supramolecular Chemistry
Non-covalent interactions and their role in “supermolecules” and organized polymolecular systems; concepts of molecular recognition, information and complementarity; molecular receptors: design principles, binding and recognition of neutral molecules and anionic substrates, coreceptor molecules and multiple recognition, linear recognition of molecular lengths by ditopic coreceptors, heterotopic coreceptors, amphiphilic receptors, large molecular cages; supramolecular dynamics; supramolecular catalysis: reactive macrocyclic cation and anion receptor molecules, cyclophane type receptor, metallocatalysis, catalysis of synthetic reactions, biomolecular and abiotic catalysis, heterogeneous catalysis; transport processes and carrier design: cation and anion carriers, electron, proton and light coupled transport processes, transfer via transmembrane channels; supramolecular assemblies: heterogeneous molecular recognition, supramolecular solids, molecular recognition at surfaces, molecular and supramolecular morphogenesis; supramolecular photochemistry: photonic devices, light conversion and energy transfer devices, photosensitive molecular receptors, photoinduced electron transfer and reactions, non-linear optical properties; supramolecular electrochemistry: electronic devices, molecular wires, polarized molecular wires, switchable molecular wires, molecular magnetic devices; ionic devices, tubular mesophases, ion-responsive monolayers, molecular protonics, ion and molecular sensors, switching devices and signals, photoswitching and electroswitching devices, switching of ionic and molecular processes, mechanical switching processes; self-assembly: inorganic architectures, organic structures by hydrogen bonding; helical metal complexes, supramolecular arrays of metal ions – racks, ladders and grids, molecular recognition directed self-assembly of organized phases; supramolecular polymers; ordered solid-state structures; supramolecular synthesis, assistance, replication; supramolecular chirality; supramolecular materials.
CHEM F329 : Analytical Chemistry
Data handling; sample preparation; unit operations; volumetric and gravimetric analysis; chromatography; solvent and solid phase extraction; absorption and emission techniques; potentiometry, voltammetry; trace metal separation and estimation in biological and environmental samples with emphasis on green chemistry, sensors; laboratory training in some of these techniques.
CHEM F330 : Photophysical Chemistry
Absorption of electromagnetic radiation; photophysical processes such as fluorescence, phosphorescence, non-radiative transitions, and delayed luminescence, excimer and exciplex formation; triplet state: radiative and non-radiative transitions; energy transfer, fluorescence resonance energy transfer (FRET), quenching of fluorescence; fluorescence decay; protein and DNA fluorescence; time-resolved emission spectra (TRES); time-dependent anisotropy decays; application of photophysics for the characterization of biological and bio-mimicking systems. In addition to the theory, through simple experiments, laboratory training will be imparted.
CHEM F333 : Chemistry of Materials
Solid state structure: unit cells, metallic crystal structures, polymorphism and allotropy, crystallographic directions and planes, close-packed crystal structures, polycrystalline materials, anisotropy; meso and micro porous materials: zeolites, composites, synthesis, characterization (XRD, SEM, TEM, AFM, FTIR, NMR, TGA, and DTA) and applications; ceramics and glass materials: crystalline and non-crystalline nature, glass-ceramics, processing; polymers: synthesis, structure, properties, inorganic polymers; mechanical properties: stress and strain, elastic and tensile properties, hardness, phase transformations, microstructure, alteration of mechanical properties; magnetic properties: atomic magnetism in solids, the exchange interaction, classification of magnetic materials, diamagnetism, Pauli paramagnetism, ferromagnetism, antiferromagnetism, ferrimagnetism, superparamagnetism, ferromagnetic domains, hysteresis loop, hard and soft ferrites, applications; electrical properties: conductivity, band theory, types of semiconductors, time dependence of conductivity, mobility of charge carriers, metal-metal junction, metal–semiconductor junction, n-type and p-type semiconductors; optical properties: refraction, reflection, absorption, transmission, luminescence, photoconductivity, opacity and translucency in insulators, optical fibers; thermal properties: heat capacity, thermal expansion, conductivity, thermal stresses; corrosion: electrochemistry of corrosion of metals, different forms, environmental effects, prevention.
CHEM F334 : Magnetic Resonance
Classical treatment of motion of isolated spins; quantum mechanical description of spin in static and alternating magnetic fields; Bloch equations; spin echoes; transient and steady state responses; absorption and dispersion; magnetic dipolar broadening; formal theory of chemical shifts; Knight shift; second order spin effects; spin-lattice relaxation; spin temperature; density matrix; Bloch-Wangsness-Redfield theory; adiabatic and sudden changes; saturation; spin locking; double resonance; Overhauser effect; ENDOR; pulsed magnetic resonance: Carr-Purcell sequence, phase alternation, spin-flip narrowing, real pulses; electric quadrupole effects; spin-spin coupling; 2D correlation spectroscopies: COSY, DQF, INADEQUATE experiments; CIDNP; electron paramagnetic resonance (EPR); nuclear quadrupolar resonance; muon spin resonance; magnetic resonance imaging.
CHEM F335 : Organic Chemistry and Drug Design
An introduction to organic chemistry principles and reactivities vital to drug design, drug development and drug action; the role of molecular size, shape, and charge in drug action; proteins and nucleic acids as drug targets; bioisosterism; ADME, QSAR and drug design; applied molecular modeling and combinatorial synthesis; synthesis of some selected chemotherapeutic agents (e.g. antifungal, antibacterial, antimalarial, anticancer etc.).
CHEM F336 : Nanochemistry
Nano and nature, importance of nanoscience, chemistry behind nano; instruments for characterizing nanomaterials; diversity in nanosystems: chemical aspects of metallic, magnetic and semiconducting nanomaterials, carbon nanotubes and fullerenes, self-assembled monolayers, monolayer protected metal nanomaterials, core-shell nanomaterials; applications of nanomaterials in nanobiology, nanosensors and nanomedicine; hands-on experience in laboratory.
CHEM F337 : Green Chemistry and Catalysis
Definition and overview of the twelve principles of Green Chemistry, alternative starting materials; alternative synthesis and reagents; E factor and the concept of atom economy; the role of catalysis, alternate energy sources (microwave & ultrasound), catalysis by solid acids and bases, biocatalysis, catalytic reduction, catalytic oxidation, catalytic C–C bond formation, cascade catalysis, enantioselective catalysis, alternative reaction media, renewable raw materials, industrial applications of catalysis.
CHEM F341 : Chemical Experimentation II
This course is based on laboratory experiments in the fields of inorganic, physical and analytical chemistry. Quantitative separation and determination of pairs of metal ions using gravimetric and volumetric methods; Ion exchange chromatography; Separation & estimation of metal ions using ion exchangers and solvent extraction techniques; Determination of Keq of M – L systems by colorimetry; Preparation, purification and structural studies (magnetic, electronic and IR) of inorganic complex compounds; Physical property measurements such as conductance, pH, viscosity, surface tension, refractive index, specific rotation etc. Experiments to illustrate the principles of thermodynamics, kinetics, chemical equilibrium, phase equilibrium, electrochemistry, adsorption, etc.
CHEM F342 : Organic Chemistry IV
The fundamental structural characteristics, synthesis and reaction of various heterocyclic compounds, natural products and biomolecules will be emphasized in this course. Structure, nomenclature and common reactions of heterocyclic compounds; synthesis, properties and reactions of three-, four-, five-, and six-membered ring systems; condensed five and six membered ring systems, introduction to natural products; terpenoids, steroids, lipids, alkaloids, amino acids, peptides, proteins and vitamins.
CHEM F343 : Inorganic Chemistry III
Inorganic elements in biological systems: role of alkali and alkaline earth metal ions, iron, copper and molybdenum; metalloenzymes. Metals in medicine: metal deficiency and disease; toxicity of mercury, cadmium, lead, beryllium, selenium and arsenic; biological defence mechanisms and chelation therapy. Molecular magnetic materials: trinuclear and high nuclearity compounds; magnetic chain compounds; magnetic long-range ordering in molecular compounds; design of molecular magnets. Other emerging topics in inorganic chemistry.
CHEM F412 : Photochemistry and Laser Spectroscopy
Photochemical events: absorption, fluorescence and phosphorescence; Jablonski diagrams; physical properties of molecules after photoexcitation; photochemical tools and techniques: spectrophotometers, fluorescence decay time measurement and analysis, flash photolysis; fundamental properties of laser light; principles of laser operation; description of some specific laser systems: Helium-Neon, Argon ion, CO2, Nd-YAG and ultrafast Titanium:Sapphire lasers.
CHEM F413 : Electron Correlation in Atoms and Molecules
Matrix algebra, matrix representation of operators; mean-field approach: the Hartree-Fock method - formulation, Coulomb and exchange integrals, Fock operator, second quantization, Slater rules, self-consistency, correlation energy; Brillouin's theorem, Koopmans' theorem; basis sets, restricted Hartree-Fock, Roothaan-Hall equations; unrestricted Hartree-Fock method, spin contamination; restricted open-shell Hartree-Fock method; recovery of correlation energy - time independent perturbation approach: Brillouin-Wigner and Rayleigh-Schrödinger perturbation theories; Møller-Plesset and Epstein-Nesbet partitioning of the molecular Hamiltonian, many-body perturbation theory; Feynman diagrams, connected and disconnected terms, size-consistency; recovery of correlation energy: configuration interaction and other non-perturbative approaches, variational and projection approaches for obtaining the CI ansatz, truncated CI and the size-consistency problem, Davidson correction, pair-coupled-pair theory, coupled-electron-pair method and coupled-cluster approach; density functional theory, N-representability, V-representability, Kohn-Sham approach, natural orbitals, exchange-correlation functionals, Levy functional.
CHEM F414 : Bio and Chemical Sensors
Biological and chemical recognition: reaction kinetics, signals and noise, sensitivity, specificity, selectivity; IUPAC definition of biosensors, their classification based on receptors and transducers; analytical characteristics of various types of bio and chemical sensors, performance criteria of biosensors; electrochemical, optical, thermal, piezoelectric transducer selections for immunosensors and enzyme sensors; surface functionalization of transducers, novel self assembly techniques, coupling of biomolecules on different surfaces and their characterization; thermal biosensors, enzyme thermistor; miniaturization of sensors and flow injection techniques; applications in analysis such as urea, penicillin, pesticides, cholesterol; optical biosensor mechanisms: fluorescence and chemiluminescence techniques; electrochemical biosensors: impedimetric and amperometric biosensors; electrochemical quartz crystal microbalance, applications in chemical and biological analysis; flow injection systems vs. static measurements, protein-protein interaction and quantification; principle of inhibition based biosensors for enzyme and immunoassay, pretreatment techniques in bio-analysis.
CHEM F415 : Frontiers in Organic Synthesis
Traditional and classic organic synthesis; modern synthetic strategies; systematic approach in terms of progress in reaction methodologies in synthesizing complex natural molecules; metal-catalyzed C-C and C-X couplings; direct functionalization via C-H and C-C activation; development of organocatalysis: metal-free catalysis; direct functionalization of olefins including hydroamination, hydrogenation, hydrosilylation, hydroformylation and other C-C bond forming reactions; the potential of radical chemistry for C-C and C-X bond formation; metal-catalyzed carbocyclization: from Ru and Rh-mediated cycloadditions to Pt and Au chemistry; one-pot multi-step reactions: avoiding time- and resource-consuming isolation procedures; tracing the development from the first total synthesis to the state of the art for some complex molecules.
CHEM F422 : Statistical Thermodynamics
Review of classical thermodynamics, principles of statistical thermodynamics, ensemble averages; Boltzmann distribution; partition functions and thermodynamic quantities; ideal gases and crystals; thermodynamic properties from spectroscopic and structural data; dense gases and the second virial coefficient; statistical mechanics of solutions; Bose-Einstein and Fermi-Dirac statistics.
CHEM C211 : Atomic and Molecular Structure
CHEM C222 : Modern Analytical Chemistry
Data handling and analysis; sample preparation; unit operations; volumetric and gravimetric analysis; oxidation-reduction and complexometric titrations; electroanalytical methods: potentiometry, ion selective electrodes, conductometry, polarography; separation techniques : chromatography, solvent extraction; introduction to spectroscopic methods; radiochemical methods; specific applications to problems in air and water quality analysis, toxic and trace metal estimation in biological and environmental samples.
CHEM C231 : Chemistry Project Laboratory
CHEM C232 : Chemistry of Organic Compounds
CHEM C311 : Chemical Kinetics
CHEM C321 : Chemical Thermodynamics
CHEM C322 : Quantum Chemistry
CHEM C331 : Structure and Reactivity of Organic Compounds
CHEM C332 : Synthetic Organic Chemistry
CHEM C352 : Bonding in Inorganic Compounds
CHEM C362 : Chemistry of Inorganic Compounds
CHEM C391 : Instrumental Methods of Analysis
CHEM C411 : Chemical Experimentation
CHEM C491 : Special Projects
Short-term research-based course.
• Courses for Ph.D. program
CHEM G513 : Advanced Nuclear and Radiochemistry
Nuclear stability, binding energy, properties of nucleons; Nuclear models (Shell Model, Liquid drop model), Radioactive decay characteristics, decay kinetics, α, β and γ decay, nuclear reactions, types, radiative capture, reaction cross section, theory of fission; Nuclear reactors – classification, Reactor power, Breeder reactors, Nuclear reactors in India, Reprocessing of spent fuel, Nuclear waste management (HLW, LLW and ILW); Detection and measurement of activity, GM counters, Gamma counters, Liquid Scintillation counting; Application of radioactivity, Szilard-Chalmers reaction, Isotope dilution analysis, Neutron activation analysis, Diagnostic and therapeutic applications of radionuclides, interaction of radiation with matter.
CHEM G521 : Environmental Chemistry
Energy-flows and supplies, fossil fuels, nuclear energy, nuclear waste disposal, renewable energy, industrial ecology, green chemistry, ozone chemistry, effect of SOx, NOx as pollutants, reformulated gasoline, water pollution and treatment, organochlorine and organophosphate pesticides, eco-system effects, Toxic chemicals – Effect of dioxins, polychlorinated biphenyls (PCBs) and species of metals such as lead, mercury, cadmium etc.
CHEM G531 : Recent Advances in Chemistry
The course is aimed at providing an overview of recent developments in selected areas of chemistry. Topics to be covered may be drawn from: modern theories of structure, bonding and reactivity, spectroscopy, chemical dynamics, phase transitions, surface phenomena, solid state materials, and synthetic and mechanistic organic and inorganic chemistry, or such other topics as may emerge in the development of the subject.
CHEM G541 : Chemical Applications of Group Theory
Groups, subgroups and classes: definitions and theorems; molecular symmetry and symmetry groups; representation of groups; character tables; wave functions as bases for irreducible representations; direct product; symmetry adapted linear combinations; symmetry in molecular orbital theory; hybrid orbitals; molecular orbitals of metal sandwich compounds; ligand field theory; molecular vibrations; space groups.
CHEM G551 : Advanced Organic Chemistry
Recent advances in aromatic electrophilic and nucleophilic substitution reactions and nucleophilic addition reactions; oxidation and reduction; enolates in organic synthesis; retrosynthetic analysis; multiple step synthesis; protecting groups.
CHEM G552 : Advanced Inorganic Chemistry
Advanced coordination chemistry, reactions, kinetics and mechanism; advanced organometallic chemistry, bonding models in inorganic chemistry, inorganic chains, rings, cages and clusters; group theory and its applications to crystal field theory, molecular orbital theory and spectroscopy (electronic and vibrational); inorganic chemistry in biological systems.
CHEM G553 : Advanced Physical Chemistry
Equilibrium: The laws of Thermodynamics, applications to phase equilibrium, reaction equilibrium, and electrochemistry; Structure: Principles and techniques of quantum mechanics, applications to atomic and molecular structure and spectroscopy, statistical thermodynamics, molecular interactions, macromolecules, solid state; Dynamics: Molecular motion in gases and liquids, reaction rate laws, mechanisms and rate theories of complex reactions, molecular reaction dynamics, surface processes, electron transfer dynamics.
CHEM G554 : Physical Methods in Chemistry
Advanced spectroscopic and non-spectroscopic techniques used in chemistry; topics will include electronic absorption spectroscopy of organic and inorganic compounds, ORD, CD; vibrational and rotational spectroscopy, symmetry aspects; dynamic and Fourier transform NMR, NOE, multipulse methods, two-dimensional NMR; EPR; NQR; Mössbauer spectroscopy; magnetism; ionization methods: mass spectrometry, Ion Cyclotron Resonance; photoelectron spectroscopy; microscopic techniques: TEM, STM, AFM; EXAFS, XANES; X-ray crystallography.
CHEM G555 : Chemistry of Life Processes
Synthesis and structures of biopolymers such as proteins and nucleic acids; nucleic acid replication, transcription and translation; lipids and biomembranes; transport across membranes; neurotransmission; enzymes and enzyme inhibitors; citric acid cycle, pentose phosphate pathway and nucleic acid metabolism; photosynthesis; electron transport systems in respiration and oxidative phosphorylation.
CHEM G556 : Catalysis
A comprehensive survey of the catalytic processes along with the fundamental aspects of the catalyst design and evaluation; several classes of heterogeneous industrial catalysts; their preparation, characterization and applications, recent developments in catalysis, application of nanomaterials in catalysis.
CHEM G557 : Solid Phase Synthesis and Combinatorial Chemistry
A comprehensive understanding of solid phase synthesis and combinatorial chemistry, basic principles of solid phase organic synthesis; solid phase organic synthesis strategies; introduction to combinatorial chemistry; analytical techniques in combinatorial chemistry; applications of the combinatorial approach in chemistry, drug development and biotechnology.
CHEM G558 : Electronic Structure Theory
Advanced methods in theoretical and computational chemistry based on quantum mechanics: review of mathematical background, N-dimensional complex vector spaces, linear variational problem, many electron wave functions and operators, operators and matrix elements; ab initio methods: Hartree-Fock (H-F), Configuration Interaction (CI), Many Body Perturbation Theory (MBPT); Density Functional Theory: Thomas-Fermi model, Hohenberg-Kohn theorems, derivation of Kohn-Sham equations; development and use of software for such models.
CHEM G559 : Bioinorganic Chemistry
Fundamentals of inorganic biochemistry; essential and non-essential elements in bio-systems, metalloproteins and metalloenzymes; role of metal ions in oxygen carriers, synthetic oxygen carriers, bioinorganic chips and biosensors; fixation of dinitrogen, environmental bioinorganic chemistry; transport and storage of metal ions in vivo, metal complexes as probes of structure and reactivity with metal substitution; fundamentals of toxicity and detoxification, chelating agents and metal chelates as medicines, nuclear medicines.
CHEM G561 : Heterocyclic Chemistry
The fundamental structural characteristics; synthesis and reactions of various heterocycles with nitrogen, oxygen and sulphur heteroatoms in the ring; heterocycles such as pyrrole, thiophene, furan, imidazole, thiazole, oxazole, indole, benzofuran, pyridine and quinoline; advanced synthesis and reaction mechanisms of heterocyclic compounds.
CHEM G562 : Solid State Chemistry
Basics of solid state chemistry, comprehensive survey of different synthesis techniques, properties and structure-property relationships of solid materials; introduction to special nanomaterials, ceramics, polymers, biopolymers and nanocomposites; thermal and mechanical properties of nanomaterials; nanocomposites in hydrophobic applications; recent advances in materials science.
CHEM G563 : Advanced Statistical Mechanics
Review of ensembles, fluctuations, Boltzmann statistics, quantum statistics, ideal gases and chemical equilibrium; imperfect gases; distribution function theories and perturbation theories of classical liquids; electrolyte solutions; kinetic theory of gases; continuum mechanics; Boltzmann equation; transport processes in gases and Brownian motion; introduction to time-correlation function formalism.
This follows on from:
Part 1
Point 5: The double slit experiment with an electron and a detector
Here's the experiment now:
The arena is drawn like this:
[screenshot]
As for the ball, the detector works as follows: if the electron is near enough to the detector, probability is transferred from the left (electron-undetected) arena to the right (electron-detected) arena, and vice versa.
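To make this concrete, here's a minimal toy version of such a detector coupling. To be clear, this is our own sketch with made-up parameters (detector position, coupling rate, drift), not the author's actual simulation:

```python
import numpy as np

# Two copies of a 1-D wavefunction, "undetected" and "detected", with
# amplitude rotating between them only where the electron overlaps an
# assumed detector region.
n, steps = 400, 150
x = np.linspace(-1.0, 1.0, n)
psi = np.zeros((2, n), dtype=complex)       # [0]=undetected, [1]=detected
psi[0] = np.exp(-((x + 0.5) ** 2) / 0.01)   # a lump heading for the detector
psi /= np.linalg.norm(psi)

detector = np.abs(x - 0.2) < 0.05           # assumed detector location
rate = 0.05                                 # assumed coupling strength

for _ in range(steps):
    psi[0] = np.roll(psi[0], 1)             # crude rightward drift
    psi[1] = np.roll(psi[1], 1)
    # rotate amplitude between the two arenas inside the detector region
    c, s = np.cos(rate), np.sin(rate)
    u, d = psi[0][detector].copy(), psi[1][detector].copy()
    psi[0][detector], psi[1][detector] = c*u - s*d, s*u + c*d

print((np.abs(psi[1])**2).sum())   # probability the electron was detected
```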
Here goes:
[screenshot]
At first glance, the situations with and without the detector look similar, but let's look at frame 150:
No Detector:
[screenshot: frame 150, no detector]
Yes Detector:
[screenshot: frame 150, with detector]
You can see here that if the electron is detected (right-hand arena), there's no diffraction pattern.
If the electron is not detected, there's a slight interference pattern, but it's much weaker than with no detector at all. That's because we're only interfering two possibilities: (1) the electron went through the lower slit; (2) the electron went through the upper slit but somehow escaped detection. Because the detector is reasonably likely to detect the electron, the two possibilities we're interfering aren't equally probable, and that imbalance is what weakens the interference pattern.
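To see how this works quantitatively, here is a minimal Python sketch (mine, not part of the original simulation; the 50/50 split between the slits and the detector efficiency p are assumptions): the fringe visibility in the "electron not detected" arena falls as the detector becomes more reliable.

```python
import numpy as np

# Two amplitudes interfere in the "electron not detected" arena:
# (1) through the lower slit (no detector there),
# (2) through the upper slit, having escaped detection (probability 1 - p).
def visibility(p_detect):
    a_lower = np.sqrt(0.5)                   # amplitude via the lower slit
    a_upper = np.sqrt(0.5 * (1 - p_detect))  # upper slit, escaped detection
    # Standard two-beam fringe visibility: V = 2|a1||a2| / (|a1|^2 + |a2|^2)
    return 2 * a_lower * a_upper / (a_lower**2 + a_upper**2)

for p in [0.0, 0.5, 0.9, 1.0]:
    print(f"detector efficiency {p:.1f} -> fringe visibility {visibility(p):.2f}")
```

A perfect detector (p = 1) kills the interference entirely; an unreliable one only weakens it, which matches the "slight interference pattern" seen in the simulation.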
Philosophical stuff
From here on, it's just a discussion of the philosophy of the matter. If you're not interested, congratulations, you've reached the end.
The fact that we have an interference pattern, and that it goes away if we know which slit the electron went through, leads us to a conclusion (not unavoidably, but suggestively):
The electron travels through both slits. Any "simple" truth, such as "the electron went through the lower slit", is wrong, or at least incomplete. At least on a microscopic level, both possible scenarios must have played out because the interference pattern needed a contribution from both slits.
What about consciousness?
Well ... the detector could have been a person who looks for electrons nearby. It would have been only slightly different: the simulated electron detector can flip back and forth between detection and no detection in the presence of an electron, whereas a human would not be able to forget that they'd seen an electron.
Nevertheless, the simulation would be largely the same. This, so far, means that consciousness can fit into quantum mechanics seamlessly, and without causing a collapse of the wavefunction in any way.
Conclusion: Consciousness doesn't necessarily cause a wavefunction collapse.
Maybe the wavefunction collapses anyway.
Yes, just because we can put consciousness in our picture of the world without it causing a wavefunction collapse, doesn't mean that all possible outcomes actually happen.
However, the Schrödinger equation, which has been experimentally verified, doesn't have anything in it which causes the wavefunction to collapse. Put another way, if the Schrödinger equation is all there is, all possible universes just carry on going.
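A small numerical illustration of that point (a sketch with a randomly generated Hamiltonian, not a model of any specific system): Schrödinger evolution is unitary, so the total probability across all branches stays exactly 1, and nothing in the equation makes any branch fade away.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                 # a random Hermitian "Hamiltonian"

psi = rng.normal(size=6) + 1j * rng.normal(size=6)
psi /= np.linalg.norm(psi)               # normalized initial state

for t in [0.1, 1.0, 10.0]:
    U = expm(-1j * H * t)                # Schrödinger evolution (hbar = 1)
    print(t, np.linalg.norm(U @ psi))    # always 1.0: no branch ever dies
```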
So in some ways, assuming that all possible scenarios play out in some way, for ever, is the default position.
Doesn't that raise problems?
Yes: if you say that the wavefunction never collapses, and you think that the wavefunction is "real" (whatever that means), then you're more-or-less forced into a many-worlds view of quantum mechanics: the electron is now either detected or not, and both scenarios play out until the end of the universe or for eternity (whichever is sooner), but in different parts of an increasingly complicated arena.
So what?
The problem with this is that it means that every decision you made as a kid was played out in a different part of the universal wavefunction. And many people find that "absurd": common sense says that only one possibility actually happened; the other possibilities are just hypothetical and didn't actually happen.
If we assume that common sense is *always* correct when it makes unverifiable statements about quantum mechanics, then we have a problem.
The ways round this really fall into two categories: (1) say that the wavefunction really does collapse at some point, so that only one outcome survives; or (2) keep the wavefunction but deny that it is "real", treating it merely as a guide for a single actual configuration (wave-guide, or pilot-wave, theories).

The problem with the 2nd way out is that it feels contrived, and many people consider it to be complicated or philosophically unnecessary, or problematic for a variety of reasons. It feels like a redefinition of "real". If the wavefunction affects reality, which it does even in these wave-guide theories, surely it's real by definition? I certainly use the term "real" to encompass anything that materially affects the movement of atoms, which the wavefunction does. Wave-guide theorists must surely be in the difficult position of admitting that the wavefunction affects reality while denying that it is real.
As for the 1st way out: fine, we'll just say that the wavefunction collapses, i.e. that at some point one reality is chosen as the real reality, and the probabilities of the others fade to zero.
The problem is that this would be nightmarishly difficult to describe mathematically: There are all sorts of conservation laws and symmetries that quantum mechanics has to obey (because experiment shows that there are these conservation laws and symmetries), and that makes it virtually impossible to find a satisfactory model for how the wavefunction might "choose" one possible universe to be more real than the other possible universe.
Also, if there's really a wavefunction collapse, we may be able to detect it experimentally. This actually makes collapse different from the other interpretations: you could devise a huge double slit (much bigger than the Earth, perfectly shielded from light) for people to be diffracted through, and see whether there's still an interference pattern. An actual, physical collapse is often presented as an interpretation of quantum mechanics, but some forms of it are actually a testably different theory that has yet to be verified or falsified.
So ... what's the answer?
These questions really are hard to answer. Some people think there's a real problem here and it makes them doubt all of quantum mechanics (a view I consider a bit unthinking). Others think there's no problem at all: a sentence is only really meaningful if it conveys a statement that can be falsified experimentally, and wavefunction collapse can't be falsified experimentally, so this question is a non-problem.
They're basically saying (and I largely agree), that the difference between the many-worlds universe and the common sense universe is only a language game. It's like one person saying that the sky is blue, and another saying that the sky is bleu.
Lots more people just say that surely one interpretation is "correct", and the others "incorrect", but they don't know which, or they do think they know (for philosophical or religious reasons which are typically not universally convincing) which is correct.
At the end of the day, no experiment can tell whether these alternate possibilities are real at macroscopic levels, so it's really a matter of personal preference.
Personally, I prefer the many worlds interpretation, but I agree that it's not really a meaningful question. All you need is the Schrödinger equation, and that's the only thing we can check experimentally.
6353327d65caeb03 | I often hear about the wave-particle duality, and how particles exhibit properties of both particles and waves. I most recently heard this in this video. However, I wonder: is this actually a duality? At the most fundamental level, we 'know' that everything is made up of particles, whether those are photons, electrons, or maybe even strings. Given that light, for example, also shows wave-like properties, why does that even matter? Don't we know that everything is made up of particles? In other words, wasn't Young wrong and Newton right, instead of them both being right?
"we 'know' that everything is made up out of particles, whether those are photons, electrons, or maybe even strings." Actually, we also know that those particles are properly described by a mathematical framework--path integrals--in which the wave properties are of fundamental importance. – dmckee Dec 7 '12 at 22:20
I suggest you read [dx.doi.org/10.1209/0295-5075/1/4/004 ] (Grangier, P., Roger, G., & Aspect, A. (1986). Experimental Evidence for a Photon Anticorrelation Effect on a Beam Splitter: A New Light on Single-Photon Interferences. Europhysics Letters (EPL), 1(4), 173–179.) and then try to change your mind. This is a clear experiment showing that light is neither a particle nor a wave field: it's both a particle and a wave. Best regards – FraSchelle Dec 9 '12 at 20:42
Wow, I am getting answers from both ends of the spectrum, don't know which ones are correct. – user14445 Dec 11 '12 at 13:28
The EPL paper states clearly: "This result is in contradiction with any classical wave model of light, but in agreement with a quantum description involving single-photon states". As said in my answer and in further comments, the wave model is only an approximation to the underlying model of particles; photons are quantum particles (see CERN website link). – juanrga Dec 11 '12 at 15:31
@user14445 I think Lubos gives a comprehensive exposition of what the duality means in terms of quantum mechanics. As I stress in my complementary answer, if you keep in mind that "wave" is a probability wave, not an amplitude wave the different terminologies stop being confusing. – anna v Dec 17 '12 at 15:55
7 Answers
Duality is the relationship between two entities that are claimed to be fundamentally equally important or legitimate as features of the underlying object.
The precise definition of a "duality" depends on the context. For example, in string theory, a duality relates two seemingly inequivalent descriptions of a physical system whose physical consequences, when studied absolutely exactly, are absolutely identical.
The wave-particle duality (or dualism) isn't far from this "extreme" form of duality. It indeed says that the objects such as photons (and electromagnetic waves composed of them) and electrons exhibit both wave and particle properties and they are equally natural, possible, and important.
In fact, we may say that there are two equivalent descriptions of particles – in the position basis and the momentum basis. The former corresponds to the particle paradigm, the latter corresponds to the wave paradigm because waves with well-defined wavelengths are represented by simple objects.
It's certainly not true that Young was wrong and Newton was right. Up to the 20th century, it seemed obvious that Young was more right than Newton because light indisputably exhibits wave properties, as seen in Young's experiments and interference and diffraction phenomena in general. The same wave phenomena apply to electrons that are also behaving as waves in many contexts.
In fact, the state-of-the-art "theory of almost everything" is called quantum field theory and it's based on fields as fundamental objects while particles are just their quantized excitations. A field may have waves on it and quantum mechanics just says that for a fixed frequency $f$, the energy carried in the wave must be a multiple of $E=hf$. The integer counting the multiple is interpreted as the number of particles but the objects are more fundamentally waves.
One may also adopt a perspective or description in which particles look more elementary and the wave phenomena are just a secondary property of them.
None of these two approaches is wrong; none of them is "qualitatively more accurate" than the other. They're really equally valid and equally legitimate – and mathematically equivalent, when described correctly – which is why the word "duality" or "complementarity" is so appropriate.
The wave-particle duality is an old misconception which is avoided in modern textbooks and papers. – juanrga Dec 10 '12 at 10:46
Particle physics works in the momentum representation, whereas the double slit experiment generating the interference pattern uses the position representation. It is weird that you believe that particle physics deals with "the wave paradigm". Particle physics deals with particles: quarks, electrons, photons, neutrinos... – juanrga Dec 10 '12 at 10:53
Fields are believed to be fundamental only in the old-fashioned approach, which is open to several objections: among others the fields are unobservable. As Weinberg explains in his recent textbook, the old approach "is certainly a way of getting rapidly into the subject, but it seems to me that it leaves the reflective reader with too many unanswered questions". The modern picture is developed in Weinberg textbook where "here particles come first -- they are introduced in Chapter 2"... – juanrga Dec 10 '12 at 11:19
Fields are not fundamental. Weinberg introduces them in Chapter 5 and only as a technical tool (as said before the fields are unobservable) valid for certain kind of interactions and dynamical regimes. Particles are much more fundamental than fields and that is the reason why particles are used in generalized theories beyond the scope of field theory. The equivalence or duality that you pretend is only in your imagination, not in nature :-) – juanrga Dec 10 '12 at 11:21
@user14445 it is not only me who disagrees with his answer, but all the references that I have cited: Klein, Ballentine, Weinberg, Mandl & Shaw, CERN,... – juanrga Dec 11 '12 at 15:35
Effectively, as the CERN website emphasizes:
It must be emphasized that they refer to quantum particles. A quantum particle is not a Newtonian particle. A quantum particle is not a wave. A quantum particle never behaves as a wave and this is the reason why the discipline that studies quantum particles such as electrons, quarks, or photons is named "particle physics" not "wave physics".
Your question about the wave-particle duality is well answered in the Klein site:
true wave-particle duality does not exist.
The site also reveals interesting historical details on how the incorrect beliefs in duality and complementarity were based on early misunderstandings of quantum theory plus some technological limitations of the apparatus used in early double-slit interference experiments.
Today we know that wave-particle duality does not exist and modern literature avoids the term:
The miraculous "wave-particle duality" continues to flourish in popular texts and elementary text books. However, the rate of appearance of this term in scientific works has been decreasing in recent years (the same is true for Bohr's notion of complementarity).
In fact, if a wave-particle duality existed or played a fundamental role it would be found in modern textbooks. A critic in the comments appeals to quantum field theory, but the fact is that you cannot find the term "wave-particle duality" in the indices of recent quantum field theory textbooks such as Weinberg (Volume I) or in classics such as that by Mandl & Shaw. Why? Because there is no "wave-particle duality" in nature.
You can also check the CERN scientific glossary and verify that there is no entry for, or mention of, "wave-particle duality". Why? Because there is no "wave-particle duality" in nature.
Some people believe that the wavefunctions used in some formulations of QM are real waves, but this is a mistake. A wave is a physical system which carries energy and momentum. A wavefunction is a mathematical function which cannot be observed. Wavefunctions are only an approximate way to represent the states of true quantum objects in certain formulations of QM. The quantum state of an open system cannot be represented by a wavefunction. It is not a mere question of semantics.
As the Klein site cited above clearly explains, all the quantum phenomena including interference patterns can be explained without any wave-particle duality.
One would also analyse experiments such as that of the double slit with electrons. As stated above, today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots. To obtain the statistical interference pattern you need to repeat the experiment over a period of time and superpose the results of all the individual runs in a final statistical figure.
The statistical interference pattern observed corresponds to a statistical distribution of positions of different particles at different time. There is no wave-behaviour for a single electron:
The manifestations of wave-like behavior are statistical in nature and always emerge from the collective outcome of many electron events. In the present experiment nothing wave-like is discernible in the arrival of single electrons at the observation plane. It is only after the arrival of perhaps tens of thousands of electrons that a pattern interpretable as wave-like interference emerges.
Notice that the author correctly writes "wave-like", because no real wave is detected in the experiment; only a statistical pattern is observed in the detector.
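This statistical picture is easy to reproduce numerically. Below is a minimal sketch (illustrative slit geometry and units of my own choosing, not the cited experiment): each simulated electron arrives at a single random position drawn from the Born-rule distribution, and the fringes exist only in the accumulated histogram, never in any single event.

```python
import numpy as np

# Far-field double-slit probability: two point sources a distance d apart,
# observed at angle theta, wavelength lam (all values illustrative).
lam, d = 1.0, 5.0
theta = np.linspace(-0.5, 0.5, 400)
phase = np.pi * d * np.sin(theta) / lam
prob = np.abs(np.exp(1j * phase) + np.exp(-1j * phase))**2
prob /= prob.sum()                    # Born rule -> arrival probabilities

rng = np.random.default_rng(1)
hits = rng.choice(len(theta), size=20000, p=prob)   # one electron at a time
counts = np.bincount(hits, minlength=len(theta))
# Each 'hit' is one localized spot; only 'counts' shows the fringes.
```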
@annaV wrote an excellent remark about our modern understanding of this experiment. I would add that recent advances in quantum theory allow us to compute the trajectory of each particle in the experiment. The result of a theoretical simulation of the path followed by each particle in a double slit experiment [figure omitted] predicts exactly the observed behaviour and the exact interference pattern in the double slit experiment.
Unfortunately, the development of quantum mechanics has been plagued with myths and misconceptions. I would recommend Ballentine's textbook for a rigorous and advanced treatment of quantum mechanics without old misconceptions such as "wave-particle duality":
This approach replaces the heuristic but inconclusive arguments based upon analogy and wave–particle duality, which so frustrate the serious student.
Quantum Mechanics: A Modern Development is considered one of the best textbooks today.
The mathematics of path integrals and quantum field theories has wave equations embedded in it, and particle experimenters regularly measure the interference between terms in the perturbative expansion as a tool for probing physics. The wave nature remains every bit as real as the particle nature. – dmckee Dec 8 '12 at 19:36
I agree with dmckee, the wave nature is an indispensable part of the physics. The observed behaviour of matter can only be explained through physical concepts such as spreading, diffraction and interference, all of which are associated with waves, not particles. In fact, I would go so far as to say that the real physics is mostly that of waves! The discrete particle nature only really becomes apparent during the measurement process. – Mark Mitchison Dec 8 '12 at 19:54
Sorry, but you are still just arguing on semantics. The wave might be considered (by you) just a calculational tool not a physical wave, but it's still a "wave", insofar as that word has meaning (it's the only word we have!). As I have already stated, particles are associated to measurement only. If you think that physics is just about predicting the outcomes of measurements, that's fine. Personally I take the view that physics is about understanding, and for that you require fields. If Weinberg is your main reference I recommend supplementing it with Zee for quite a different view. – Mark Mitchison Dec 9 '12 at 18:44
@MarkMitchison: Ok, but let me emphasize that it is not a question of interpretation or semantics. Change "wave" by any other word that you prefer but it continues without existing any "ket-particle duality", "wavefunction-particle duality"... Change the interpretation of QM and the wavefunction continues being unobservable. It has been shown that the fields can be eliminated completely, as Feynman first suggested; how could something unneeded be fundamental? – juanrga Dec 10 '12 at 9:57
@MarkMitchison The spacetime curvature is not real, but a mere "geometric analogy" as Weinberg calls it. This is why you can reformulate the theory in flat spacetime and describe the same physical phenomena without any curvature. One would differentiate a physical system from formal elements specific to a given model of it. – juanrga Dec 11 '12 at 16:11
I think you will be less confused by the answers if you keep clearly in mind that wave equations are specific differential equations which apply to many classical systems which have been studied for over two centuries in great detail as they applied to light and sound and fluids.
It so happened that the differential equations which first described the observed quantized behavior of the microcosm, like the Schroedinger equation, are also wave equations. That is why one talks of wave functions. But, and it is something that has to be emphasized time and time again, what the quantum mechanical solutions describe are not waves in the size of the "particle" in (x,y,z,t) but the probability of finding a "particle" at (x,y,z,t) or with a four vector (p_x,p_y,p_z,t).
The terminology "particle" which is useful in classical physics as for example in the molecules of an ideal gas, is what creates the confusion here. We should be calling them "elementary entities" which can be described as probability waves for some manifestations, as in the two slit image in Juanrga's reply here, and sometimes as particles of classical behavior, i.e having specific coordinates and specific four vectors describing their motion, for other behaviors.
[Bubble chamber photograph: electron-positron pairs.]
These electron-positron pairs appear at specific (x,y,z,t) with specific four vectors in this bubble chamber photo.
Look, the firing of sequential electrons one at a time in the double-slit experimental set-up does indeed reveal single electron detection events on the detector plate; and it is also true that after many such events a pattern emerges that is consistent with an interference pattern. Simply saying that the interference pattern results from the statistical pattern of many detection events does not explain at all why that pattern happens to be one that is consistent with wave interference!

The single detection events are indeed consistent with the particle nature of the electron, but the wave interference pattern after many such single events are accumulated is consistent with the wave nature of the electron. Rather than dismiss the wave nature of the electron, what has been described actually demonstrates quite clearly the wave-particle duality that some have attempted to deny as real.

The interference pattern must be explained exclusively in terms of particle physics if one wants to deny the wave nature of the electron, and I have not seen that yet. On the other hand, I have not yet heard an explanation for how a "probability wave" can exhibit actual physical interference if it is only a mathematical abstraction. So the wave aspect of wave-particle duality also needs to be further explained or understood.
Localization defines what most physicists would think of as particles, i.e. yes, Newton's aether, ridding nature of its inert stage. But 20th century physics still hinges on the inert stage and cannot deny that waves are at the heart of the SM. But if we can modify the mathematics, then do we get rid of waves (like someone at CERN says)? Still NO. The duality is a deep principle for a quantum world, even if the nature of waves still needs to be sorted out in quantum information theory.
Recall that Heisenberg's uncertainty principle can be derived by taking de Broglie's rule for waves-matter (wavelengths limit resolution). This use of mass is more physical than the classical one, where it is really just a parameter. (Ironically, as you know, it was Newton (and Descartes and Galileo) who initiated the confusion of the inert stage). Now we are taught to think of light waves in a 'vacuum' a la Maxwell, but this would have Newton turning in his grave. We need to think of the background spacetime emerging from the em fields. This is the modern point of view (but no one seems to understand it yet). Then waves and particles describe two distinct properties of spacetimes - one local (events) and one nonlocal (interference etc). We assume that new theories require both types of information. This is all an oversimplification but see how Newton is only right for 20th century ideas, and not beyond. So Young is still wrong in the context of the old aether, but the continuity of ideas from classical optics to QM and QFT cannot be forgotten as we pull apart the idea of wave functions. Note also that the historical experiments were very careful to demonstrate that both waves and particles are aspects of underlying nature - and our weak understanding.
Where is de Broglie now, then? The uncertainty principle in string theory uses deep mathematical dualities (STU). In principle it comes from a modified de Broglie principle (I don't know a good ref, sorry). This goes far beyond the original WPD, but I think it highlights the importance of WPD. An event is not just a point of classical spacetime (because this is unphysical in a theory with uncertainty), so WPD is in some sense the best idea we have for building spacetime states from both local and non-local information.
Whilst everything is made up of particles, they are not your typical "billiard ball" particles because they have a phase.
The consequence of this is that they demonstrate examples of interference when adequately set up. For example:
• In the double slit experiment, particles hit the screen according to interference patterns instead of simple scatterings
• In an atom, electrons are bound to specific orbitals which correspond to their resonant frequencies
and many more.
Ballentine textbook, cited in my answer, devotes several sections to show how the identification of a single particle with a wave-packet or with a wavefunction gives contradictions and experimental discrepancies. – juanrga Dec 10 '12 at 10:38
@juanrga which sections exactly please? – Revo Nov 29 '13 at 2:23
@Revo Check chapter 9. – juanrga Mar 13 at 12:16
Your perception of reality is based on intuition developed in the everyday world. Don't apply it to understand the quantum world.
All the denizens of the quantum realm are something we haven't yet understood fully. They are neither particles nor waves... they are something else. Our everyday languages don't have words to name these kinds of things.
Young's double-slit experiment says that they are waves (the double-slit experiment can also be performed with atoms, electrons etc., not just light). Compton scattering and the photoelectric effect say that they are particles. Combining the results of all valid experiments, they possess properties of both wave and particle at the same time. Common sense can deny that, but it's true.
The modern version of Young's double-slit experiment:
In case you don't know: when light from the same source is passed through two parallel slits, an interference pattern like a barcode is formed on the second screen. It's like water-wave interference.
[Animated GIF of Young's double-slit experiment]
In the modern version of the experiment, sensitive detectors are placed at many places on the second screen to count the arrival of photons. The results are interesting: they are the same as Young's original result. A white band gets a very high number of photons and a black band gets almost no photons. But the problem is: interference is a property of waves. How can it arise in a particle model? There's no coordination between photons; they are fully alone. How can a photon know where its fellow photon will land? Well, the solution to this is somewhat tricky. See below.
To visualize the concept of duality more clearly, look at the modern explanation of Young's double-slit experiment with the Schrödinger Equation:
Light reveals itself either as a stream of particles or as a wave. We don't see both sides of the coin at the same time. So, when we observe light as a stream of particles, there is no wave in existence to inform those particles about how to behave, and vice versa. To solve the problem, Erwin Schrödinger proposed an idea (physicists laughed at it initially, but it became a game-changer for the whole of physics). He imagined an abstract mathematical wave that spread through space, encountering obstacles and being reflected and transmitted, just like a water wave spreading on a pond. In places where the height of the wave was large, the probability of finding a particle was highest, and in locations where it was small, the probability was lowest.
With the probability wave described by the Schrödinger equation, one can see this extraordinary property of the photon: since the photon can be transmitted either through slit 1 or through slit 2, the Schrödinger equation must permit the existence of two waves, one corresponding to the photon going through slit 1 and another corresponding to the photon going through slit 2. Nothing surprising here. However, if two waves are permitted to exist, a superposition of them is also permitted to exist. For waves at sea such a combination is nothing out of the ordinary. But here the combination corresponds to something extraordinary: the photon being transmitted through both slits simultaneously!
The same is true for any other denizen of the quantum world. It means that an atom, electron etc. can exist in more than one place at once and do multiple things at once (the foundation of the upcoming quantum computer). If you see the particle model this way (which is 100% correct), your common sense won't reject the wave model.
The image of the two slit experiment that Juanrga shows in his answer has been taken according to the wikipedia article with particle detectors at the slits en.wikipedia.org/wiki/… .quote "And in 2012, researchers finally succeeded in correctly identifying the path each particle had taken without any adverse effects at all on the interference pattern generated by the particles." This shows that the interference is pure probability, and the particle passes through one slit whole. – anna v Dec 9 '12 at 21:20
I would add that there is novel theoretical techniques based in Bohmian mechanics that allow the computation of the trajectory of each particle in the double slit experiment. – juanrga Dec 9 '12 at 22:23
@anna Interesting.. Looks like I am running little outdated. I am happy to know Quantum Decoherence isn't an issue. – Sachin Shekhar Dec 10 '12 at 4:53
0fadf2ee816e3b97 | Optical frequency combs and frequency comb spectroscopy
Optical frequency combs and
frequency comb spectroscopy
Frequency Combs: A revolution in measuring
“for their contributions to the
development of laser-based
precision spectroscopy including the
optical frequency comb technique”
Nobel 2005
J. Hall T.W. Hänsch
Wim Ubachs TULIP Summer School IV 2009
Noordwijk, April 15-18
On Pulsed and Continuous wave lasers
A laser consists mainly of a gain medium and an optical cavity:
Consider from time and frequency domain perspectives
Modelocking a laser
Basic idea:
build a laser cavity that is low-loss for intense pulses,
but high-loss for low-intensity continuous beam
Intracavity saturable absorber, or Kerr-lensing:
• Intensity-dependent refractive index: n = n0 + nKerr I
• Gaussian transverse intensity profile leads to a refractive index gradient, resembling a lens!
A laser running on multiple modes: a pulsed laser
lasers with “mode-locking”
f = f_a
f = f_a + Δf
f = f_a + 2Δf
f = f_a + 3Δf
f = f_a + 4Δf
And so forth: add 30 waves:
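A short Python sketch of this construction (arbitrary illustrative frequencies, not values from the lecture): summing N phase-locked modes spaced by Δf turns a continuous field into a train of short pulses repeating every 1/Δf.

```python
import numpy as np

f_a, df, N = 100.0, 1.0, 30      # base frequency, mode spacing, mode count
t = np.linspace(0.0, 3.0, 20000)

# All modes are in phase at t = 0, 1/df, 2/df, ...: they add constructively
# there and nearly cancel in between, producing a pulse train.
field = sum(np.cos(2 * np.pi * (f_a + n * df) * t) for n in range(N))

envelope = np.abs(field)
# The strongest peaks sit at t = 0, 1.0, 2.0, ... (i.e. every 1/df):
print(t[envelope > 0.95 * envelope.max()])
```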
Ultrafast lasers
Pulsing back and forth inside the cavity
Ultrafast lasers
Fourier principle for short pulses
Time Domain:
Short pulse
Spectral Domain:
Wide spectrum
Frequency comb principle
Time Domain:
Pulse train
Spectral domain:
‘Comb-like’ spectrum
Many narrow-band, well-defined frequencies
Some math: Propagation of a single pulse (described as a wave packet)
$E(t, z) = \int E(\omega)\, e^{ik(\omega) z}\, e^{-i\omega t}\, d\omega$

Insert an inverse Fourier transform E(τ) for E(ω):

$E(t, z) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E(\tau)\, e^{i\omega\tau}\, d\tau\, e^{ik(\omega) z}\, e^{-i\omega t}\, d\omega = \int_{-\infty}^{\infty} E(\tau)\, G(t - \tau, z)\, d\tau$

Propagator: $G(t - \tau, z) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i(\omega(t - \tau) - k(\omega) z)}\, d\omega$
Propagation of the field
This can be used with $k(\omega) = k_0 + \frac{dk}{d\omega}\Big|_{\omega_l} (\omega - \omega_l) + O(k^2)$:

$E(t, z) = \exp\!\left[ i\omega_l \left( \frac{1}{v_g} - \frac{1}{v_\phi} \right) z \right] E\!\left( t - \frac{z}{v_g} \right)$
The difference between group and phase velocity causes an extra phase when traveling through a dispersive medium: the carrier-envelope phase continuously changes.
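As a rough numerical illustration of that phase slip (all values below are assumed, ballpark numbers for a near-infrared pulse in glass; they are not from the slides):

```python
import math

w_l = 2.36e15                  # carrier angular frequency, rad/s (~800 nm)
v_phi, v_g = 2.06e8, 2.04e8    # illustrative phase and group velocities, m/s
z = 1e-3                       # 1 mm of dispersive material

# Extra carrier-envelope phase accumulated over distance z:
dphi = w_l * (1.0 / v_g - 1.0 / v_phi) * z
print(dphi % (2 * math.pi))    # the carrier slips under the envelope
```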
Some math: Propagation of multiple pulses in a train
$E(t) = \sum_{n=0}^{N-1} E_{\rm single}(t - nT)$

where $T$ is the time delay between pulses.
$E_{\rm train}(\omega) = E_{\rm single}(\omega) \sum_{n=0}^{N-1} e^{-in\omega T} = E_{\rm single}(\omega)\, \frac{1 - e^{-iN\omega T}}{1 - e^{-i\omega T}}$

$I_{\rm train}(\omega) = I_{\rm single}(\omega)\, \frac{\sin^2(N\omega T / 2)}{\sin^2(\omega T / 2)}$; in the limit $N \to \infty$, $I_{{\rm train},\infty}(\omega) = I_{\rm single}(\omega) \sum_{n=0}^{\infty} \delta(\omega T - 2\pi n)$

With dispersion (carrier-envelope phase shift $\phi_{CE}$): $I_{{\rm train},\infty}(\omega) = I_{\rm single}(\omega) \sum_{n=0}^{\infty} \delta(\omega T - 2\pi n - \phi_{CE})$
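A quick numerical check of the geometric-series identity used above (a sketch; N, T and the frequency grid are arbitrary, and the grid avoids the zeros of sin(ωT/2)):

```python
import numpy as np

N, T = 10, 1.0
w = np.linspace(0.3, 6.0, 500)   # stays clear of w*T = 0 and 2*pi

# Direct sum over N pulses vs. the closed-form sin^2 ratio:
direct = np.abs(np.exp(-1j * np.outer(np.arange(N), w) * T).sum(axis=0))**2
closed = np.sin(N * w * T / 2)**2 / np.sin(w * T / 2)**2
assert np.allclose(direct, closed)
# As N grows, the peaks at w*T = 2*pi*n sharpen toward delta functions.
```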
Frequency comb principle
[Figure: pulse trains with carrier-envelope phase ϕceo = π, π/2, 0]

Two RF frequencies determine the entire optical spectrum!

$f_{\rm ceo} = (\Delta\phi_{\rm ceo} / 2\pi)\, f_{\rm rep}$, $f_{\rm rep} = 1/T$

$f = n\, f_{\rm rep} + f_{\rm ceo}$ (tested to the $<10^{-19}$ level)
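Plugging in typical numbers shows the leverage of this relation: two countable RF frequencies pin down an optical frequency. A minimal sketch (the values are illustrative assumptions):

```python
f_rep = 100e6            # repetition rate, Hz (RF, directly countable)
f_ceo = 20e6             # carrier-envelope offset frequency, Hz (RF)
n = 3_750_000            # comb mode number

f_n = n * f_rep + f_ceo  # an optical frequency, known to RF precision
print(f"{f_n / 1e12:.6f} THz")   # -> 375.000020 THz
```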
Stabilization of frep
Both frep and fceo are in the radio-frequency domain and can be detected using RF electronics.
Measuring frep is straightforward: Counting
Detection of fceo
Measuring fceo is more difficult: it requires producing a beat signal between a high-frequency comb mode and the second harmonic (SHG) of a low-frequency comb mode.
f:2f interferometer
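The arithmetic behind the f:2f trick, as a tiny sketch (mode number and RF values invented for illustration): frequency-double a mode from the low-frequency wing and beat it against the mode with twice the index; the mode number cancels and only f_ceo survives.

```python
f_rep, f_ceo, n = 100e6, 20e6, 2_000_000   # illustrative values

low = n * f_rep + f_ceo            # comb mode in the low-frequency wing
high = 2 * n * f_rep + f_ceo       # comb mode at twice the frequency
beat = 2 * low - high              # SHG of 'low' beaten against 'high'
print(beat == f_ceo)               # True: n drops out, leaving f_ceo
```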
Supercontinuum generation
This f-to-2f detection scheme requires an octave-wide spectrum
spectral broadening in nonlinear medium
Photonic crystal fiber:
Detection of fceo
f : 2f
Beat-note measurement
(frequency counter)
Stabilization of fceo
The f-to-2f interferometer output is used in a feedback loop.
An AOM controls the pump power to stabilize fceo
Scanning of frep
Linear cavity required for long-range scanning
Multiple reflections on single mirror to increase scan range
Scan range determined by:
– Cavity stability range
– Alignment sensitivity
A frequency comb as a calibration tool for a "spectroscopy laser"
The frequency of a laser can be directly determined by beating it with the nearest frequency comb mode:
$f_{\rm laser} = n\, f_{\rm rep} + f_{\rm ceo} + f_{\rm beat}$
Cf: Hänsch and co-workers: atomic hydrogen
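A hedged sketch of how this calibration works in practice: a coarse wavemeter reading (good to better than f_rep/2) fixes the integer mode number n, after which the counted RF values give the absolute optical frequency. All numbers below are invented, and the beat note's sign is assumed positive:

```python
f_rep, f_ceo = 250e6, 35e6        # locked RF parameters of the comb, Hz
f_beat = 42e6                     # counted beat with the nearest comb mode
f_wavemeter = 473.612e12          # coarse estimate of the laser frequency

n = round((f_wavemeter - f_ceo - f_beat) / f_rep)   # integer mode number
f_laser = n * f_rep + f_ceo + f_beat                # absolute frequency
print(n, f_laser)
```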
Direct frequency comb spectroscopy
Full control over pulse timing
Cf: Ramsey spectroscopy, atomic fountain clocks
QM analysis of pulse sequences
Wave function of two-level atom:
From the Schrödinger equation, with some approximations (dipole, rotating wave), the upper-state density can be calculated for a two-pulse sequence:
T is time between pulses
φ is difference in fceo between pulses
For N pulses:
Excited state population, N = 3: "the comb superimposed onto the atom"
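A minimal numerical sketch of this N-pulse result (assuming weak, identical pulses so that first-order perturbation theory applies, and units where T = 1): the excited-state amplitude is a sum of N equal contributions, one per pulse, so the population peaks wherever the phase accumulated per pulse, ωT + φ, is a multiple of 2π; exactly "the comb superimposed onto the atom".

```python
import numpy as np

def population(w, N=3, T=1.0, phi=0.0):
    """Relative excited-state population after N weak pulses."""
    n = np.arange(N)[:, None]
    amp = np.exp(1j * n * (w * T + phi)).sum(axis=0)  # one term per pulse
    return np.abs(amp)**2 / N**2                      # normalized to 1

w = np.linspace(0.0, 4 * np.pi, 1000)   # detuning, in units where T = 1
P = population(w, N=3)
# P has maxima at w*T + phi = 2*pi*k; more pulses -> sharper comb teeth.
```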
Feasibility experiment in deep-UV (Kr atom)
With amplification in Titanium:Sapphire
(Amplification == Phase control)
[Level scheme: ionization limit; 4p5 5p [1/2]0 state, τ = 23 ns; two-photon excitation at 2 × 212 nm, ionization with 532 nm; pulse timings 13.3 ns and 60 ns]
Problem with frequency comb calibration: mode ambiguity
84Kr: 4p6 – 4p5 5p [1/2]0
3.5 MHz accuracy with THz-bandwidth laser pulses
Combs in the VUV and beyond
Harmonic conversion
[Diagram: IR → UV → DUV → VUV → XUV via harmonic conversion]
frequency comb = high power pulses = 'easy' harmonic generation
combination of high peak power and accuracy
Combs in the VUV and beyond
Comb is retained in harmonics due to pulse structure
Phase control/measurement is the crucial issue
Measurements at the 7th harmonic (of Ti:Sa)
Probing Xe (5p6 – 5p5 5d) at 125 nm (vacuum ultraviolet frequency comb)
Phase stability (between pulses) in the VUV
(effect on relative phase)
O2 pressure dependence: −0.12(0.29) mrad/mbar = −1.5(3.4) kHz/mbar
UV pulse-energy dependence: −8.7(5.8) mrad/μJ = −104(70) kHz/μJ
Novel development:
Miniaturisation of frequency comb lasers
[Photo: mode-locked diode laser chip (InP quantum dot material, ~1 cm) contacted with needle probes for I-V measurement]
Result from hybrid modelocking